Mozilla Open Policy & Advocacy Blog: Data localization: bad for users, business, and security |
Mozilla is deeply concerned by news reports that India’s first data protection law may include data localization requirements. Recent leaks suggest that the Justice Srikrishna Committee, the group charged by the Government of India with developing the country’s first data protection law, is considering requiring companies subject to the law to store critical personal data within India’s borders. A data localization mandate would undermine user security, harm the growth and competitiveness of Indian industry, and potentially burden relations between India and other countries. We urge the Srikrishna Committee and the Government of India to exclude such a mandate from the forthcoming legislative proposal.
Security Risks
Locating data within a given jurisdiction does not in itself convey any security benefits; rather, the law should require data controllers to strictly protect the data that they’re entrusted with. One only has to look to the recurring breaches of the Aadhaar demographic data to understand that storing data locally does not, by itself, keep data protected (see here, here and here). Until India has a data protection law and demonstrates robust enforcement of that law, it’s difficult to see how storing user data in India would be the most secure option.
In Puttaswamy, the Supreme Court of India unequivocally stated that privacy is a fundamental right, and put forth a proportionality standard that has profound implications for government surveillance in India. We respectfully recommend that if Indian authorities are concerned about law enforcement access to data, then a legal framework for surveillance with appropriate protections for users is a necessary first step. This would provide the lawful basis for the government to access data necessary for legal proceedings. A data localization mandate is an insufficient instrument for ensuring data access for the legitimate purposes of law enforcement.
Economic and Political Harms
A data localization mandate may also harm the Indian economy. India is home to many inspiring companies that are seeking to move beyond India’s generous borders. Requiring these companies to store data locally may thwart this expansion, and may introduce a tax on Indian industry by requiring them to maintain the legal and technical regimes of multiple jurisdictions.
Most Indian companies handle critical personal data, so even data localization for just this data subset could harm Indian industry. Such a mandate would force companies to use potentially cost-inefficient data storage and prevent them from using the most effective and efficient routing possible. Moreover, the Indian outsourcing industry is predicated on the idea of these firms being able to store and process data in India, and then transfer it to companies abroad. A data localization mandate could pose an existential risk to these companies.
At the same time, if India imposes data localization on foreign companies doing business in India, other countries may impose reciprocal data localization policies that force Indian companies to store user data within that country’s jurisdictional borders, leading to legal conflict and potential breakdown of trade.
Data Transfer, Not Data Localization
There are better alternatives to ensuring user data protection. Above all, obtaining an adequacy determination from the EU would both demonstrate commitment to a global high standard of data protection, and significantly benefit the Indian economy. Adequacy would allow Indian companies to more easily expand overseas, because they would already be compliant with the high standards of the GDPR. It would also open doors to foreign investment and expansion in the Indian market, as companies who are already GDPR-compliant could enter the Indian market with little to no additional compliance burden. Perhaps most significantly, this approach would make the joint EU-India market the largest in the world, thus creating opportunities for India to step into even greater economic global leadership.
If India does choose to enact data localization policies, we strongly urge it to also adopt provisions for transfer via Binding Corporate Rules (BCRs). This model has been successfully adopted by the EU, which allows for data transfer upon review and approval of a company’s data processing policies by the relevant Data Protection Authority (DPA). Through this process, user rights are protected, data is secured, and companies can still do business. However, adequacy offers substantial benefits over a BCR system. By giving all Indian companies the benefits of data transfer, rather than requiring each company to individually apply for approval from a DPA, Indian industry will likely be able to expand globally with fewer policy obstacles.
Necessity of a Strong Regulator
Whether considering user security or economic growth, data localization is a weak tool when compared to a strong data protection framework and regulator.
By creating strong incentives for companies to comply with data use, storage, and transfer regulations, a DPA that has enforcement power will get better data protection results than data localization, and won’t harm Indian users, industry, and innovation along the way. We remain hopeful that the Srikrishna Committee will craft a bill that centers on the user — this means strong privacy protections, strong obligations on public and private-sector data controllers, and a DPA that can enforce rules on behalf of all Indians.
The post Data localization: bad for users, business, and security appeared first on Open Policy & Advocacy.
https://blog.mozilla.org/netpolicy/2018/06/22/data-localization-india/
|
Support.Mozilla.Org: State of Mozilla Support: 2018 Mid-year Update – Part 1 |
As you may have heard, Mozilla held one of its biannual All Hands meetings, this time in San Francisco. The support.mozilla.org Admin team was there as well, along with several members of the support community.
The All Hands meetings are meant to be gatherings summarizing the work done and the challenges ahead. San Francisco was no different from that model. The four days of the All Hands were full of things to experience and participate in. Aside from all the plenary and “big stage” sessions – most of which you should be able to find at Air Mozilla soon – we also took part in many smaller (formal and informal) meetings, workshops, and chats.
By the way, if you watch Denelle’s presentation, you may hear something about Mozillians being awesome through helping users ;-).
This is the first in a series of posts summarizing what we talked about regarding support.mozilla.org, together with many (maaaaaany) pages of source content we have been working on and receiving from our research partners over the last few months.
We will begin with a summary of quite a heap of metrics, as delivered to us by the analytics and research consultancy from Copenhagen – Analyse & Tal (Analysis & Numbers). You can find all the (105!) pages here, but you can also read the summary below, which captures the most important information.
The A&T team used descriptive statistics (to tell a story using numbers) and network analysis (emphasizing interconnectedness and correlations), taking information from the 11 years of data available in Kitsune’s databases and 1 year of Google Analytics data.
Almost all perspectives of the analysis brought to the spotlight the amount of work contributed and the dedication of numerous Mozillians over many years. It’s hard to overstate the importance of that for Mozilla’s mission and continuous presence and support for the millions of users of open source software who want an open web. We are all equally proud and humbled that we can share this voyage with you.
As you can imagine, analyzing a project as complex and stretched in time as Mozilla’s Support site is quite challenging and we could not have done it without cooperation with Open Innovation and our external partners.
Most of the findings in the report support many anecdotal observations we have had, giving us a very powerful set of perspectives grounded in over 7 years’ worth of data. Based on the analysis, we are able to create future plans for our community that are more realistic and based on facts.
The A&T team provided us with a list of their recommendations:
Taking our own interpretation of the data analysis and the A&T recommendations into account, over the next few weeks we will be outlining more plans for the second half of the year, focusing on areas like:
As always, thank you for your patience and ongoing support of Mozilla’s mission. Stay tuned for more post-All Hands mid-year summaries and updates coming your way soon – and discuss them in the Contributors or Discourse forum threads.
https://blog.mozilla.org/sumo/2018/06/22/state-of-mozilla-support-2018-mid-year-update-part-1/
|
Mozilla VR Blog: This Week in Mixed Reality: Issue 10 |
Last week, the team was in San Francisco for an all-Mozilla company meeting.
This week the team is focusing on adding new features, making improvements and fixing bugs.
We are all hands on deck building more components and adding new UI across Firefox Reality:
Here is a preview we showed off of the skybox support and some of the new UX/UI:
We are continuing to provide a better experience across Hubs by Mozilla:
Join our public WebVR Slack #social channel to participate in the discussion!
Found a critical bug? File it in our public GitHub repo or let us know on the public WebVR Slack #unity channel and as always, join us in our discussion!
Stay tuned for new features and improvements across our three areas!
|
Mozilla Open Policy & Advocacy Blog: Parliament adopts dangerous copyright proposal – but the battle continues |
On 20 June the European Parliament’s legal affairs committee (JURI) approved its report on the copyright directive, sending the controversial and dangerous copyright reform into its final stages of lawmaking.
Here is a statement from Raegan MacDonald, Mozilla’s Head of EU Public Policy:
“This is a sad day for the Internet in Europe. Lawmakers in the European Parliament have just voted for a new law that would effectively impose a universal monitoring obligation on users posting content online. As bad as that is, the Parliament’s vote would also introduce a ‘link tax’ that will undermine access to knowledge and the sharing of information in Europe.
It is especially disappointing that just a few weeks after the entry into force of the GDPR – a law that made Europe a global regulatory standard bearer – Parliamentarians have approved a law that will fundamentally damage the Internet in Europe, with global ramifications. But it’s not over yet – the final text still needs to be signed off by the Parliament plenary on 4 July. We call on Parliamentarians, and all those who care for an internet that fosters creativity and competition in Europe, to overturn these regressive provisions in July.”
Article 11 – where press publishers can demand a license fee for snippets of text online – passed by a slim majority of 13 to 12. The provision mandating upload filters for copyright content, Article 13, was adopted 15 to 10.
Mozilla will continue to fight for copyright that suits the 21st century and fosters creativity and competition online. We encourage anyone who shares these concerns to reach out to members of the European Parliament – you can call them directly via changecopyright.org, or tweet and email them at saveyourinternet.eu.
The post Parliament adopts dangerous copyright proposal – but the battle continues appeared first on Open Policy & Advocacy.
|
Mozilla Reps Community: Rep of the Month – May 2018 |
Please join us in congratulating Prathamesh Chavan, our Rep of the Month for May 2018!
Prathamesh is from Pune, India and works as a Technical Support Engineer at Red Hat. From his very early days in the Mozilla community, Prathamesh used his excellent people skills to spread the community to different colleges and to evangelise many of the upcoming projects, products and Mozilla initiatives. Prathamesh is also a very resourceful person. Thanks to this, he did a great job of organizing some great events in Pune and creating many new Mozilla Clubs across the city.
As a Mozilla Reps Council member, Prathamesh has done some great work and has shown great leadership skills. He is always proactive in sharing important updates with the bigger community as well as raising his hand at every new initiative.
Thanks Prathamesh, keep rocking the Open Web!
Please congratulate him by heading over to the Discourse topic.
https://blog.mozilla.org/mozillareps/2018/06/22/rep-of-the-month-may-2018/
|
The Firefox Frontier: Open source isn’t just for software: Opensourcery recipe |
Firefox is one of the world’s most successful open source software projects. This means we make the code that runs Firefox available for anyone to modify and use so long … Read more
The post Open source isn’t just for software: Opensourcery recipe appeared first on The Firefox Frontier.
https://blog.mozilla.org/firefox/open-source-isnt-just-for-software-opensourcery-recipe/
|
K Lars Lohn: Things Gateway - the RESTful API and the Tide Light |
import aiohttp
import async_timeout

# Excerpt from the tide light code: push a new color to the bulb by PUTting to
# its "color" property via the Things Gateway REST API. Wrapped in a coroutine
# here (the function name is only for illustration) so the fragment stands alone.
async def set_bulb_color(thing_id, things_gateway_auth_key, a_color, seconds_for_timeout):
    async with aiohttp.ClientSession() as session:
        async with async_timeout.timeout(seconds_for_timeout):
            async with session.put(
                "http://gateway.local/things/{}/properties/color".format(thing_id),
                headers={
                    'Accept': 'application/json',
                    'Authorization': 'Bearer {}'.format(things_gateway_auth_key),
                    'Content-Type': 'application/json'
                },
                data='{{"color": "{}"}}'.format(a_color)
            ) as response:
                return await response.text()
$ curl -H "Authorization: Bearer XDZkRTVK2fLw...IVEMZiZ9Z" \
-H "Accept: application/json" --insecure \
http://gateway.local/things | json_pp | vim -
{
"properties" : {
"color" : {
"type" : "string",
"href" : "/things/zb-0017880103415d70/properties/color"
},
"on" : {
"type" : "boolean",
"href" : "/things/zb-0017880103415d70/properties/on"
}
},
"type" : "onOffColorLight",
"name" : "Tide Light",
"links" : [
{
"rel" : "properties",
"href" : "/things/zb-0017880103415d70/properties"
},
{
"href" : "/things/zb-0017880103415d70/actions",
"rel" : "actions"
},
{
"href" : "/things/zb-0017880103415d70/events",
"rel" : "events"
},
{
"rel" : "alternate",
"href" : "/things/zb-0017880103415d70",
"mediaType" : "text/html"
},
{
"rel" : "alternate",
"href" : "ws://gateway.local/things/zb-0017880103415d70"
}
],
"description" : "",
"href" : "/things/zb-0017880103415d70",
"actions" : {},
"events" : {}
}
Item | What's it for? | Where I got it |
---|---|---|
A Raspberry Pi running the Things Gateway with the associated hardware from Part 2 of this series. | This is the base platform that we'll be adding onto | From Part 2 of this series |
DIGI XStick | This allows the Raspberry Pi to talk the ZigBee protocol - there are several models, make sure you get the XU-Z11 model. | The only place that I could find this was Mouser Electronics |
Philips Hue White & Color Ambiance bulb | This will be the Tide Light. Set up one with a HUE Bridge with instructions from Part 4 of this series or independently from Part 5 of this series. | Home Depot |
Weather Underground developer account | This is where the tide data comes from. | The developer account is free and you can get one directly from Weather Underground. |
a computer with Python 3.6 | My tide_light.py code was written with Python 3.6. The RPi that runs the Things Gateway has only 3.5. To run my code, you'll need to either install 3.6 on the RPi or run the tide light on another machine. | My workstation has Python 3.6 by default |
Step 2: Recall that the Tide Light is to reflect the real time tide level at some configurable location. You need to select a location. Weather Underground doesn't supply tide data for every location. Unfortunately, I can't find a list of the locations for which they do supply information. You may have to do some trial and error. I was lucky and found good information on my first try: Waldport, OR.
$ sudo pip3 install configman
$ sudo pip3 install webthing
$ git clone https://github.com/twobraids/pywot.git
$ cd pywot
$ export PYTHONPATH=$PYTHONPATH:$PWD
$ cd demo
$ cp tide_light_sample.ini tide_light.ini
# the name of the city (use _ for spaces)
city_name=INSERT CITY NAME HERE
# the two letter state code
state_code=INSERT STATE CODE HERE
# the id of the color bulb to control
thing_id=INSERT THING ID HERE
# the api key to access the Things Gateway
things_gateway_auth_key=INSERT THINGS GATEWAY AUTH KEY HERE
# the api key to access Weather Underground data
weather_underground_api_key=INSERT WEATHER UNDERGROUND KEY TOKEN HERE
$ ./tide_light.py --admin.conf=tide_light.ini
$ ./tide_light.py --admin.conf=tide_light.ini --city=crescent_city --state_code=CA
http://www.twobraids.com/2018/06/things-gateway-restful-api-and-tide.html
|
Mozilla Addons Blog: Add-ons at the San Francisco All Hands Meeting |
Last week, more than 1,200 Mozillians from around the globe converged on San Francisco, California, for Mozilla’s biannual All Hands meeting to celebrate recent successes, learn more about products from around the company, and collaborate on projects currently in flight.
For the add-ons team, this meant discussing tooling improvements for extension developers, reviewing upcoming changes to addons.mozilla.org (AMO), sharing what’s in store for the WebExtensions API, and checking in on initiatives that help users discover extensions. Here are some highlights:
During a recent survey, participating extension developers noted two stand-out tools for development: web-ext, a command line tool that can run, lint, package, and sign an extension; and about:debugging, a page where developers can temporarily install their extensions for manual testing. There are improvements coming to both of these tools in the coming months.
In the immediate future, we want to add a feature to web-ext that would let developers submit their extensions to AMO. Our ability to add this feature is currently blocked by how AMO handles extension metadata. Once that issue is resolved, you can expect to see web-ext support a submit command. We also discussed implementing a create command that would generate a standard extension template for developers to start from.
Developers can currently test their extensions manually by installing them through about:debugging. Unfortunately, these installations do not persist once the browser is closed or restarted. Making these installations persistent is on our radar, and now that we are back from the All Hands, we will be looking at developing a plan and finding resources for implementation.
During the next three months, the AMO engineering team will prioritize work around improving user rating and review flows, improving the code review tools for add-on reviewers, and converting dictionaries to WebExtensions.
Engineers will also tackle a project to ensure that users who download Firefox because they want to install a particular extension or theme from AMO are able to successfully complete the installation process. Currently, users who download Firefox from a listing on AMO are not returned to AMO when they start Firefox for the first time, making it hard for them to finish installing the extension they want. By closing this loop, we expect to see an increase in extension and/or theme installations.
Several new and enhanced APIs have landed in Firefox since January, and more are on their way. In the next six months, we anticipate landing WebExtensions APIs for clipboard support, bookmarks and session management (including bookmark tags and further expansions of the theming API).
Additionally, we’ll be working towards supporting visual overlays (like notification bars, floating panels, popups, and toolbars) by the end of the year.
This year, we are focusing on helping Firefox users find and discover great extensions quickly. We have made a few bets on how we can better meet user needs by recommending specific add-ons. In San Francisco, we checked in on the status of projects currently underway:
In May, we started testing recommendations on listing pages for extensions commonly co-installed by other users.
Results so far have shown that people are discovering and installing more relevant extensions from these recommendations than the control group, who only sees generally popular extensions. We will continue to make refinements and fully graduate it into AMO in the second half of the year.
(For our privacy-minded friends: you can learn more about how Firefox uses data to improve its products by reading the Firefox Privacy Notice.)
We want to make users aware of the benefits of customizing their browser soon after installing Firefox. We’re currently testing a few prototypes of a new onboarding flow.
We have more projects to improve extension discovery and user satisfaction on our Trello.
Are you interested in contributing to the add-ons ecosystem? Check out our wiki to see a list of current contribution opportunities.
The post Add-ons at the San Francisco All Hands Meeting appeared first on Mozilla Add-ons Blog.
https://blog.mozilla.org/addons/2018/06/21/add-ons-at-the-san-francisco-all-hands-meeting/
|
Air Mozilla: Reps Weekly Meeting, 21 Jun 2018 |
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
|
Air Mozilla: The Joy of Coding - Episode 142 |
mconley livehacks on real Firefox bugs while thinking aloud.
|
Air Mozilla: Weekly SUMO Community Meeting, 20 Jun 2018 |
This is the SUMO weekly call
https://air.mozilla.org/weekly-sumo-community-meeting-20180620/
|
Botond Ballo: Trip Report: C++ Standards Meeting in Rapperswil, June 2018 |
Project | What’s in it? | Status |
C++17 | See list | Published! |
C++20 | See below | On track |
Library Fundamentals TS v2 | source code information capture and various utilities | Published! Parts of it merged into C++17 |
Concepts TS | Constrained templates | Merged into C++20 with some modifications |
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Approved for publication! |
Transactional Memory TS | Transaction support | Published! Not headed towards C++20 |
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it merged into C++20, more on the way
Executors | Abstraction for where/how code runs in a concurrent context | Final design being hashed out. Ship vehicle not decided yet. |
Concurrency TS v2 | See below | Under development. Depends on Executors. |
Networking TS | Sockets library based on Boost.ASIO | Published! |
Ranges TS | Range-based algorithms and views | Published! Headed towards C++20 |
Coroutines TS | Resumable functions, based on Microsoft’s await design | Published! C++20 merge uncertain
Modules v1 | A component system to supersede the textual header file inclusion model | Published as a TS |
Modules v2 | Improvements to Modules v1, including a better transition path | Under active development |
Numerics TS | Various numerical facilities | Under active development |
Graphics TS | 2D drawing API | No consensus to move forward |
Reflection TS | Static code reflection mechanisms | Sent out for PDTS ballot
Contracts | Preconditions, postconditions, and assertions | Merged into C++20 |
A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of June 25, 2018). If you encounter such a link, please check back in a few days.
A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Rapperswil, Switzerland. This was the second committee meeting in 2018; you can find my reports on preceding meetings here (March 2018, Jacksonville) and here (November 2017, Albuquerque), and earlier ones linked from those. These reports, particularly the Jacksonville one, provide useful context for this post.
At this meeting, the committee was focused full-steam on C++20, including advancing several significant features — such as Ranges, Modules, Coroutines, and Executors — for possible inclusion in C++20, with a secondary focus on in-flight Technical Specifications such as the Parallelism TS v2, and the Reflection TS.
C++20 continues to be under active development. A number of new changes have been voted into its Working Draft at this meeting, which I list here. For a list of changes voted in at previous meetings, see my Jacksonville report.
- Changes related to operator <=> and the other comparison operators.
- Conditionally explicit constructors, a.k.a. explicit(bool).
- Deprecating the implicit capture of this via [=]. This deprecates the arguably misleading semantics of a default by-value capture implicitly capturing the this pointer (which amounts to capturing the enclosing object by reference). If such capture is desired, it can be expressed explicitly as [=, this]. (It’s worth noting that C++20 also provides a syntax for capturing the enclosing object by value: [*this].) A short sketch of this and of explicit(bool) appears just after this list.
- Feature-test macros, now specified in the standard itself (and available even when compiling in older modes such as -std=c++11).
- Tweaks to the __VA_OPT__ preprocessor feature.
- atomic_ref.
- Adding shift() to <algorithm>.
- Improving the return value of the erase()-like algorithms.
- constexpr comparison operators for std::array.
- constexpr for swap and related functions.
- Clarifying the fpos requirements.
- Removing unnecessarily explicit default constructors from the library.
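Two of the language-level items are easy to show in a few lines. The following is a minimal sketch of my own (the wrapper and widget types are illustrative, not code from the proposals):

#include <type_traits>
#include <utility>

// explicit(bool): the constructor is explicit exactly when the condition holds.
template <typename T>
struct wrapper {
    template <typename U>
    explicit(!std::is_convertible_v<U, T>)
    wrapper(U&& u) : value(std::forward<U>(u)) {}
    T value;
};

struct widget {
    void refresh() {}
    void schedule() {
        // C++20 deprecates implicitly capturing the enclosing object via [=];
        // the capture of the this pointer is now spelled out explicitly.
        auto task = [=, this] { refresh(); };
        task();
    }
};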
In addition to the C++ International Standard (IS), the committee publishes Technical Specifications (TS), which can be thought of as experimental “feature branches”, where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.
At this meeting, the committee voted to publish the second version of the Parallelism TS, and to send out the Reflection TS for its PDTS (“Proposed Draft TS”) ballot. Several other TSes remain under development.
The Parallelism TS v2 was sent out for its PDTS ballot at the last meeting. As described in previous reports, this is a process where a draft specification is circulated to national standards bodies, who have an opportunity to provide feedback on it. The committee can then make revisions based on the feedback, prior to final publication.
The results of the PDTS ballot had arrived just in time for the beginning of this meeting, and the relevant subgroups (primarily the Concurrency Study Group) worked diligently during the meeting to go through the comments and address them. This led to the adoption of several changes into the TS working draft:
- changes to simd, among them concat() and split() operations on simd<> objects.

The working draft, as modified by these changes, was then approved for publication!
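For readers who have not used the TS, here is a minimal sketch of the simd<T> vocabulary type (my own illustration; the header location and exact API surface can vary between implementations):

#include <cstddef>
#include <experimental/simd>   // Parallelism TS v2; shipped as <experimental/simd> in some toolchains

namespace stdx = std::experimental;

// Element-wise multiply-accumulate across register-width packs, with a scalar tail.
float dot(const float* a, const float* b, std::size_t n) {
    using pack = stdx::native_simd<float>;
    pack acc = 0.0f;
    std::size_t i = 0;
    for (; i + pack::size() <= n; i += pack::size()) {
        pack x(&a[i], stdx::element_aligned);
        pack y(&b[i], stdx::element_aligned);
        acc += x * y;
    }
    float total = stdx::reduce(acc);     // horizontal sum of the pack
    for (; i < n; ++i) total += a[i] * b[i];
    return total;
}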
The Reflection TS, based on the reflexpr static reflection proposal, picked up one new feature, static reflection of functions, and was subsequently sent out for its PDTS ballot! I’m quite excited to see efficient progress on this (in my opinion) very important feature.
Meanwhile, the committee has also been planning ahead for the next generation of reflection and metaprogramming facilities for C++, which will be based on value-based constexpr programming rather than template metaprogramming, allowing users to reap expressiveness and compile-time performance gains. In the list of proposals reviewed by the Evolution Working Group (EWG) below, you’ll see quite a few of them are extensions related to constexpr; that’s largely motivated by this direction.
The Concurrency TS v2 (no working draft yet), whose notable contents include revamped versions of async() and future::then(), among other things, continues to be blocked on Executors. Efforts at this meeting focused on moving Executors forward.
The Library Fundamentals TS v3 is now “open for business” (it has an initial working draft based on the portions of v2 that have not been merged into the IS yet), but no new proposals have been merged into it yet. I expect that to start happening in the coming meetings, as proposals targeting it progress through the Library groups.
There are (or were, in the case of the Graphics TS) some planned future Technical Specifications that don’t have an official project or working draft at this point:
At the last meeting, the Graphics TS, set to contain 2D graphics primitives with an interface inspired by cairo, ran into some controversy. A number of people started to become convinced that, since this was something that professional graphics programmers / game developers were unlikely to use, the large amount of time that a detailed wording review would require was not a good use of committee time.
As a result of these concerns, an evening session was held at this meeting to decide the future of the proposal. A paper arguing we should stay course was presented, as was an alternative proposal for a much lighter-weight “diet” graphics library. After extensive discussion, however, neither the current proposal nor the alternative had consensus to move forward.
As a result – while nothing is ever set in stone and the committee can always change its mind – the Graphics TS is abandoned for the time being.
(That said, I’ve heard rumours that the folks working on the proposal and its reference implementation plan to continue working on it all the same, just not with standardization as the end goal. Rather, they might continue iterating on the library with the goal of distributing it as a third-party library/package of some sort (possibly tying into the committee’s exploration of improving C++’s package management ecosystem).)
SG 1 (the Concurrency Study Group) achieved design consensus on a unified executors proposal (see the proposal and accompanying design paper) at the last meeting.
At this meeting, another executors proposal was brought forward, and SG 1 has been trying to reconcile it with / absorb it into the unified proposal.
As executors are blocking a number of dependent items, including the Concurrency TS v2 and merging the Networking TS, SG 1 hopes to progress them forward as soon as possible. Some members remain hopeful that it can be merged into C++20 directly, but going with the backup plan of publishing it as a TS is also a possibility (which is why I’m listing it here).
Turning now to Technical Specifications that have already been published, but not yet merged into the IS, the C++ community is eager to see some of these merge into C++20, thereby officially standardizing the features they contain.
The Ranges TS, which modernizes and Conceptifies significant parts of the standard library (the parts related to algorithms and iterators), has been making really good progress towards merging into C++20.
The first part of the TS, containing foundational Concepts that a large spectrum of future library proposals may want to make use of, has just been merged into the C++20 working draft at this meeting. The second part, the range-based algorithms and utilities themselves, is well on its way: the Library Evolution Working Group has finished ironing out how the range-based facilities will integrate with the existing facilities in the standard library, and forwarded the revised merge proposal for wording review.
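For readers who have not followed the Concepts work, here is a minimal sketch of the kind of code these foundational concepts enable (the Sortable concept and the function below are my own illustration, not part of the proposal):

#include <concepts>

// A user-defined concept built on top of the standard foundational ones.
template <typename T>
concept Sortable = std::movable<T> && requires(T& t) { t.sort(); };

// A constrained function template: only types satisfying Sortable are accepted.
template <Sortable T>
void sort_in_place(T& container) {
    container.sort();
}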
The Coroutines TS was proposed for merger into C++20 at the last meeting, but ran into pushback from adopters who tried it out and had several concerns with it (which were subsequently responded to, with additional follow-up regarding optimization possibilities).
Said adopters were invited to bring forward a proposal for an alternative / modified design that addressed their concerns, no later than at this meeting, and so they did; their proposal is called Core Coroutines.
Core Coroutines was reviewed by the Evolution Working Group (I summarize the technical discussion below), which encouraged further iteration on this design, but also felt that such iteration should not hold up the proposal to merge the Coroutines TS into C++20. (What’s the point in iterating on one design if another is being merged into the IS draft, you ask? I believe the thinking was that further exploration of the Core Coroutines design could inspire some modifications to the Coroutines TS that could be merged at a later meeting, still before C++20’s publication.)
As a result, the merge of the Coroutines TS came to a plenary vote at the end of the week. However, it did not garner consensus; a significant minority of the committee at large felt that the Core Coroutines design deserved more exploration before enshrining the TS design into the standard. (At least, I assume that was the rationale of those voting against. Regrettably, due to procedural changes, there is very little discussion before plenary votes these days to shed light on why people have the positions they do.)
The window for merging a TS into C++20 remains open for approximately one more meeting. I expect the proponents of the Coroutines TS will try the merge again at the next meeting, while the authors of Core Coroutines will refine their design further. Hopefully, the additional time and refinement will allow us to make a better-informed final decision.
The Networking TS is in a situation where the technical content of the TS itself is in a fairly good shape and ripe for merging into the IS, but its dependency on Executors makes a merger in the C++20 timeframe uncertain.
Ideas have been floated around of coming up with a subset of Executors that would be sufficient for the Networking TS to be based on, and that could get agreement in time for C++20. Multiple proposals on this front are expected at the next meeting.
Modules is one of the most-anticipated new features in C++. While the Modules TS was published fairly recently, and thus merging it into C++20 is a rather ambitious timeline (especially since there are design changes relative to the TS that we know we want to make), there is a fairly widespread desire to get it into C++20 nonetheless.
I described in my last report that there was a potential path forward to accomplishing this, which involved merging a subset of a revised Modules design into C++20, with the rest of the revised design to follow (likely in the form of a Modules TS v2, and a subsequent merge into C++23).
The challenge with this plan is that we haven’t fully worked out the revised design yet, never mind agreed on a subset of it that’s safe for merging into C++20. (By safe I mean forwards-compatible with the complete design, since we don’t want breaking changes to a feature we put into the IS.)
There was extensive discussion of Modules in the Evolution Working Group, which I summarize below. The procedural outcome was that there was no consensus to move forward with the “subset” plan, but we are moving forward with the revised design at full speed, and some remain hopeful that the entire revised design (or perhaps a larger subset) can still be merged into C++20.
The Concepts TS was merged into the C++20 working draft previously, but excluding certain controversial parts (notably, abbreviated function templates (AFTs)).
As AFTs remain quite popular, the committee has been trying to find an alternative design for them that could get consensus for C++20. Several proposals were heard by EWG at the last meeting, and some refined ones at this meeting. I summarize their discussion below, but in brief, while there is general support for two possible approaches, there still isn’t final agreement on one direction.
We are now about 6 years into the committee’s procedural experiment of using Technical Specifications as a vehicle for gathering feedback based on implementation and use experience prior to standardization of significant features. Opinions differ on how successful this experiment has been so far, with some lauding the TS process as leading to higher-quality, better-baked features, while others feel the process has in some cases just added unnecessary delays.
The committee has recently formed a Direction Group, a small group composed of five senior committee members with extensive experience, which advises the Working Group chairs and the Convenor on matters related to priority and direction. One of the topics the Direction Group has been tasked with giving feedback on is the TS process, and there was evening session at this meeting to relay and discuss this advice.
The Direction Group’s main piece of advice was that while the TS process is still appropriate for sufficiently large features, it’s not to be embarked on lightly; in each case, a specific set of topics / questions on which the committee would like feedback should be articulated, and success criteria for a TS “graduating” and being merged into the IS should be clearly specified at the outset.
I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.
Unless otherwise indicated, proposals discussed here are targeting C++20. I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:
Accepted proposals:
- Contracts-related tweaks: the always checking level was removed, and the source location reported for precondition violations was made implementation-defined (previously, it had to be a source location in the function’s caller).
- Allowing try/catch blocks in constexpr functions. Throwing an exception is still not allowed during constant evaluation, but the try/catch construct itself can be present as long as only the non-throwing codepaths are exercised at compile time.
- constexpr containers. EWG previously approved basic support for using dynamic allocation during constant evaluation, with the intention of allowing containers like std::vector to be used in a constexpr context (which is now happening). This is an extension to that, which allows storage that was dynamically allocated at compile time to survive to runtime, in the form of a static (or automatic) storage duration variable.
- Removing the obstacles that prevent std::error_code from being used at compile time.
- A new kind of constexpr function, spelt constexpr!, which not only can run at compile time, but has to. This is motivated by several use cases, one of them being value-based reflection, where you need to be able to write functions that manipulate information that only exists at compile-time (like handles to compiler data structures used to implement reflection primitives).
- std::is_constant_evaluated(). This allows you to check whether a constexpr function is being invoked at compile time or at runtime. Again there are numerous use cases for this, but a notable one is related to allowing std::string to be used in a constexpr context. Most implementations of std::string use a “small string optimization” (SSO) where sufficiently small strings are stored inline in the string object rather than in a dynamically allocated block. Unfortunately, SSO cannot be used in a constexpr context because it requires using reinterpret_cast (and in any case, the motivation for SSO is runtime performance), so we need a way to make the SSO conditional on the string being created at runtime. (A short sketch appears a little further below.)
- Nested inline namespaces. You can already shorten namespace foo { namespace bar { namespace baz { to namespace foo::bar::baz {, but there is no way to shorten namespace foo { inline namespace bar { namespace baz {. This proposal allows writing namespace foo::inline bar::baz. The single-name version, namespace inline foo {, is also valid, and equivalent to inline namespace foo {.
There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above:
Among them: moving the feature-test macros (such as __cpp_constexpr) from being listed in a Standing Document to being listed in the standard itself. My understanding is that this is the “stamp of approval” Microsoft has been waiting for to implement these macros in MSVC.
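As a concrete illustration of the std::is_constant_evaluated() item above, here is a small sketch of my own (not from the proposal) of a function that takes a different path at compile time than at run time:

#include <cstddef>
#include <cstring>
#include <type_traits>

constexpr std::size_t length(const char* s) {
    if (std::is_constant_evaluated()) {
        // Compile-time branch: restricted to constexpr-friendly operations.
        std::size_t n = 0;
        while (s[n] != '\0') ++n;
        return n;
    }
    // Run-time branch: free to call the non-constexpr C library.
    return std::strlen(s);
}

static_assert(length("C++20") == 5);   // evaluated at compile time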
using a = b;
) so that you can alias not only types, but also other entities like namespaces or functions. EWG was generally favourable to the idea, but felt that aliases for different kinds of entities should use different syntaxes. (Among other considerations, using the same syntax would mean having to reinstate the recently-removed requirement to use typename
in front of a dependent type in an alias declaration.) The author will explore alternative syntaxes for non-type aliases and return with a revised proposal.this
. The type of the implicit object parameter (the “this
” parameter) of a member function can vary in the same ways as the types of other parameters: lvalue vs. rvalue, const
vs. non-const. C++ provides ways to overload member functions to capture this variation (trailing const
, ref-qualifiers), but sometimes it would be more convenient to just template over the type of the this
parameter. This proposal aims to allow that, with a syntax like this:
template
R foo(this Self&& self, /* other parameters */);
const
and ref-qualifiers are now), leading to a syntax more like this:
template
R foo(/* other parameters */) Self&& self
self
in the above example), and not also using this
(as that would lead to confusion in cases where e.g. this
has the base type while self
has a derived type).
constexpr
function parameters. The most ambitious constexpr
-related proposal brought forward at this meeting, this aimed to allow function parameters to be marked as constexpr
, and accordingly act as constant expressions inside the function body (e.g. it would be valid to use the value of one as a non-type template parameter or array bound). It was quickly pointed out that, while the proposal is implementable, it doesn’t fit into the language’s current model of constant evaluation; rather, functions with constexpr
parameters would have to be implemented as templates, with a different instantiation for every combination of parameter values. Since this amounts to being a syntactic shorthand for non-type template parameters, EWG suggested that the proposal be reformulated in those terms.
In the first case, the annotation could take the form of an attribute (e.g. [[lifetimebound]]
). In the second or third case, it would have to be something else, like a context-sensitive keyword (since attributes aren’t supposed to have semantic effects). The proposal authors suggested initially going with the first option in the C++20 timeframe, while leaving the door open for the second or third option later on.
EWG agreed that mitigating lifetime hazards is an important area of focus, and something we’d like to deliver on in the C++20 timeframe. There was some concern about the proposed annotation being too noisy / viral. People asked whether the annotations could be deduced (not if the function is compiled separately, unless we rely on link-time processing), or if we could just lifetime-extend by default (not without causing undue memory pressure and risking resource exhaustion and deadlocks by not releasing expensive resources or locks in time). The authors will investigate the problem space further, including exploring ways to avoid the attribute being viral, and comparing their approach to Rust’s, and report back.
exception_ptr
without even try
ing. This aims to allow getting at the exception inside an exception_ptr
without having to throw
it (which is expensive). As a side effect, it would also allow handling exception_ptr
s in code compiled with -fno-exceptions
. EWG felt the idea had merit, even though performance shouldn’t be the guiding principle (since the slowness of throw
is technically a quality-of-implementation issue, although implementations seem to have agreed to not optimize it).std::hash
for your own type, in your type’s namespace, instead of having to close that namespace, open namespace std
, and then reopen your namespace. EWG liked the idea, but the issue of which names — names in your namespace, names in std
, or both — would be visible without qualification inside the specialization, was contentious.Rejected proposals:
basic_string_view(nullptr)
. This paper argued that since it’s common to represent empty strings as a const char*
with value nullptr
, the constructor of string_view
which takes a const char*
argument should allow a nullptr
value and interpret it as an empty string. Another paper convincingly argued that conflating “a zero-sized string” with “not-a-string” does more harm than good, and this proposal was accordingly rejected.CopyConstructible
, outside of a requires-clause. The authors proposed removing the ambiguity by requiring the keyword requires
to introduce a concept expression, as in requires CopyConstructible
. EWG felt this was too much syntactic clutter, given that concept expressions are expected to be used in places like static_assert
and if constexpr
, and given that the ambiguity is, at this point, hypothetical (pending what happens to AFTs) and there would be options to resolve it if necessary.

EWG had another evening session on Concepts at this meeting, to try to resolve the matter of abbreviated function templates (AFTs).
Recall that the main issue here is that, given an AFT written using the Concepts TS syntax, like void sort(Sortable& s);
, it’s not clear that this is a template (you need to know that Sortable
is a concept, not a type).
The four different proposals in play at the last meeting have been whittled down to two:
void sort(Sortable{}& s);
or void sort(Sortable{S}& s);
(with S
in the second form naming the concrete type deduced for this parameter). The proposal also aims to change the constrained-parameter syntax (with which the same function could be written template void sort(S& s);
) to require braces for type parameters, so that you’d instead write template void sort(S& s);
. (The motivation for this latter change is to make it so that ConceptName C
consistently makes C
a value, whether it be a function parameter or a non-type template parameter, while ConceptName{C]
consistently makes C
a type.)template
keyword to announce that an AFT is a template: template void sort(Sortable& s);
. (This is visually ambiguous with one of the explicit specialization syntaxes, but the compiler can disambiguate based on name lookup, and programmers can use the other explicit specialization syntax to avoid visual confusion.) This proposal leaves the constrained-parameter syntax alone.Both proposals allow a reader to tell at a glance that an AFT is a template and not a regular function. At the same time, each proposal has downsides as well. Bjarne’s approach annotates the whole function rather than individual parameters, so in a function with multiple parameters you still don’t know at a glance which parameters are concepts (and so e.g. in a case of a Foo&&
parameter, you don’t know if it’s an rvalue reference or a forwarding reference). Herb’s proposal messes with the well-loved constrained-parameter syntax.
After an extensive discussion, it turned out that both proposals had enough support to pass, with each retaining a vocal minority of opponents. Neither proposal was progressed at this time, in the hope that some further analysis or convergence can lead to a stronger consensus at the next meeting, but it’s quite clear that folks want something to be done in this space for C++20, and so I’m fairly optimistic we’ll end up getting one of these solutions (or a compromise / variation).
In addition to the evening session on AFTs, EWG looked at a proposal to alter the way name lookup works inside constrained templates. The original motivation for this was to resolve the AFT impasse by making name lookup inside AFTs work more like name lookup inside non-template functions. However, it became apparent that (1) that alone will not resolve the AFT issue, since name lookup is just one of several differences between template and non-template code; but (2) the suggested modification to name lookup rules may be desirable (not just in AFTs but in all constrained templates) anyways. The main idea behind the new rules is that when performing name lookup for a function call that has a constrained type as an argument, only functions that appear in the concept definition should be found; the motivation is to avoid surprising extra results that might creep in through ADL. EWG was supportive of making a change along these lines for C++20, but some of the details still need to be worked out; among them, whether constraints should be propagated through auto
variables and into nested templates for the purpose of applying this rule.
As mentioned above, EWG reviewed a modified Coroutines design called Core Coroutines, that was inspired by various concerns that some early adopters of the Coroutines TS had with its design.
Core Coroutines makes a number of changes to the Coroutines TS design:
- A different spelling for the await operation (the proposal uses the placeholder [<-]), to reflect that coroutines can be used for a variety of purposes, not just asynchrony (which is what co_await suggests).
- Customizing coroutine behaviour by defining an operator [<-] for your type, with more of the logic going into the definition of that function.
sizeof
of the coroutine object, as that requires the size being known by the compiler’s front-end, while with the Coroutines TS it’s sufficient for the size to be computed during the optimization phase.As mentioned, the procedural outcome of the discussion was to encourage further work on the Core Coroutines, while not blocking the merger of the Coroutines TS into C++20 on such work.
While in the end there was no consensus to merge the Coroutines TS into C++20 at this meeting, there remains fairly strong demand for having coroutines in some form in C++20, and I am therefore hopeful that some sort of joint proposal that combines elements of Core Coroutines into the Coroutines TS will surface at the next meeting.
As of the last meeting, there were two alternative Modules designs before the committee: the recently-published Modules TS, and the alternative proposal from the Clang Modules implementers called Another Take On Modules (“Atom”).
Since the last meeting, the authors of the two proposals have been collaborating to produce a merged proposal that combines elements from both proposals.
The merged proposal accomplishes Atom’s goal of providing a better mechanism for existing codebases to transition to Modules via modularized legacy headers (called legacy header imports in the merged proposal) – basically, existing headers that are not modules, but are treated as-if they were modules by the compiler. It retains the Modules TS mechanism of global module fragments, with some important restrictions, such as only allowing #include
s and other preprocessor directives in the global module fragment.
Other aspects of Atom that are part of the merged proposal include module partitions (a way of breaking up the interface of a module into multiple files), and some changes to export
and template instantiation semantics.
EWG reviewed the merged proposal favourably, with a strong consensus for putting these changes into a second iteration of the Modules TS. Design guidance was provided on a few aspects, including tweaks to export
behaviour for namespaces, and making export
be “inherited”, such that e.g. if the declaration of a structure is exported, then its definition is too by default. (A follow-up proposal is expected for a syntax to explicitly make a structure definition not exported without having to move it into another module partition.) A proposal to make the lexing rules for the names of legacy header units be different from the existing rules for #include
s failed to gain consensus.
One notable remaining point of contention about the merged proposal is that module
is a hard keyword in it, thereby breaking existing code that uses that word as an identifier. There remains widespread concern about this in multiple user communities, including the graphics community where the name “module” is used in existing published specifications (such as Vulkan). These concerns would be addressed if module
were made a context-sensitive keyword instead. There was a proposal to do so at the last meeting, which failed to gain consensus (I suspect because the author focused on various disambiguation edge cases, which scared some EWG members). I expect a fresh proposal will prompt EWG to reconsider this choice at the next meeting.
As mentioned above, there was also a suggestion to take a subset of the merged proposal and put it directly into C++20. The subset included neither legacy header imports nor global module fragments (in any useful form), thereby not providing any meaningful transition mechanism for existing codebases, but it was hoped that it would still be well-received and useful for new codebases. However, there was no consensus to proceed with this subset, because it would have meant having a new set of semantics different from anything that’s implemented today, and that was deemed to be risky.
It’s important to underscore that not proceeding with the “subset” approach does not necessarily mean the committee has given up on having any form of Modules in C++20 (although the chances of that have probably decreased). There remains some hope that the development of the merged proposal might proceed sufficiently quickly that the entire proposal — or at least a larger subset that includes a transition mechanism like legacy header imports — can make it into C++20.
Finally, EWG briefly heard from the authors of a proposal for modular macros, who basically said they are withdrawing their proposal because they are satisfied with Atom’s facility for selectively exporting macros via #export
directives, which is being treated as a future extension to the merged proposal.
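For context, here is roughly what the user-facing side of Modules looks like under the Modules TS syntax that the merged proposal builds on (file names, extensions, and build steps are illustrative and implementation-specific):

// hello.cppm — module interface unit: everything marked export is visible to importers.
export module hello;

export int answer() { return 42; }

int helper() { return 0; }   // not exported: invisible outside the module

// main.cpp — a consumer imports the module instead of including a header.
import hello;

int main() { return answer(); }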
With the continued focus on large proposals that might target C++20 like Modules and Coroutines, EWG has a growing backlog of smaller proposals that haven’t been discussed, in some cases stretching back to two meetings ago (see the the committee mailings for a list). A notable item on the backlog is a proposal by Herb Sutter to bridge the two worlds of C++ users — those who use exceptions and those who not — by extending the exception model in a way that (hopefully) makes it palatable to everyone.
Having sat in EWG all week, I can’t report on technical discussions of library proposals, but I’ll mention where some proposals are in the processing queue.
I’ve already listed the library proposals that passed wording review and were voted into the C++20 working draft above.
The following are among the proposals have passed design review and are undergoing (or awaiting) wording review:
std::vector
constexpr
. (This is the sort of thing that language enhancements to allow dynamic allocation in a constexpr
context were meant to unblock.)constexpr
in std::pointer_traits
std::function
basic_stringbuf
decay_unwrap
and unwrap_reference
std::span
function_ref
: a non-owning reference to a Callabletask
typeThe following proposals are still undergoing design review, and are being treated with priority:
none()
factories for Nullable typesstd::optional
The following proposals are also undergoing design review:
tuple
get()
std::embed
, which provides a mechanism to access program-external resources at compile timepartial_order
comparison algorithmstd::filesystem::path
split()
/join()
for string
and string_view
span
unsigned. This is likely to prompt an evening session at the next meeting to have a more general discussion about the use of signed vs. unsigned types in library interfaces.span
be regular?As usual, there is a fairly long queue of library proposals that haven’t started design review yet. See the committee’s website for a full list of proposals.
(These lists are incomplete; see the post-meeting mailing when it’s published for complete lists.)
I’ve already talked about some of the Concurrency Study Group’s work above, related to the Parallelism TS v2, and Executors.
The group has also reviewed some proposals targeting C++20. These are at various stages of the review pipeline:
Proposals before the Library Evolution Working Group include latches and barriers, C atomics in C++, and a joining thread.
Proposals before the Library Working Group include improvements to atomic_flag, efficient concurrent waiting, and fixing atomic initialization.
Proposls before the Core Working Group include revising the C++ memory model. A proposal to weaken release sequences has been put on hold.
It was a relatively quiet week for SG 7, with the Reflection TS having undergone and passed wording review, and extensions to constexpr
that will unlock the next generation of reflection facilities being handled in EWG. The only major proposal currently on SG 7’s plate is metaclasses, and that did not have an update at this meeting.
That said, SG 7 did meet briefly to discuss two other papers:
Discussion of these favoured constexpr value-based metaprogramming rather than template metaprogramming. At the same time, SG 7 recognized the desire for having metaprogramming facilities in the standard, and urged proponents of the constexpr approach to bring forward a library proposal built on that soon.
SG 12 met to discuss several topics this week:
- a proposal addressing code where malloc() is used, which aims to standardize existing practice that the current standard wording makes undefined behaviour.
|
Gregory Szorc: Deterministic Firefox Builds |
As of Firefox 60, the build environment for official Firefox Linux builds switched from CentOS to Debian.
As part of the transition, we overhauled how the build environment for Firefox is constructed. We now populate the environment from deterministic package snapshots and are much more stringent about dependencies and operations being deterministic and reproducible. The end result is that the build environment for Firefox is deterministic enough to enable Firefox itself to be built deterministically.
Changing the underlying operating system environment used for builds was a risky change. Differences in the resulting build could result in new bugs or some users not being able to run the official builds. We figured a good way to mitigate that risk was to make the old and new builds as bit-identical as possible. After all, if the environments produce the same bits, then nothing has effectively changed and there should be no new risk for end-users.
Employing the diffoscope tool, we identified areas where Firefox builds weren't deterministic in the same environment and where there was variance across build environments. We iterated on differences and changed systems so variance would no longer occur. By the end of the process, we had bit-identical Firefox builds across environments.
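As a rough illustration (the paths here are placeholders, not our actual build layout), diffoscope can be pointed at two extracted builds and asked for a report of every byte that differs:
# recursively compare two builds and write an HTML report of the differences
diffoscope --html report.html firefox-build-a/ firefox-build-b/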
So, as of Firefox 60, Firefox builds on Linux are deterministic in our official build environment!
That being said, the builds we ship to users are using PGO. And an end-to-end build involving PGO is intrinsically not deterministic because it relies on timing data that varies from one run to the next. And we don't yet have continuous automated end-to-end testing that determinism holds. But the underlying infrastructure to support deterministic and reproducible Firefox builds is there and is not going away. I think that's a milestone worth celebrating.
This milestone required the effort of many people, often working indirectly toward it. Debian's reproducible builds effort gave us an operating system that provided deterministic and reproducible guarantees. Switching Firefox CI to Taskcluster enabled us to switch to Debian relatively easily. Many were involved with non-determinism fixes in Firefox over the years. But Mike Hommey drove the transition of the build environment to Debian and he deserves recognition for his individual contribution. Thanks to all these efforts - and especially Mike Hommey's - we can now say Firefox builds deterministically!
The fx-reproducible-build bug tracks ongoing efforts to further improve the reproducibility story of Firefox. (~300 bugs in its dependency tree have already been resolved!)
http://gregoryszorc.com/blog/2018/06/20/deterministic-firefox-builds
|
Air Mozilla: Guidance for H1 Merit and Bonus Award Cycle |
In part 2 of this two-part video, managers can use this Playbook to assess their employees' performance and make recommendations about bonus and merit.
https://air.mozilla.org/guidance-for-h1-merit-and-bonus-award-cycle/
|
Air Mozilla: Manager Playbook_Performance Assessment |
In part 1 of this two-part video, managers can use this Playbook to help assess their employees' performance and make bonus and merit recommendations.
https://air.mozilla.org/manager-playbook-performance-assessment/
|
Bryce Van Dyk: Setting up Arcanist for Mozilla development on Windows |
Mozilla is rolling out Phabricator as part of our tooling. However, at the time of writing I was unable to find a straightforward setup to get the Phabricator tooling playing nice on Windows with MozillaBuild.
Right now there are a couple of separate threads around how to interact with Phabricator on Windows:
However, I have stuff waiting for me on Phabricator that I'd like to interact with now, so let's get a workaround in place! I started with the Arcanist Windows steps, but have adapted them to a MozillaBuild-specific environment.
Arcanist requires PHP. Grab a build from here. The docs for Arcanist indicate the type of build doesn't really matter, but I opted for a thread-safe one because that seems like a nice thing to have.
I installed PHP outside of my MozillaBuild directory, but you can put it anywhere. For the sake of example, my install is in C:\Patches\Php\php-7.2.6-Win32-VC15-x64
.
We need to enable the curl extension: in the PHP install dir copy php.ini-development
to php.ini
and uncomment (by removing the ;
) the extension=curl
line.
Finally, enable PHP to find its extension by uncommenting the extension_dir = "ext"
line. The Arcanist instructions suggest setting a fully qualified path, but I found a relative path worked fine.
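If you'd rather script those edits from a MozillaBuild shell, here's a rough sketch (the exact commented-out lines in php.ini-development can vary between PHP builds, so check the file first):
# copy the development template and uncomment the curl and extension_dir lines
# (the install path is the example from this post; adjust to yours)
cd /c/Patches/Php/php-7.2.6-Win32-VC15-x64
cp php.ini-development php.ini
sed -i 's/^;extension=curl/extension=curl/' php.ini
sed -i 's/^; *extension_dir = "ext"/extension_dir = "ext"/' php.ini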
Create somewhere to store Arcanist and libphutil. Note, these need to be located in the same directory for arc
to work.
$ mkdir somewhere/
$ cd somewhere/
somewhere/ $ git clone https://github.com/phacility/libphutil.git
somewhere/ $ git clone https://github.com/phacility/arcanist.git
For me this is C:\Patches\phacility\
.
Since I want arc
to be usable in MozillaBuild until this workaround is no longer required, we're going to modify start-up settings. We can do this by changing ...mozilla-build/msys/etc/profile.d
and adding to the PATH
already being constructed. In my case I've added the paths mentioned earlier, but with MSYS syntax: /c/Patches/Php/php-7.2.6-Win32-VC15-x64:/c/Patches/phacility/arcanist/bin:
.
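In practice that's just one extra line in a script under that profile.d directory, something like the following (a sketch; the file name and where in the PATH you put it are up to you):
# make PHP and arc available in every new MozillaBuild shell
export PATH="/c/Patches/Php/php-7.2.6-Win32-VC15-x64:/c/Patches/phacility/arcanist/bin:$PATH"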
Now arc
should run inside newly started MozillaBuild shells.
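A quick sanity check from a fresh shell (nothing Mozilla-specific, just confirming the tools resolve):
# both should point at the directories added to PATH above
which php
which arc
# prints the Arcanist and libphutil versions if PHP is wired up correctly
arc version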
We still need credentials in order to use arc
with mozilla-central
. For this to work we need a Phabricator account; see here for that. After that's done, to get your credentials run arc install-certificate
, navigate to the page as instructed and copy your API key back to the command line.
There was an issue with the evolve Mercurial extension that would cause Unknown Mercurial log field 'instability'!. This should now be fixed in Arcanist. See this bug for more info.
Finally, I had some issues with arc diff
based on my Mercurial config. Updating my extensions and running a ./mach bootstrap
seemed to help.
Hopefully everything is ready to go at this point. I found working through the Mozilla docs for how to use arc
after setup helpful. If you have any comments, please let me know either via email or on IRC.
https://www.brycevandyk.com/setting-up-arcanist-for-mozilla-development-on-windows/
|
Emma Irwin: Call for Feedback! Draft of Goal-Metrics for Diversity & Inclusion in Open Source (CHAOSS) |
In the last few months, Mozilla has invested in collaboration with other open source project leaders and academics who care about improving diversity & inclusion in Open Source through the CHAOSS D&I working group. Contributors so far include:
Alexander Serebrenik (Eindhoven University of Technology), Akshita Gupta (Outreachy), Amy Marrich (OpenStack), Anita Sarma (Oregon State University), Bhagashree Uday (Fedora), Daniel Izquierdo (Bitergia), Emma Irwin (Mozilla), Georg Link (University of Nebraska at Omaha), Gina Helfrich (NumFOCUS), Nicole Huesman (Intel) and Sean Goggins (University of Missouri).
Our goals are, first, to establish a set of peer-validated goal-metrics for understanding diversity & inclusion in FOSS; second, to identify technology and research methodologies for understanding the success of our interventions in ways that keep the ethics of privacy and consent at the center; and finally, to document this work in ways that communities can reproduce the report for themselves.
For Mozilla this follows the recommendations coming out of our D&I research to create Metrics that Matter, and to work across Open Source with other projects trying to solve the same problems. I am very excited to share our first draft of goal-metrics for your feedback.
Please note that we know these are incomplete, and that there are likely existing resources that can improve, or even disprove, some of these; that is the point of this blog post! Please review and provide feedback via a GitHub issue or pull request, by reaching out to someone in the working group, or by joining our working group call (next one July 20th, 9am PST); you can find the video link here.
You can find one or more of us at the following events as well:
|
Mozilla VR Blog: Introducing A-Terrain - a cartography component for A-Frame |
Have you ever wanted to make a small web app to share your favorite places with your friends? For example your favorite photographs attached to a hike, or just a view of your favorite peak, or your favorite places downtown, or a suggested itinerary for friends visiting?
Right now it is difficult to incorporate third-party map data into your own projects. Creating 3d games or VR experiences with real-world maps requires access to proprietary software or closed data ecosystems. Doing it from scratch requires pulling data from multiple sources, such as image servers and elevation servers, and it requires substantial math expertise. Often you also want to stylize the rendering to suit your own use cases: you may have a Tron-like video game aesthetic for your project, and yet the building geometry you're forced to work with doesn't allow you to change colors. While there are many map providers, such as Apple and Google Maps, and many viewers, most of these tools are specialized around showing high-fidelity maps that are as true to reality as possible. What's missing is a middle ground where you can take map data and easily put it in your own projects, creating your own mash-ups.
We see A-Terrain as a starting point or demo for how the web can be different. With this component you can build whatever 3D experience you want and use real world data.
We’ve arranged for Cesium ion (see http://cesium.com) to make the data set available for free for people to try out. Currently the dataset includes world elevation, satellite images and 3d buildings for San Francisco.
For example here is a stylized view of San Francisco as viewed from ground level on the Embarcadero:
You can try this example yourself in your browser here (use AWSD or arrow keys to move around):
https://anselm.github.io/aterrain/examples/helloworld/tile.html .
This component can also be used as a quick and dirty globe renderer (although if you're really interested in that specific use case then Cesium itself may be more suitable):
I have added some rudimentary navigation controls using hash arguments on the URL. For example here is a piece of Mt Whitney:
https://anselm.github.io/aterrain/examples/place/index.html#lat=36.57850&lon=-118.29226&elev=1000
The real strength of a tool like this is composability — to be able to mix different components together. For example here is A-Terrain and Mozilla Hubs being used for a collaborative hiking trip planning scenario to the Grand Canyon:
Here is the URL for the above. This will take you to a random room ID - share that room ID with your friends to join the same room:
As another example of lightweight composability I place a tiny duck on the earth's surface above Oregon. This is just a few lines of scripting:
This example can be visited here:
https://anselm.github.io/aterrain/examples/helloworld/duck.html
To accomplish all this we leverage A-Frame — a browser-based framework that lets users build 3d environments easily. The A-Frame philosophy is to take complicated behaviors and wrap them up in HTML tags. If you can write ordinary HTML you can build 3d environments.
A-Frame is part of a Mozilla initiative to foster the open web — to raise the bar on what people can create on the web. Using A-Frame anybody can make 3d, virtual or augmented reality experiences on the web. These experiences can be shared instantly with anybody else in the world — running in the browser, on mobile phones, tablets and high-end head-mounted displays such as the Oculus Rift and the HTC Vive. You don’t need to buy a 3d authoring tool, you don’t need to ask somebody else’s permission to publish your app, you don’t publish your apps through an app store, you don’t need a special viewer to view the experience — it just runs, just like any ordinary web page.
I want to mention just a few of the folks who’ve helped bring this to this point — this includes Lars Bergstrom at Mozilla, Patrick Cozzi at Cesium, Shehzan especially (who was tireless in answering my dumb questions about coordinate re-projections), Blair MacIntyre (who had the initial idea) and Joshua Marinacci (who has been suggesting improvements and acting as a sounding board as well as testing this work).
The source code for this project is here:
https://github.com/anselm/aterrain
We’re all especially interested in seeing what kinds of experiences people build, and what directions this goes in. I'm especially interested in seeing AR use cases that combine this component with Augmented Reality frameworks such as recent Mozilla initiatives here: https://www.roadtovr.com/mozilla-launches-ios-app-experiment-webar/ . Please keep us posted on your work!
|
Dave Townsend: Taming Phabricator |
So Mozilla is going all-in on Phabricator and Differential as a code review tool. I have mixed feelings on this, not least because its support for patch series is more manual than I’d like. But since this is the choice Mozilla has made I might as well start to get used to it. One of the first things you see when you log into Phabricator is a default view full of information.
It’s a little overwhelming for my tastes. The Recent Activity section in particular is more than I need; it seems to list anything anyone has done with Phabricator recently. Sorry Ted, but I don’t care about that review comment you posted. Likewise the Active Reviews section seems very full when it is barely listing any reviews.
But here’s the good news. Phabricator lets you create your own dashboards to use as your default view. It’s a bit tricky to figure out so here is a quick crash course.
Click on Dashboards on the left menu. Click on Create Dashboard in the top right, make your choices then hit Continue. I recommend starting with an empty Dashboard so you can just add what you want to it. Everything on the next screen can be modified later but you probably want to make your dashboard only visible to you. Once created click “Install Dashboard” at the top right and it will be added to the menu on the left and be the default screen when you load Phabricator.
Now you have to add searches to your dashboard. Go to Differential’s advanced search. Fill out the form to search for what you want. A quick example. Set “Reviewers” to “Current Viewer”, “Statuses” to “Needs Review”, then click Search. You should see any revisions waiting on you to review them. Tinker with the search settings and search all you like. Once you’re happy click “Use Results” and “Add to Dashboard”. Give your search a name and select your dashboard. Now your dashboard will display your search whenever loaded. Add as many searches as you like!
Here is my very simple dashboard that lists anything I have to review, revisions I am currently working on and an archive of closed work:
Like it? I made it public and you can see it and install it to use yourself if you like!
https://www.oxymoronical.com/blog/2018/06/Taming-Phabricator
|
David Humphrey: Building Large Code on Travis CI |
This week I was doing an experiment to see if I could automate a build step in a project I'm working on, which requires binary resources to be included in a web app.
I'm building a custom Linux kernel and bundling it with a root filesystem in order to embed it in the browser. To do this, I'm using a dockerized Buildroot build environment (I'll write about the details of this in a follow-up post). On my various computers, this takes anywhere from 15-25 minutes. Since my buildroot/kernel configs won't change very often, I wondered if I could move this to Travis and automate it away from our workflow.
Travis has no problem using docker, and as long as you can fit your build into the allotted 50 minute build timeout window, it should work. Let's do this!
In the simplest case, doing a build like this would be as simple as:
sudo: required
services:
- docker
...
before_script:
- docker build -t buildroot .
- docker run --rm -v $PWD/build:/build buildroot
...
deploy:
# Deploy built binaries in /build along with other assets
This happily builds my docker buildroot
image, and then starts the build within the container, logging everything as it goes. But once the log gets to 10,000 lines in length, Travis won't produce more output. You can still download the Raw Log as a file, so I wait a bit and then periodically download a snapshot of the log in order to check on the build's progress.
At a certain point the build is terminated: once the log file grows to 4M, Travis assumes that all that output is noise (for example, a command running in an infinite loop) and terminates the build with an error.
It's clear that I need to reduce the output of my build. This time I redirect build output to a log file, and then tell Travis to dump the tail-end of the log file in the case of a failed build. The after_failure and after_success build stage hooks are perfect for this:
before_script:
- docker build -t buildroot . > build.log 2>&1
- docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
after_failure:
# dump the last 2000 lines of our build, and hope the error is in that!
- tail --lines=2000 build.log
after_success:
# Log that the build worked, because we all need some good news
- echo "Buildroot build succeeded, binary in ./build"
I'm pretty proud of this until it fails after 10 minutes of building: because no log messages are reaching the console (they're all going to my build.log file), Travis assumes my build has stalled and should be terminated. Turns out you must produce console output every 10 minutes to keep Travis builds alive.
Not only is this a common problem, Travis has a built-in solution in the form of travis_wait
. Essentially, you can prefix your build command with travis_wait
and it will tolerate there being no output for 20 minutes. If you need more than 20, you can optionally pass it the number of minutes to wait before timing out. Let's try 30 minutes:
before_script:
- docker build -t buildroot . > build.log 2>&1
- travis_wait 30 docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
This builds perfectly...for 10 minutes. Then it dies with a timeout due to there being no console output. Some more research reveals that travis_wait
doesn't play nicely with processes that fork
or exec
.
Lots of people suggest variations on the same theme: run a command that spins and periodically prints something to stdout
, and have it fork your build process:
before_script:
- docker build -t buildroot . > build.log 2>&1
- while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
- time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
# Killing background sleep loop
- kill %1
Here we log something at 5 minute intervals, while the build progresses in the background. When it's done, we kill the while
loop. This works perfectly...until it hits the 50 minute barrier and gets killed by Travis:
$ docker build -t buildroot . > build.log 2>&1
before_script
$ while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
$ time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
=====[ 495 seconds, buildroot still building... ]=====
=====[ 795 seconds, buildroot still building... ]=====
=====[ 1095 seconds, buildroot still building... ]=====
=====[ 1395 seconds, buildroot still building... ]=====
=====[ 1695 seconds, buildroot still building... ]=====
=====[ 1995 seconds, buildroot still building... ]=====
=====[ 2295 seconds, buildroot still building... ]=====
=====[ 2595 seconds, buildroot still building... ]=====
=====[ 2895 seconds, buildroot still building... ]=====
The job exceeded the maximum time limit for jobs, and has been terminated.
The build took over 48 minutes on the Travis builder, and combined with the time I'd already spent cloning, installing, etc. there isn't enough time to do what I'd hoped.
Part of me wonders whether I could hack something together that uses successive builds and Travis caches, and moves the build artifacts out of docker, so that I can do incremental builds and leverage ccache and the like. I'm sure someone has done it, and it's in a .travis.yml
file in GitHub somewhere already. I leave this as an experiment for the reader.
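If you want to try it, the rough shape (untested, and assuming the Buildroot image is configured to use ccache under /root/.ccache) would be to mount a second volume so the compiler cache survives outside the container, and then list both directories under Travis's cache settings:
# keep ccache data on the host next to the build output so Travis can cache both between runs
docker run --rm -v "$PWD/build:/build" -v "$PWD/.ccache:/root/.ccache" buildroot
Whether the time spent uploading and downloading those caches eats the savings is part of the experiment.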
I've got nothing but love for Travis and the incredible free service they offer open source projects. Every time I concoct some new use case, I find that they've added it or supported it all along. The Travis docs are incredible, and well worth your time if you want to push the service in interesting directions.
In this case I've hit a wall and will go another way. But I learned a bunch and in case it will help someone else, I leave it here for your CI needs.
|