Planet Mozilla - https://planet.mozilla.org/


This diary is generated automatically from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated; it may not match the content of the original page.

Joel Maher: Adventures in Task Cluster – Running tests locally

Monday, November 9, 2015, 23:49

There is a lot of promise that Taskcluster (the replacement for Buildbot in our CI system at Mozilla) will be the best thing since sliced bread.  One of the deliverables on the Engineering Productivity team this quarter is to stand up the Linux debug tests on Taskcluster in parallel to running them normally via Buildbot.  Of course, next quarter it would be logical to turn off the Buildbot tests and run them via Taskcluster only.

This post will outline some of the things I did to run the tests locally.  What is neat is that we run the Taskcluster jobs inside a Docker image (yes, this is Linux only), so we can download the exact OS container and configuration that runs the tests.

I started out with a try server push which generated some data and a lot of failed tests.  Sadly, I found that the Treeherder integration was not really there for results.  We have a fancy popup in Treeherder when you click on a job, but for Taskcluster jobs, all you get is a link to inspect the task.  Inspecting a task takes you to a Taskcluster-specific page that has information about the task.  In fact, you can watch a test run live (at least from the log output point of view).  In this case, my test job is completed and I want to see the errors in the log, so I can click on the link for live.log and search away.  The other piece of critical information is the 'Task' tab at the top of the inspect task page.  Here you can see the details about the Docker image used, what binaries and other files were used, and the golden nugget at the bottom of the page, the "Run Locally" script!  You can cut and paste this script into a bash shell and theoretically reproduce the exact same failures!

As you can imagine, this is exactly what I did, and it didn't work!  Luckily there were a lot of folks in the #taskcluster channel to help me get going.  The problem was that I didn't have a v4l2loopback device available.  This is interesting because many of our unittests need one, and it means that the host operating system running Docker needs to provide video/audio devices for the container to use.  Now it's time to hack this up a bit; let me start:

First, let's pull down the Docker image used (from the run-locally script):

docker pull 'taskcluster/desktop-test:0.4.4'

Next, let's prepare my local host machine by installing and setting up v4l2loopback:

sudo apt-get install v4l2loopback-dkms

sudo modprobe v4l2loopback devices=2

Now we can try to run docker again, this time adding the --device flag:

docker run -ti \
  --name "${NAME}" \
  --device=/dev/video1:/dev/video1 \
  -e MOZILLA_BUILD_URL='https://queue.taskcluster.net/v1/task/c7FbSCQ9T3mE9ieiFpsdWA/artifacts/public/build/target.tar.bz2' \
  -e MOZHARNESS_SCRIPT='mozharness/scripts/desktop_unittest.py' \
  -e MOZHARNESS_URL='https://queue.taskcluster.net/v1/task/c7FbSCQ9T3mE9ieiFpsdWA/artifacts/public/build/mozharness.zip' \
  -e GECKO_HEAD_REPOSITORY='https://hg.mozilla.org/try/' \
  -e MOZHARNESS_CONFIG='mozharness/configs/unittests/linux_unittest.py mozharness/configs/remove_executables.py' \
  -e GECKO_HEAD_REV='5e76c816870fdfd46701fd22eccb70258dfb3b0c' \
  taskcluster/desktop-test:0.4.4

Now when I run the test command, I don’t get v4l2loopback failures!

bash /home/worker/bin/test.sh --no-read-buildbot-config '--installer-url=https://queue.taskcluster.net/v1/task/c7FbSCQ9T3mE9ieiFpsdWA/artifacts/public/build/target.tar.bz2' '--test-packages-url=https://queue.taskcluster.net/v1/task/c7FbSCQ9T3mE9ieiFpsdWA/artifacts/public/build/test_packages.json' '--download-symbols=ondemand' '--mochitest-suite=browser-chrome-chunked' '--total-chunk=7' '--this-chunk=1'

In fact, I get the same failures as I did when the job originally ran :)  This is great, except that I don't have an easy way to run a test by itself, debug it, or watch the screen.  Let me go into a few details on that.

Given a failure in browser/components/search/test/browser_searchbar_keyboard_navigation.js, how do we get more information on that?  Locally I would do:

./mach test browser/components/search/test/browser_searchbar_keyboard_navigation.js

Then I would at least see if anything looks odd in the console, on the screen, etc.  I might look at the test and see where it is failing to give me more clues.  How do I do this in a Docker container?  The command above to run the tests calls test.sh, which then calls test-linux.sh as the user 'worker' (not as root).  It is important that we use the 'worker' user, as the pactl program used to find audio devices will fail as root.  What happens next is we set up the box for testing, including running pulseaudio, Xvfb, and compiz (after bug 1223123), and bootstrap mozharness.  Finally we call the mozharness script to run the job we care about, in this case 'mochitest-browser-chrome-chunked', chunk 1.  It is important to follow these details because mozharness downloads all python packages, tools, Firefox binaries, other binaries, test harnesses, and tests.  Then we create a python virtualenv to set up the python environment to run the tests, while unpacking all the files into the proper places.  Now mozharness can call the test harness (python runtests.py --browser-chrome ...).  Given this overview of what happens, it seems as though we should be able to run:

test.sh --test-path browser/components/search/test

This doesn't work because mozharness has no method for passing in a directory or single test, let alone doing other simple things that |./mach test| allows.  In fact, in order to run this single test, we need to:

  • download Firefox binary, tools, and harnesses
  • unpack them (in all the right places)
  • setup the virtual env and install needed dependencies
  • then run the mochitest harness with the dirty dozen (just too many commands to memorize)

Of course most of this is scripted, so how can we take advantage of our scripts to set things up for us?  What I did was hack test-linux.sh locally to echo the mozharness command instead of running it, and likewise hack the mozharness script to echo the test harness call instead of making it.  Here are the commands I ended up using:

  • bash /home/worker/bin/test.sh --no-read-buildbot-config '--installer-url=https://queue.taskcluster.net/v1/task/c7FbSCQ9T3mE9ieiFpsdWA/artifacts/public/build/target.tar.bz2' '--test-packages-url=https://queue.taskcluster.net/v1/task/c7FbSCQ9T3mE9ieiFpsdWA/artifacts/public/build/test_packages.json' '--download-symbols=ondemand' '--mochitest-suite=browser-chrome-chunked' '--total-chunk=7' --this-chunk=1
  • #now that it failed, we can do:
  • cd workspace/build
  • . venv/bin/activate
  • cd ../build/tests/mochitest
  • python runtests.py --app ../../application/firefox/firefox --utility-path ../bin --extra-profile-file ../bin/plugins --certificate-path ../certs --browser-chrome browser/browser/components/search/test/
  • # NOTE: you might not want --browser-chrome or the specific directory, but you can adjust the parameters used

This is how I was able to run a single directory, and then a single test.  Unfortunately that just proved that I could hack around the test case a bit and look at the output.  In docker there is no simple way to view the screen.   To solve this I had to install x11vnc:

apt-get install x11vnc

Assuming the Xvfb server is running, you can then do:

x11vnc &

This allows you to connect to the Docker container with VNC!  The problem is that you need the container's IP address, which I get from the host by doing:

docker ps #find the container id (cid) from the list

docker inspect <cid> | grep IPAddress

For me this is 172.17.0.64, and now from my host I can do:

xtightvncviewer 172.17.0.64

This is great as I can now see what is going on with the machine while the test is running!

This is it for now.  I suspect in the future we will make this simpler by doing:

  • allowing mozharness (and test.sh/test-linux.sh scripts) to take a directory instead of some args
  • create a simple bootstrap script that allows for running ./mach style commands and installing tools like x11vnc.
  • figuring out how to run a local objdir in the docker container (I tried mapping the objdir, but had GLIBC issues based on the container being based on Ubuntu 12.04)

Stay tuned for my next post on how to update your own custom TaskCluster image.  Yes, it is possible if you are patient.


https://elvis314.wordpress.com/2015/11/09/adventures-in-task-cluster-running-tests-locally/


Air Mozilla: Mozilla Weekly Project Meeting, 09 Nov 2015

Monday, November 9, 2015, 22:00

The Servo Blog: This Week In Servo 41

Monday, November 9, 2015, 19:02

In the last week, we landed 129 PRs in the Servo organization’s repositories!

James Graham, a long-time Servo contributor who has been one of the main architects of our testing strategy, now has reviewer privileges. No good deed goes unpunished!

Notable additions

  • Patrick Walton reduced the number of spurious reflows and compositor events
  • Alan Jeffrey got us a huge SpiderMonkey speed boost by using nativeRegExp
  • Martin Robinson’s layerization work has allowed him to remove the incredibly-complicated layers_needed_for_descendants handling!
  • Bobby Holley continued his work on fixing performance and correctness of restyling
  • Lars added CCACHE support and turned it on for our SpiderMonkey build, shaving a couple of minutes off the CI builder times
  • Manish made the CI system verify that the Cargo.lock did not change during the build, a common source of build woes
  • Matt Brubeck and others have been working on cleaning the libcpocalypse
  • Lars changed how the Android build works, so that now we can have a custom icon, Java code for handling intents, and debug

New Contributors

Meetings

At last week’s meeting, we discussed review carry-over, test coverage, the 2016 roadmap, rebase/autosquash for the autolander, the overwhelming PR queue, debug logging, and the CSSWG reftests.

There was also an Oxidation meeting, about the support for landing Rust/Servo components in Gecko. Though it mainly covers the needs of larger systems projects, some of the proposed Cargo features (like flat-source-tree) might also be interesting for Servo.

http://blog.servo.org/2015/11/09/twis-41/


Chris Finke: Introducing Reenact: an app for reenacting photos

Monday, November 9, 2015, 19:00

Here’s an idea that I’ve been thinking about for a long time: a camera app for your phone that helps you reenact old photos, like those seen in Ze Frank’s “Young Me Now Me” project. For example, this picture that my wife took with her brother, sister, and childhood friend:

[photo: IMG_8943]

Reenacting photographs from your youth, taking pregnancy belly progression pictures, saving a daily selfie to show off your beard growth: all of these are situations where you want to match angles and positions with an old photo. A specialized camera app could be of considerable assistance, so I’ve developed one for Firefox OS. It’s called Reenact.

The app’s opening screen is simply a launchpad for choosing your original photo.

[screenshot: intro]

The photo picker in this case is handled by any apps that have registered themselves as able to provide a photo, so these screens come from whichever app the user chooses to use for browsing their photos.

[screenshot: pick]

[screenshot: gallery]

The camera screen of the app begins by showing the original photo at full opacity.

[screenshot: capture-init]

The photo will then continually fade out and back in, allowing you to match your current pose to the old photo.

[screenshot: capture]

Take your shot and then compare the two photos before saving. The thumbs-up icon saves the shot, or you can go back and try again.

[screenshot: confirm]

Reenact can either save your new photo as its own file or create a side-by-side composite of the original and new photos.

[screenshot: save-type]

And finally, you get a choice to either share this photo or reenact another shot.

[screenshot: share]

Voila!

[photo: youngmeyoungson]

If you’re running Firefox OS 2.5 or later, you can install Reenact from the Firefox OS Marketplace, and the source is available on GitHub. I used Firefox OS as a proving ground for the concept, but now that I’ve seen that the idea works, I’ll be investigating writing Android and iOS versions as well.

What do you think? Let me know in the comments.

http://www.chrisfinke.com/2015/11/09/reenact-camera-app/


Botond Ballo: Trip Report: C++ Standards Meeting in Kona, October 2015

Monday, November 9, 2015, 18:00

Summary / TL;DR

Project | What's in it? | Status
C++14 | C++14 | Published!
C++17 | Very much in flux. Significant language features under consideration include default comparisons, operator., a unified function call syntax, coroutines, and concepts. | On track for 2017
Filesystems TS | Standard filesystem interface | Published!
Library Fundamentals TS I | optional, any, string_view and more | Published!
Library Fundamentals TS II | Source code information capture and various utilities | Voted out for balloting by national standards bodies
Concepts (“Lite”) TS | Constrained templates | Publication imminent
Parallelism TS I | Parallel versions of STL algorithms | Published!
Parallelism TS II | TBD. Exploring task blocks, progress guarantees, SIMD. | Under active development
Transactional Memory TS | Transaction support | Published!
Concurrency TS I | Improvements to future, latches and barriers, atomic smart pointers | Voted out for publication!
Concurrency TS II | TBD. Exploring executors, synchronic types, atomic views, concurrent data structures. | Under active development
Networking TS | Sockets library based on Boost.ASIO | Design review completed; wording review of the spec in progress
Ranges TS | Range-based algorithms and views | Design review completed; wording review of the spec in progress
Numerics TS | Various numerical facilities | Beginning to take shape
Array Extensions TS | Stack arrays whose size is not known at compile time | Direction given at last meeting; waiting for proposals
Reflection | Code introspection and (later) reification mechanisms | Still in the design stage, no ETA
Graphics | 2D drawing API | Waiting on proposal author to produce updated standard wording
Modules | A component system to supersede the textual header file inclusion model | Microsoft and Clang continuing to iterate on their implementations and converge on a design. The feature will target a TS, not C++17.
Coroutines | Resumable functions | At least two competing designs. One of them may make C++17.
Contracts | Preconditions, postconditions, etc. | In early design stage

Introduction

Last week I attended a meeting of the ISO C++ Standards Committee in Kona, Hawaii. This was the second committee meeting in 2015; you can find my reports on the past few meetings here (June 2014, Rapperswil), here (November 2014, Urbana-Champaign), and here (May 2015, Lenexa). These reports, particularly the Lenexa one, provide useful context for this post.

The focus of this meeting was primarily C++17. There are many ambitious features underway for standardization, and the time has come to start deciding which of them will make C++17 and which of them won't. The ones that won't will target a future standard, or a Technical Specification (which can eventually also be merged into a future standard). In addition, there are a number of existing Technical Specifications in various stages of completion.

C++17

After C++11, the committee adopted a release train model (much like Firefox’s) for new revisions of the C++ International Standard. Instead of targeting specific features for the next revision of the standard, and waiting until they are all ready before publishing the revision (the way it was done in C++11, which ended up being published 13 years after the previous major revision, C++98), new revisions are released on a three-year train (thus C++14, C++17, C++20, etc.), and a feature is either ready in time to make a particular train, or not (in which case, too bad; it can ride the next train).

As such, there isn’t a list of features planned for C++17 per se. Rather, there are many features in the works, and some will be ready to ride the C++17 train while others will not. More specifically, the committee plans to release the first draft of C++17 for balloting by national standards bodies, called the Committee Draft or CD, at the end of the June 2016 meeting in Oulu, Finland. (This is the timeline required to achieve a publication date in 2017.) This effectively means that for a feature to be in C++17, it must be voted in at that meeting at the latest.

Features already in C++17 coming into the meeting

As a recap, here are the features that have already been voted into C++17 at previous meetings:

I’d classify all of the above as “minor” except for folding expressions, which significantly increase the expressiveness of variadic template code.
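To see why folding expressions are more than a minor convenience, here is a minimal sketch of one (my example, not from the report; sum is a hypothetical name). It collapses what previously required a recursive pair of overloads into a single expression:

    #include <iostream>

    // C++17 fold expression: (args + ...) folds operator+ across the
    // entire parameter pack.
    template <typename... Ts>
    auto sum(Ts... args) {
        return (args + ...);
    }

    int main() {
        std::cout << sum(1, 2, 3.5) << '\n';  // prints 6.5
    }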

Features voted into C++17 at this meeting

Here are the features that have been voted into C++17 at this meeting:

By and large, this is also minor stuff.

Features on the horizon for C++17

Finally, the most interesting bunch: features that haven’t been voted into C++17 yet, but that people are working hard to try and get into C++17.

I’d like to stress that, while obviously the committee tries to standardize features as expeditiously as is reasonable, there is a high quality bar that has to be met for standardization, and a feature isn’t in C++17 until it’s voted in. The features I mention here all have a chance of making C++17 (some better than others), but there are no guarantees.

Concepts

The Concepts Technical Specification (informally called “Concepts Lite”) has been voted for publication in July of this year, and the release of the published TS by ISO is imminent.

The idea behind Technical Specifications is that they give implementers and users a chance to gain experience with a feature, without tying the committee’s hands by full standardization, which would make it very difficult to change the feature in non-backwards-compatible ways. The eventual fate envisioned for a TS is to be merged into a future standard, possibly with modifications motivated by feedback from implementers and users.

The question of whether the Concepts TS should be merged into C++17 thus naturally came up. This was certainly the original plan in 2012, when the decision was made to pursue Concepts Lite as a TS aimed for publication in 2014; people then envisioned the TS as being ripe for merger by C++17. Today, with the publication of the TS having slipped to 2015, and the effective deadline for getting changes into C++17 being mid-2016, the timeline looks a bit tighter.

The Concepts TS currently has one complete implementation (modulo bugs), in GCC trunk, which will make it into a release (GCC 6) next year. This currently-experimental implementation has already been used to gain experience with the feature. Most notably, the Ranges Technical Specification, which revamps significant parts of the C++ Standard Library using Concepts, has an implementation that uses Concepts directly (in contrast to the previous implementation, which emulated Concepts in C++11) and compiles with GCC trunk.

Based on this and other experience, several committee members have argued that Concepts is ripe for standardization in C++17. Others have expressed various concerns about the current design, and argued on this basis that Concepts is not ready for C++17:

  • There is some concern that the current design gives programmers too many syntactic ways of expressing the same thing, and that some of these are redundant. There is no specific proposal for removing one or more of them, but some feel that additional usage experience may give insight into what could be removed.
  • Some have argued that Concepts as a language feature, and a revamped standard library that takes advantage of Concepts, should be standardized simultaneously, to demonstrate confidence that the language feature is able to meet the needs of complex, demanding use cases such as those in the standard library. The latter is being worked on in the form of the Ranges TS, but that’s quite unlikely to make C++17, so some argue the language feature shouldn’t, either.
  • Some have expressed concern that we don’t yet have a good picture of how well the current Concepts design lends itself to separate checking of template definitions, and that it’s premature to standardize the design until we do.

This last point deserves some elaboration.

When you write a reusable piece of code, such as a function or class, you’re defining an interface between the users of the code, and its implementation. When the function or class is a template, the interface includes requirements on the types (or values) of the template parameters, which are checked at compile time.

Prior to Concepts, these requirements were implicit; you could document the requirements, and you could approximate making them explicit in code by using certain techniques like enable_if, but there was no first-class language support for making them explicit in code. As a result, violations of these requirements – either on the user side by passing template arguments that don’t meet the requirements, or on the implementation side by using the template arguments in ways that go beyond the requirements – would only be caught at instantiation time, that is, when the template is instantiated with concrete arguments for a particular use site. Compiler errors resulting from such violations typically come with long instantiation backtraces, and are notoriously difficult to understand.

Concepts Lite allows us to express these requirements explicitly in code, and to catch violations on the user side “early”, by checking the concrete template arguments passed at a use site against the requirements, without having to look at the implementation of the template. The resulting compiler errors are much easier to understand, and do not contain instantiation backtraces.
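As a sketch of what this user-side checking looks like (my example, written in the Concepts TS syntax that GCC's experimental -fconcepts mode accepts; the concept and function names are hypothetical):

    #include <vector>

    // The requirement is now part of the template's declared interface.
    template <typename T>
    concept bool EqualityComparable = requires(T a, T b) {
        { a == b } -> bool;
    };

    template <EqualityComparable T>
    bool contains(const std::vector<T>& v, const T& x) {
        for (const auto& e : v)
            if (e == x) return true;
        return false;
    }

    struct Widget {};  // no operator==
    // contains(std::vector<Widget>{}, Widget{});
    // error: constraint EqualityComparable<Widget> not satisfied --
    // reported at the call site, with no instantiation backtrace.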

However, with the current Concepts Lite design, violations of the requirements on the implementation side – that is, using the template arguments in ways that go beyond the specified requirements – continue to be caught only at instantiation time, and produce hard-to-understand errors with long backtraces. A complete Concepts design, such as the one originally proposed for C++11, includes checking the body of a template against the specified requirements independently of any particular instantiation, to catch implementation-side errors “early” as well; this is referred to as separate checking of template definitions.

Concepts Lite doesn’t currently provide for separate checking of template definitions. Claims have been made that it can be extended to support this, but as this is a difficult-to-implement feature, some would like to see stronger evidence for this (such as a proof-of-concept implementation, or a detailed description of how one would implement it) prior to standardizing Concepts Lite and thus locking us into the current design.

No decision about merging Concepts Lite into C++17 has been made at this meeting; I expect that the topic will continue to be discussed over the next two meetings, with a decision made no later than the June 2016 meeting.

Other published Technical Specifications

The first Parallelism TS, which contains parallel versions of standard algorithms and was published earlier this year, will be proposed for merger into C++17.
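For flavor, a parallel algorithm call looks like this in the spelling that eventually shipped in C++17 (the TS itself placed the execution policies under std::experimental::parallel, so treat this as illustrative):

    #include <algorithm>
    #include <execution>
    #include <vector>

    int main() {
        std::vector<int> v(1000000);
        // ... fill v ...
        // The policy argument requests a parallel sort.
        std::sort(std::execution::par, v.begin(), v.end());
    }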

The Filesystem TS, the first Library Fundamentals TS, and the Transactional Memory TS have also been published earlier this year, and could plausibly be proposed for merger into C++17, although I haven’t heard specific talk of such proposals yet.

TS’es that are still in earlier stages of the publication pipeline, such as the Concurrency TS, will almost certainly not make C++17.

Coroutines

As I’ve described in previous posts, there are three competing coroutines proposals in front of the committee:

  • A proposal for stackless coroutines, also known as resumable functions. (This is sometimes called “the await proposal” after one of the keywords it introduces.)
  • A proposal for stackful coroutines.
  • A hybrid proposal called resumable expressions that tries to achieve some of the appealing characteristics of both stackful and stackless coroutines.

I talk more about the tradeoffs between stackful and stackless coroutines in my Urbana report, and describe the hybrid approach in more detail in my Lenexa report.

In terms of standardization, at the Urbana meeting the consensus was that stackful and stackless coroutines should advance as independent proposals in the form of a Technical Specification, with a possible view of unifying them in the future. Developments since then can be summed up as follows:

  • The stackless / resumable functions / await proposal has advanced to a stage where it is fully fleshed out, has standard wording, and the Core Working Group has begun reviewing the standard wording.
  • Purely stackful coroutines can be used today as a library-only feature (see e.g. Boost.Coroutine); as such, there is less of a pressing need to standardize them than designs that require language changes. (A sketch of the library-only approach follows this list.)
  • Attempts to achieve a unified proposal are still very much in the design stage. The most recent development on this front is that the author of the “resumable expressions” proposal, Chris Kohlhoff, has decided to abandon the syntax of his proposal, and instead join forces with the authors of the stackful coroutines proposal to come up with an attempt at a unified proposal, where the syntax would be similar to the one in the stackful proposal, but there would be provisions for the compiler to transform coroutines into a stackless form where possible as an optimization.
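To make the library-only point concrete, here is a minimal sketch of a stackful coroutine using Boost.Coroutine2 (my example; note that it needs no language changes at all):

    #include <boost/coroutine2/all.hpp>
    #include <iostream>

    int main() {
        using coro = boost::coroutines2::coroutine<int>;
        // The lambda runs on its own stack; each call to sink(i)
        // suspends it and hands a value back to the caller.
        coro::pull_type numbers([](coro::push_type& sink) {
            for (int i = 0; i < 3; ++i)
                sink(i);
        });
        for (int n : numbers)  // pull_type is iterable
            std::cout << n << '\n';
    }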

Given this state of affairs, and particularly the advanced stage of the await proposal, the question of whether it should be standardized in C++17, rather than a TS, came up. A poll was held on this topic in the Evolution Working Group, with the options being (1) standardizing the await proposal in C++17, and (2) having a TS with both proposals as originally planned. There wasn’t a strong consensus favouring either of these choices over the other; opinion was mostly divided between people who felt we should have some form of coroutines in C++17, and people who felt there was still room for iteration on and convergence between the proposals, making a TS the more appropriate vehicle.

For now, the Core Working Group will continue reviewing the wording of the await proposal; the ship vehicle will presumably be decided by a vote of the full committee at a future meeting.

Contracts

Contracts are a way to express preconditions, postconditions, and other runtime requirements in code, with a view toward opening up use cases such as:

  • Optionally checking the requirements at runtime, and handling their violation in some way.
  • Exposing the requirements to tools such as an analyzer that might attempt to check some of them statically.
  • Exposing the requirements to optimizers that might make assumptions (such as assuming that the requirements are met) when optimizing. (A sketch of the idea follows this list.)
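No agreed-upon contracts syntax existed at the time of writing, so purely as a sketch of the idea, here is the assertion-style emulation such a feature would replace (isqrt is a hypothetical example of mine):

    #include <cassert>

    // Precondition: x must be non-negative. Today this is only a runtime
    // check; a contracts feature would make it part of the declared
    // interface, visible to tools and optimizers as well.
    int isqrt(int x) {
        assert(x >= 0);
        int r = 0;
        while (1LL * (r + 1) * (r + 1) <= x)
            ++r;
        assert(1LL * r * r <= x);  // postcondition
        return r;
    }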

Initial proposals in the area tended to cater to one or more of these use cases to the detriment of others; guidance from the committee was to aim for a unified proposal. While no such unified proposal has been written yet, the authors of the original proposal and other interested parties have been hard at work trying to solve some of the technical problems involved. I give some details further down.

Given the relatively early stage of these efforts, it’s in my opinion unlikely that they will result in a proposal that makes it into C++17. However, it’s not out of the question, if a written proposal is produced for the next meeting, and its design and wording reviews go sufficiently smoothly.

Operator Dot

Overloading operator dot (the member access operator) allows new forms of interface composition that weren’t possible before.

A proposal for doing so was approved by EWG at the previous meeting; it’s currently pending wording review by CWG.
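As a sketch of the approved design (this is proposed syntax that no shipping compiler accepts; logged_ref is a hypothetical example of mine):

    // Proposed operator dot: member access on logged_ref<T> that doesn't
    // name one of its own members is forwarded to the object returned
    // by operator.().
    template <typename T>
    class logged_ref {
        T& target;
    public:
        explicit logged_ref(T& t) : target(t) {}
        T& operator.() { /* e.g. record the access */ return target; }
    };

    // std::string s;  logged_ref<std::string> r{s};
    // r.size() would mean r.operator.().size(), i.e. s.size().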

Another proposal that brings even more expressive power was looked at this meeting; it was sent to the Reflection Study Group as the abilities it unlocks effectively constitute a form of reflection.

The original proposal is slated to come up for a vote to go into C++17 once it passes wording review; however, some object to it on the basis that reflection facilities (such as those that might be produced by iterating on the second proposal) would supersede it. Therefore, I would classify its fate in C++17 as uncertain at this point.

Other Language Features

Beyond the big-ticket items mentioned above, numerous smaller language features may be included in C++17. For a list, please refer to the “Evolution Working Group” section below. Proposals approved by EWG at this meeting are fairly likely to make C++17, as they only need to go through wording review by the Core Working Group before being voted into C++17. Proposals deemed by EWG to need further work face a bit of a higher hurdle, as an updated proposal would need to go through both design and wording reviews before being voted into C++17.

What about Modules?

Modules are possibly the single hottest feature on the horizon for C++ right now. Everyone wants them, and everyone agrees that they will solve very significant problems ranging from code organization to build times. They are also a very challenging feature to specify and implement.

There are two work-in-progress Modules implementations: one in Microsoft’s compiler, and one in Clang, developed primarily by Google. The two implementations take slightly different approaches, and the implementers have been working hard to try to converge their designs.

The Evolution Working Group held a lengthy design discussion about Modules, which culminated in a poll about which of two possible ship vehicles is more appropriate: C++17, or a Technical Specification. A Technical Specification had a stronger consensus, and this is what is currently being pursued. This means Modules are not currently slated for inclusion in C++17. It’s not inconceivable for this to change, if the implementors make very significant progress before the next meeting and convince the committee to change its mind; in my opinion, that’s unlikely to happen.

I talk about the technical issues surrounding Modules in more detail below.

What about everything else?

Major language features that I haven’t mentioned above, such as reflection, aren’t even being considered for inclusion in C++17.

Evolution Working Group

As usual, I spent practically all of my time in the Evolution Working Group, which spends its time evaluating and reviewing the design of proposed language features.

EWG categorizes incoming proposals into three rough categories:

  • Approved. The proposal is approved without design changes. Such proposals are sent on to CWG, which revises them at the wording level, and then puts them in front of the committee at large to be voted into whatever IS or TS they are targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Accepted proposals:

  • Inline variables allow declaring namespace-scope variables and static data members as inline, in which case the declaration counts as a definition. (The declaration must then include any initializer.) This is analogous to inline functions, and spares the programmer from having to provide an out-of-line definition in a single translation unit (which can force an otherwise header-only library to no longer be header-only, among other annoyances). The proposal was accepted, with two notable changes. First, to reduce verbosity, constexpr will imply inline. Second, namespace-scope variables will only be allowed to be inline if they are also const. The motivation for the second change is to discourage proliferation of mutable global state; it passed despite objections from some who thought it was complicating the rules with little benefit.
  • A proposal to allow lambdas to appear in a constexpr context. These have good use cases, and there isn’t any compelling motivation to disallow them; the proposal had near unanimous support.
  • Removing dynamic exception specifications, i.e. constructs of the form throw() at the end of a function declaration. These have been deprecated since C++11 introduced noexcept. throw() is kept as a (deprecated) synonym for noexcept(true).
  • An extension to aggregate initialization that allows aggregate initialization of types with base classes. The base classes are treated as subobjects, in order of declaration, preceding the class’s own members. Not allowed for classes with virtual bases.
  • constexpr_if, which was called static_if in previous iterations of the proposal. It’s like an if statement, but its condition is evaluated at compile time, and if it appears in a template, the branch not taken is not instantiated. This is neat because it allows code like constexpr_if (/* T is refcounted */) { /* do something */ } constexpr_else { /* do something else */ }; currently, things like this need to be accomplished more verbosely via specialization. (A combined sketch of constexpr_if and inline variables follows this list.)
  • Guaranteeing copy elision in certain contexts. Copy elision refers to the compiler eliding (i.e. not performing) a copy or move of an object in some situations. It differs from a pure optimization in that the compiler is allowed to do it even if the elided copy or move constructor has side effects. Every major compiler does this, but it’s not mandatory, and as a result, the language requires the type whose copy or move is elided to still be copyable or movable. This precludes some useful patterns, such as writing a factory function for a non-copyable, non-movable type. This proposal rectifies the problem by requiring that copy elision be performed in certain contexts (specifically, when a temporary object is used to initialize another object; this happens when returning a temporary from a function, initializing a function parameter with a temporary, and throwing a temporary as an exception), and removing the requirement that types which are only notionally copied or moved in those circumstances, be copyable or movable. (A sketch of this appears below, after the list.)
  • A proposal to allow an empty enumeration (that is, one with no enumerators) with a fixed underlying type to be constructed from any integer in the range of its underlying type using the EnumName{42} syntax. This was already allowed using the EnumName(42) syntax, but it was considered a narrowing conversion which is not allowed with the {} syntax. This allows using an enumeration as an opaque/strong typedef for an integer type more effectively. It passed despite objections that a full opaque typedefs proposal would make using enums for this purpose unnecessary.
  • A proposal to specify the order of evaluation of operands for all expressions. This is a breaking change, but one people agree we need to make, because not specifying the order of evaluation leads to a lot of subtle bugs.
  • Unified function call syntax, in one direction only: f(x, y) can resolve to x.f(y) if regular name lookup finds no results for f. This is #3 out of the six design alternatives presented in the original proposal. It satisfies the proposal’s primary use case of making it easier to write generic code using Concepts, by allowing concepts to require the non-member syntax to work (and have template implementations use the non-member syntax), while a type can be made to model a concept using either member or non-member functions. The other direction (x.f(y) resolving to f(x, y) if member name lookup finds no results for f) was excluded because it was too controversial, as it enabled writing rather brittle code that’s susceptible to “action at a distance”.
  • A proposal to disallow unary folds of some operators over an empty parameter pack was approved by EWG, with the modification that it should be disallowed for all operators. However, the proposal failed to achieve consensus at the plenary meeting, and will not be moving forward at this time.
  • Several modifications to the await proposal (a.k.a. resumable functions / stackless coroutines). Most notably, EWG settled on the keyword choices co_await, co_yield, and co_return; the proposed alternative of “soft keywords” that tried to allow using await and yield without making them keywords, was rejected on the basis that “the difficulty of adding keywords to C++ is a feature”. The various modifications listed in this paper and this one were also accepted.
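Here is a small combined sketch of inline variables and constexpr_if (my example, written with the spellings that eventually shipped in C++17, where constexpr_if became if constexpr; version and describe are hypothetical names):

    #include <string>
    #include <type_traits>

    // Inline variable: can live in a header; the declaration is also
    // the definition, with no out-of-line definition needed elsewhere.
    inline const std::string version = "1.0";

    template <typename T>
    std::string describe(const T& value) {
        if constexpr (std::is_integral<T>::value) {
            // In a template, the branch not taken is not instantiated,
            // so to_string is never even considered for other types.
            return "integral: " + std::to_string(value);
        } else {
            return "something else (" + version + ")";
        }
    }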

(I didn’t list features which also passed CWG review and were voted into C++17 at this meeting; those are listed in the the C++17 section above.)
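And a sketch of what guaranteed copy elision permits (my example; well-formed under the accepted proposal, but ill-formed in C++14 because std::mutex is neither copyable nor movable):

    #include <mutex>

    // The returned temporary directly initializes the caller's object;
    // no copy or move constructor is required, so a factory function
    // for an immovable type becomes possible.
    std::mutex make_mutex() {
        return std::mutex{};
    }

    std::mutex m = make_mutex();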

Proposals for which further work is encouraged:

  • Allowing lambdas declared at class scope to capture *this by value. Currently, capturing this captures the containing object by reference; in situations where capture by value is desired (for example, because the lambda can outlive the object), a temporary copy of the object has to be made and then captured, which is easy to forget to do. The paper proposes introducing two new forms in the capture list: *this, to mean “capture the containing object by value”, and *, to mean “capture all referenced variables including the containing object by value”. EWG advised going forward with the first one only (* looks like “capture by pointer”, and we don’t need a third kind of default capture).
  • Dynamic allocation with user-specified alignment. Currently, the user can specify custom alignment for a type using alignas(N), but this does not affect dynamic allocation; the proposal makes it do so. EWG agreed that this should be fixed, but there were some concerns about backward-compatibility; the proposal author will iterate on the proposal to address these concerns.
  • The C++ standard library includes parts of the C standard library by reference. Currently, the referenced version of C is C99. EWG looked at a proposal to update the referenced version to C11. EWG encouraged this, and further suggested that the topic of compatibility between C++ and C in two areas, threads and atomics, be explored.
  • A proposal to obtain the ability to create opaque aliases (a form of typedefs that create a new type) by adding two new language features: function aliases, and allowing inheritance from built-in types. The idea is that the mechanism of creating an opaque alias would be derivation; to allow aliases of built-in types, inheritance from built-in types would be allowed. Function aliases, meanwhile, would provide a mechanism for lifting the operations of the base class into the derived class, so that if you e.g. inherit from int, the inherited operator + could be “aliased” to take and return your derived type instead. EWG liked the idea of function aliases, and encouraged developing it into an independent proposal. Regarding opaque aliases, EWG felt inheritance wasn’t the appropriate mechanism; in particular, deriving from built-in types opens up a can of worms (e.g. “are the operations on int virtual?”). Instead, wrapping the underlying type as a member should be explored as the mechanism for creating opaque aliases. This can already be done today, if you define all the wrapped operations yourself; the language should make that easier to do. It was pointed out that reflection may provide the answer to this.
  • A different approach to overloading “operator dot” where, in the presence of an overloaded operator ., obj.member would resolve to obj.operator.(f), where f is a compiler-generated function object that accepts an argument a of any type, and returns a.member. (Similarly, obj.func(args) would be transformed in the same way, with the function object returning a.func(args)). This proposal has more expressive power than the existing “operator dot” proposal, allowing additional patterns like a form of duck typing (see the paper for a list). EWG liked the abilities this would open up, but wasn’t convinced that “operator dot” was the right spelling for such a feature. In addition, it was pointed out that a lot of these abilities fall under the purview of reflection; EWG recommended continuing to pursue the idea in the Reflection Study Group.
  • A proposal to allow initializer lists to contain movable elements. EWG didn’t find anything objectionable about this per se, but didn’t feel it was sufficiently motivated, and encouraged the author to return with more compelling motivating examples.
  • A proposal to standardize a [[pure]] attribute, which would apply to a function and indicate that the function had no side effects. Everyone wants some form of this, but people disagree on the exact semantics, and how to specify them. The prevailing suggestion seemed to be to specify the behaviour in terms of the “abstract machine” (an abstraction used in the C++ standard text to specify behaviour without getting into implementation-specific details), and to explore standardizing two attributes with related semantics: one to mean “the function can observe the state of the abstract machine, but not modify it”, and another to mean “the function cannot observe the state of the abstract machine, except for its arguments”. To illustrate the difference, a function which reads the value of a global variable (which could change between invocations) would satisfy the first condition, but not the second; such a function has no side effects, but different invocations can potentially return different values. GCC has (non-standard) attributes matching these semantics, [[gnu::pure]] and [[gnu::const]], respectively.
  • A proposal to allow non-type template parameters of deduced type, spelt template <auto N>. This was almost approved, but then someone noticed a potential conflict with the Concepts TS. In the Concepts TS, auto is treated as a concept which is modelled by all types, and in most contexts where auto can be used, so can a concept name. Extending those semantics to this proposal, the meaning of template <ConceptName N> ought to be “a template with a single non-type template parameter whose type is deduced but must satisfy the concept ConceptName”. The problem is, template <ConceptName N> currently has a different meaning in the Concepts TS: “a template with a single type template parameter which must satisfy ConceptName”. EWG encouraged the proposal author to work with the editor of the Concepts TS to resolve this conflict, and to propose any resulting feature for addition into the Concepts TS.
  • A revised version of a proposal to allow template argument deduction for constructors. This would allow omitting the template argument list from the name of a class when constructing it, if the template arguments can be deduced from the constructor. The proposal contained two complementary mechanisms for performing the deduction. The first is to perform deduction against the set of constructors of the primary template of the class, as if they were non-member functions and their template parameters included those of the class. The second is to introduce a new construct called a “canonical factory function”, which would be outside the class, and would look something like this:

    template <typename Iter>
    vector(Iter begin, Iter end) -> vector<ValueTypeOf<Iter>>;

    The meaning of this is “if vector is constructed without explicit template arguments, and the constructor arguments have type Iter, deduce vector’s template argument to be ValueTypeOf<Iter>”. The proposal author recommended allowing both forms of deduction, and EWG, after discussing both at length, agreed; the author will write standard wording for the next meeting.
  • Extend the for loop syntax to run different code when the loop exits early (using break) than when it exits normally. This avoids needing to save enough state outside the loop’s scope to be able to test how the loop exited, which is particularly problematic when looping over ranges with single-pass iterators. EWG agreed this problem is worth solving, but thought the proposal wasn’t nearly well baked enough. A notable concern was that the proposed syntax, for (...) { /* loop body */ } { /* normal exit block */ } else { /* early exit block */ }, had the reverse of the semantics of Python’s existing for ... else syntax, in which else denotes code to be run if the loop ran to completion.
  • A proposal to allow something like using-declarations inside attributes. The proposed syntax was [[using(ns), foo, bar, baz]], which would be a shorthand for [[ns::foo, ns::bar, ns::baz]]. EWG liked the idea, but felt some more work was necessary to get the lookup rules right (e.g. what should happen if an attribute name following a using is not found in the namespace named by the using).
  • [[unused]], [[nodiscard]], and [[fallthrough]] attributes, whose meanings are roughly “if this entity isn’t used, that’s intentional”, “it’s important that callers of this function use the return value”, and “this switch case deliberately falls through to the next”. The purpose of the attributes is to allow implementations to give or omit warnings related to these scenarios more accurately; they all exist in the wild as implementation-specific attributes, so standardizing them makes sense. EWG liked these attributes, but slightly preferred the name [[maybe_unused]] for the first, as [[unused]] might wrongly suggest the semantics “this should not be used”. The notion that [[nodiscard]] should be something other than an attribute (such as a keyword) so that the standard can require a diagnostic if the result of a function so marked is discarded, came up but was rejected.
  • A proposal for de-structuring initialization, that would allow writing auto {x, y, z} = expr; where the type of expr was a tuple-like object, whose elements would be bound to the variables x, y, and z (which this construct declares). “Tuple-like objects” include std::tuple, std::pair, std::array, and aggregate structures. The proposal lacked a mechanism to adapt a non-aggregate user-defined type to be “tuple-like” and work with this syntax; EWG’s feedback was that such a mechanism is important. Moreover, EWG recommended that the proposal be expanded to allow (optionally) specifying types for x, y, and z, instead of having their types be deduced. (A sketch of the syntax as it eventually shipped follows this list.)
  • A paper outlining a strategy for unifying the stackless and stackful coroutine proposals. The paper argued that the stackless/stackful distinction is focusing in on one dimension of the design space (the call stack implementation), while there are a number of other dimensions, such as forward progress guarantees, thread-local storage, and lock ownership; it further observed that coroutines have a lot in common with threads, fibres, task-region task, and other similar constructs – collectively, “threads of execution” – and that unification should be sought across all these dimensions. EWG encouraged the author to come back with a fleshed-out proposal.
  • EWG looked at some design issues that came up during wording review of the default comparisons proposal. The most significant one concerned the name lookup rules for auto-generated comparisons. The current wording effectively lexically expands a comparison like a == b into something like a.foo == b.foo && a.bar == b.bar at each call site, performing lookup for each member’s comparison operator at the call site. As these lookups can yield different results for different call sites, the comparison can have different semantics at different call sites. People didn’t like this; several alternatives were proposed along the lines of generating a single, canonical comparison operator for a type, and using it at each call site. An updated proposal that formalizes one of these alternatives is expected at the next meeting.
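Here is a small sketch of de-structuring initialization using the spelling that eventually shipped in C++17, where the proposal's braces became square brackets (my example):

    #include <tuple>

    std::tuple<int, double, const char*> lookup() {
        return {42, 3.14, "hello"};
    }

    int main() {
        // Declares id, score and name, bound to the tuple's elements.
        auto [id, score, name] = lookup();
        return id;
    }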

Rejected proposals:

  • Making void be an object type, so that you can use it the way you can use any other object type (that is, you can instantiate it, form a reference to it, and do other things currently forbidden for void). The motivation was to ease the writing of generic code, which often needs to specialize templates to handle void correctly. EWG agreed the problem was worth solving, but didn’t like this approach; the strongest objections revolved around situations where the inability to use void in a particular way is an important safeguard, such as trying to delete a void*, or trying to perform pointer arithmetic on one, both of which would have to be allowed (or otherwise special-cased) under this scheme.
  • Extension methods, which would allow the first parameter of a non-member function to be named this, and allow calling such functions using the member function call syntax. This is essentially an opt-in version of the “x.f(y) can resolve to f(x, y)” half of the unified function call syntax proposal, which is the half that didn’t pass. It was shot down mostly for the same reasons that the full (no opt-in) version was (concerns about making code more brittle), but even proponents of a full unified call syntax opposed it because they felt an opt-in policy was too restrictive (not allowing existing free functions to be called with a member syntax), and that the safety objectives that motivated opt-in could be better accomplished through Modules.
  • A revised version of the generalized lifetime extension proposal (originally presented in Urbana) that would extend C++’s “lifetime extension” rules. These rules specify that if a local reference variable is bound to a temporary, the lifetime of the temporary (which would normally end at the end of the statement) is extended to match the lifetime of the local variable. The proposal would extend the rules to apply to temporaries appearing as subexpressions of the variable’s initializer, if the compiler can determine that the result of the entire initializer expression refers to one of these temporaries. To make this analysis possible for initializers that contain calls to separately-defined functions, annotations on function parameters (that essentially say whether the result of the function refers to the parameter) would be required. EWG’s view was that the problem this aims to solve (a category of subtle object lifetime issues) is not big enough to warrant requiring such annotations. It’s worth noting that Microsoft is developing a tool that would use annotations of a very similar kind (with well-chosen defaults to reduce verbosity) to warn about such object lifetime issues.
  • noexcept(auto). This wasn’t so much a rejection, as a statement from the proposal author that he does not intend to pursue this proposal further, in spite of EWG having consensus for it at the last meeting. A notable reason for this change in course was the observation that authors of generic code who would benefit most from this feature, often want the function’s return expression to appear in the declaration (such as in the noexcept-specification) for SFINAE purposes.
  • Relaxing a rule about a particular use of unions in constant expressions. If all union members share a “common initial sequence”, then C++ allows accessing something in this sequence through any union member, not just the active one. In constant expressions, this is currently disallowed; the proposal would allow it. Rejected because an implementer argued that it would have a significant negative impact on the performance of their constant expression evaluation.
  • Replacing std::uncaught_exceptions(), previously added to C++17, with a different API that the author argued is harder to misuse. EWG didn’t find this compelling, and there was no consensus to move forward with it.

A proposal for looping simultaneously over multiple ranges was not heard because there was no one available to present it.

Modules

I talked above about the target ship vehicle for Modules being a Technical Specification rather than C++17. Here I’ll summarize the technical discussion that led to this decision.

Modules enable C++ programmers to fundamentally change how they structure their code, and derive many benefits as a result, ranging from improved build times to better tooling. However, programmers often don’t have the luxury of making such fundamental changes to their existing codebases, so there is an enormous demand for implementations to support paradigms that allow reaping some of these benefits with minimal changes to existing code. A big open question is, to what extent should this set of use cases – transitioning existing codebases to a modular world – influence the design of the language feature as standardized, versus being provided for in implementation-specific extensions. Having different answers for this question is the largest source of divergence between the two current Modules implementations.

The most significant issue in which this question manifests itself is whether modules should “carry” macros; that is, whether macros should be among the set of semantic entities that one module can export and another import.

There are compelling arguments on both sides. On the one hand, due to their nature (being preprocessor constructs, handled during a phase of translation when no syntactic or semantic information is yet available), macros hugely complicate the analysis of C++ code by tools. The vast majority of their uses cases now have non-preprocessor-based alternatives, from const variables and inline functions introduced back in the days of C, to reflection features like source code information capture (to replace things like __FILE__) being standardized today. They are widely viewed as a scourge on the language, and a legacy feature that has no place in new code and does not deserve consideration when designing new language features. As a result, many argue that Modules should not have first-class support for macros. This is the position reflected in Microsoft’s implementation. (It’s important to note that Microsoft’s implementation does have a mechanism for dealing with macros to support transitioning existing codebases (specifically, there exists a compiler flag that can be used when compiling a module that, in addition to generating a module interface file, generates a “side header” containing macro definitions that a consumer of the module can #include in addition to importing the module), but this is strictly an extension and not part of the language feature as they propose it.)
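For concreteness, here is a rough sketch of what a module looks like under this kind of design (my example; the spelling follows the 2015-era Microsoft proposal and changed several times before standardization, so treat it as illustrative only):

    // math.ixx -- module interface unit (hypothetical file name)
    module math;
    export int square(int x) { return x * x; }
    #define MATH_MAX 100  // under this design, NOT visible to importers

    // main.cpp
    import math;          // brings in square(), but no macros
    int main() { return square(7); }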

On the other hand, practically all existing large codebases include components that use macros in their interfaces – most notably system headers – and this is unlikely to change in the near future. (As someone put it, “no one is going to rewrite all of POSIX to not use macros any time soon”.) To allow modularizing such codebases in a portable way, many argue that it’s critical that Modules have first-class, standardized support for macros. This is the position reflected in Clang’s implementation.

Factoring into this debate is the state of progress of the two implementations. Microsoft claims to have a substantially complete implementation of their design (where modules do not carry macros), to be released as part of Visual C++ 2015 Update 1, and has submitted a paper with standard wording to the committee. The Clang folks have not yet written such a paper or wording for their design (where modules do carry macros), because they feel their implementation is not yet sufficiently complete that they can be confident that the design works.

EWG discussed all this, and recognized the practical need to support macros in some way, while also recognizing that there is a lot of demand to have some form of Modules available for people to use as soon as possible. A preference was expressed for standardizing Microsoft’s “ready to go” design in some form, while treating the additional abilities conferred by Clang’s design (namely, modules carrying macros) as a possible future extension. The decision to pursue Microsoft’s design as a Technical Specification rather than in C++17 was made primarily because at this stage, we cannot yet be confident that Clang’s design can be expressed as a pure, no-breaking-changes extension to Microsoft’s design. A Technical Specification comes with no guarantee that future standards will be compatible with it, making it the safer and more appropriate choice of ship vehicle.

Later in the week, there was an informal evening session about the implementation of Modules. Many interesting topics were brought up, such as whether the result of compiling a module is suitable as a distribution format for code, and what the implications of that are for things like DRM. Such topics are strictly out of scope of standardization by the committee, but it’s good to hear implementers are thinking about them.

Contracts

Two evening sessions were held on the topic of contract programming, to try to make progress towards standardization. Building on the consensus from the previous meeting that it should be possible to express contracts both in the interface of a function (such as a precondition stated in the declaration) and in the implementation (such as an assertion in the function body), the group tackled some remaining technical challenges.

The security implications of having a global “contract violation handler” which anyone can install, were discussed: it opens up an attack vector where malicious code sets a handler and causes the program to violate a contract, leading to execution of the handler. It was observed that two existing language features, the “terminate handler” (called when std::terminate() is invoked) and the “unexpected handler” (called when an exception is thrown during stack unwinding) have a similar problem, and protection mechanisms employed for those can be applied to the contract violation handler as well.
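For comparison, this is how the analogous existing hook, the terminate handler, is installed; any code in the process can do this, which is exactly the attack surface being discussed for a global contract violation handler:

#include <cstdlib>
#include <exception>
#include <iostream>

int main() {
  // Any code in the process can swap in its own handler.
  std::set_terminate([] {
    std::cerr << "terminate handler invoked\n";
    std::abort();
  });
  std::terminate();  // invokes the installed handler
}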

The thorniest issue was the interaction between a possibly-throwing contract violation handler and a noexcept function: what should happen if a noexcept function has a precondition, the program is compiled in a mode where preconditions are checked, the precondition is checked and fails, the contract violation handler is invoked, and it throws an exception? The possibility of “just allowing it” was considered but rejected, as it would be very problematic for noexcept to mean “does not throw, unless the precondition fails” (code relying on the noexcept would be operating on a possibly-incorrect assumption). The possibilities of not allowing contract violation handlers to throw, and of not allowing preconditions on noexcept functions, were also considered but discarded as being too restrictive. The consensus in the end was, noexcept functions can have preconditions, and contract violation handlers can throw, but if an exception is thrown from a contract violation handler while checking the preconditions of a noexcept function, the program will terminate. This is consistent with the more general rule that if an exception is thrown from a noexcept function, the program terminates.
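The general rule referenced at the end is easy to demonstrate with today's C++, even without any contract syntax:

#include <stdexcept>

void f() noexcept {
  // Stands in for a throwing contract violation handler running
  // during the precondition check of a noexcept function:
  throw std::runtime_error("precondition failed");  // std::terminate() is called
}

int main() { f(); }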

The question of whether a contract violation handler should be allowed to return, i.e. neither throw nor terminate but allow execution of the contract-violated function to proceed, came up, but there was no time for a full discussion on this topic.

Concepts Design Review

When the Concepts TS was balloted by national standards bodies, some of the resulting ballot comments were deferred as design issues to be considered for the next revision of the TS (or its merger into the standard, whichever happens first). EWG looked at these issues at this meeting. Here’s the outcome for the most notable ones:

  • A suggestion to remove terse notation (where void foo(ConceptName c) declares a constrained template function) was rejected on the basis that user feedback about this feature so far has been mostly positive.
  • A suggestion to remove one or more of the four current ways of declaring a constrained template function (the four forms are sketched just after this list) was rejected on the basis that no specific proposal as to what to remove has been made; the comment authors are welcome to write such a specific proposal if they wish.
  • Unifying the two ways of declaring concepts (as a variable and as a function) was discussed. EWG agreed that this is a worthy goal, but there is no specific proposal on the table. (“Just remove function concepts” isn’t a viable approach because variable concepts cannot be overloaded on the kind and arity of their template parameters, and such overloading is considered an important use case.)
  • Allowing the evaluation of a concept anywhere (such as in a static_assert or a constexpr_if), not just in a requires-clause, was approved.
  • A suggestion to add syntax for same-type constraints (in addition to the existing syntax for “convertible-to” constraints) was rejected on the basis that same-type constraints can easily be expressed as convertible-to constraints with the use of a simple helper concept (e.g. { expr } -> Same).
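For reference, here is what the four declaration forms mentioned above looked like in the Concepts TS as it stood at the time (the concept and function names are made up, and this compiled only with an experimental implementation such as GCC's -fconcepts):

template <typename T>
concept bool Sortable = requires(T& c) { c.begin(); c.end(); };

// 1. Explicit requires-clause.
template <typename T> requires Sortable<T> void sort(T& c);

// 2. Constrained template parameter.
template <Sortable T> void sort(T& c);

// 3. Concept introducer.
Sortable{T} void sort(T& c);

// 4. Terse notation (abbreviated function).
void sort(Sortable& c);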

See also the section above where I talk about the bigger picture about Concepts and the possibility of the TS being merged into C++17.

Library / Library Evolution Working Groups

Having spent practically all of my time in EWG, I didn’t have much of a chance to follow developments on the library side of things, but I’ll summarize what I’ve gathered during the plenary sessions.

I already listed library features accepted into C++17 at this meeting (and at previous meetings) above.

Library Fundamentals TS

The first revision of the Library Fundamentals TS was recently published and could be proposed for merger into C++17.

The second revision came into the meeting containing the following:


Matt Thompson: GitDone

Monday, 09 November 2015, 12:15

GitDone logo.001

GitDone aims to make GitHub Issues easier to use. It’s a simple browser extension that streamlines the GitHub Issues interface for new users and non-developers — and can add handy features and shortcuts that project managers and Task Rabbits will love.

GitDone was born at the 2015 Mozilla Festival. It was one of twelve ideas our session came up with for making GitHub easier and more powerful for new audiences. Phillip Smith and the heroic Darren Mothersele then spent the weekend hacking on an early prototype.

Right now it’s just a bare-bones proof of concept for developers. We’re hoping the next 0.2 release will start to deliver enough value that we can begin testing and human trials with actual project managers and novice GitHub Issues users.

Getting down with GitHub session

Who is GitDone for?

  • Project Managers
  • Non-technical users. People new to GitHub or GitHub Issues.
  • Anyone looking for a fast, easy task manager. Or an issue tracker that both technical and non-technical audiences will like and use.

Why GitDone?

Hi I'm a PM.001

In many ways, GitHub Issues is like a “product unto itself.” For a growing segment of users, it’s the only part of GitHub they regularly use.

Project managers like me and my team-mates at Mozilla love it. Of all the issue trackers out there (Bugzilla, Asana, etc.), it’s arguably the easiest to use. And it has a huge added benefit: it’s the same tool our developer colleagues are already using — a major bonus.

The challenge is…

  • GitHub can be intimidating for new users. On first use, GitHub looks and smells like a place that’s “just for developers.” That can make it harder for project managers to explain and get their non-technical team-mates to embrace.
  • GitHub’s language and terminology is foreign. For project managers or people just trying to get tasks done, terms like “repos” and “pull requests” don’t make sense.
  • Routine tasks take too many clicks. It’d be nice to get common features like “create new task” or “see all my tasks” into a toolbar that’s always close to hand. And some features are hard to find in the current GitHub interface. (e.g., “How do I add a new label?”)

By hiding some of GitHub’s complexity for “Task Rabbits,” our hope is that we can make it easier for new users to understand and get started. We can also make UI tweaks, translate terms and add shortcuts that make project managers’ lives easier.

distracting.001

GitDone 0.1 and beyond

We just released the GitDone 0.1 “cupcake” release — a bite-sized proof of concept. We’d love your ideas and help with release 0.2, which we can then test with some real-life project managers, get feedback, and see whether GitDone 0.3 should… get done.

GitDone 0.1 cupcake.001

GitDone 0.1 focused on:

  • Removing stuff that’s confusing. Hide parts of the GitHub Issues interface that are confusing for new users.
  • Providing some tool buttons for commonly performed tasks. A lot of value can be provided just by pulling helpful GitHub functions and links into a toolbar.

GitDone 0.2 will:

  • Add stuff that’s helpful. Add elements to the interface that project managers will love. (Like: displaying labels and milestones right on the project page, adding a “new label” button right on the label list, etc.)
  • Then: test with humans. Gather feedback. Decide whether (and what) to build for release 0.3.

What’s the ultimate goal?

  1. Make a simpler issues management experience
  2. Provide a gateway drug / on-ramp into the rest of GitHub for new users
  3. Help non-developers interact with developers productively on a shared platform

http://openmatt.org/2015/11/09/gitdone/


Daniel Pocock: debian.org RTC: announcing XMPP, SIP presence and more

Monday, 09 November 2015, 10:57

Announced 7 November 2015 on the debian-devel-announce mailing list.

The Debian Project now has an XMPP service available to all Debian Developers. Your Debian.org email identity can be used as your XMPP address.

The SIP service has also been upgraded and now supports presence. SIP and XMPP presence, rosters and messaging are not currently integrated.

The Lumicall app has been improved to enable rapid setup for Debian.org SIP users.

This announcement concludes the maintenance window on the RTC services. All services are now running on jessie (using packages from jessie-backports).

XMPP and SIP enable a whole new world of real-time multimedia communications possibilities: video/webcam, VoIP, chat messaging, desktop sharing and distributed, federated communication are the most common use cases.

Details about how to get started and get support are explained in the User Guide in the Debian wiki. As it is a wiki, you are completely welcome to help it evolve.

Several of the people involved in the RTC team were also at the Cambridge mini-DebConf (7-8 November).

The password for all these real time communication services can be set via the LDAP control panel. Please note that this password needs to be different to any of your other existing debian.org passwords. Please use a strong password and please keep it secure.

Some of the infrastructure, like the TURN server, is shared by clients of both SIP and XMPP. Please configure your XMPP and SIP clients to use the TURN server for audio or video streaming to work most reliably through NAT.

A key feature of both our XMPP and SIP services is that they support federated inter-connectivity with other domains. Please try it. The FedRTC service for Fedora developers is one example of another SIP service that supports federation. For details of how it works and how we establish trust between domains, please see the RTC Quick Start Guide. Please reach out to other communities you are involved with and help them consider enabling SIP and XMPP federation of their own communities/domains: as Metcalfe's law suggests, each extra person or community who embraces open standards like SIP and XMPP has far more than just an incremental impact on the value of these standards and makes them more pervasive.

If you are keen to support and collaborate on the wider use of Free RTC technology, please consider joining the Free RTC mailing list sponsored by FSF Europe. There will also be a dedicated debian-rtc list for discussion of these technologies within Debian and derivatives.

This service has been made possible by the efforts of the DSA team in the original SIP+WebRTC project and the more recent jessie upgrades and XMPP project. Real-time communications systems have specific expectations for network latency, connectivity, authentication schemes and various other things. Therefore, it is a great endorsement of the caliber of the team and the quality of the systems they have in place that they have been able to host this largely within their existing framework for Debian services. Feedback from the DSA team has also been helpful in improving the upstream software and packaging to make them convenient for system administrators everywhere.

Special thanks to Peter Palfrader and Luca Filipozzi from the DSA team, Matthew Wild from the Prosody XMPP server project, Scott Godin from the reSIProcate project, Juliana Louback for her contributions to JSCommunicator during GSoC 2014, Iain Learmonth for helping get the RTC team up and running, Enrico Tassi, Sergei Golovan and Victor Seva for the Prosody and prosody-modules packaging, and also the Debian backports team, especially Alexander Wirt, for helping us ensure that rapidly evolving packages like those used in RTC are available on a stable Debian system.

http://danielpocock.com/debian.org-rtc-announcing-xmpp-sip-presence-and-more


This Week In Rust: This Week in Rust 104

Monday, 09 November 2015, 08:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Projects

  • Organn. A simple drawbar organ in Rust.
  • libloading. A safer binding to the platform’s dynamic library loading utilities.

Updates from Rust Core

104 pull requests were merged in the last week.

See the triage digest for more details.

Notable changes

New Contributors

  • Amanieu d'Antras
  • Amit Saha
  • Bruno Tavares
  • Daniel Trebbien
  • Ivan Kozik
  • Jake Worth
  • jrburke
  • Kyle Mayes
  • Oliver Middleton
  • Rizky Luthfianto

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is ramp. Ramp supplies high-performance, low-memory, easy-to-use big integral types.

Whenever you need integers too large for a u64 and cannot afford to lose precision, ramp has just what you need.

Thanks to zcdziura for this week's suggestion. Submit your suggestions for next week!

Quote of the Week

with unsafe .... if you have to ask, then you probably shouldn't be doing it basically

— Steve Klabnik on #rust IRC.

Thanks to Oliver Schneider for the tip.

Submit your quotes for next week!

http://this-week-in-rust.org/blog/2015/11/09/this-week-in-rust-104/


Air Mozilla: Mozfest 2015: Second Day Closing

Sunday, 08 November 2015, 20:00

The wrap-up session for Mozfest 2015, from Ravensbourne College in London.

https://air.mozilla.org/mosfest-2015-second-day-closing/


Air Mozilla: Doc Searls: Giving Users Superpowers

Sunday, 08 November 2015, 17:45

David "Doc" Searls (born July 29, 1947), co-author of The Cluetrain Manifesto and author of The Intention Economy: When Customers Take Charge, is an American...

https://air.mozilla.org/doc-searles-giving-users-superpowers/


Christian Heilmann: [Excellent talks] “OnConnectionLost: The life of an offline web application” at JSConf EU 2015

Sunday, 08 November 2015, 17:10

I spend a lot of time giving and listening to talks at conferences and I want to point out a few here that I enjoyed.

At JSConfEU this year Stefanie Grewenig and Johannes Thönes talked about offline applications:

I thoroughly and utterly enjoyed this talk, and not only because their timing worked really well and the handover from presenter to presenter went smoothly. I was most impressed to see an “offline matters” talk based on project/customer delivery data instead of the ones we normally get. Most offline talks explain the need, show the technology and ask us to get cracking. This one got cracking and showed how things were done and what problems you run into.

The slides are beautiful, the storyline makes a lot of sense and at no time do you feel condescended to. The talk also shows that some “impossible to use in production” technologies like DOM storage do work if you use them in a sensible fashion.

As a bonus – it has the cutest rhino at 11:55:

rhino cartoon

Double this with Nolan Lawson’s “IndexedDB, WebSQL, LocalStorage – what blocks the DOM?” and you learn a lot about local storage issues and solutions in a very short amount of time.

Thanks Stefanie, Johannes and Nolan. I feel cleverer now.

https://www.christianheilmann.com/2015/11/08/excellent-talks-onconnectionlost-the-life-of-an-offline-web-application-at-jsconf-eu-2015/


Air Mozilla: MozFest Airship Camera

Sunday, 08 November 2015, 14:00

This is a feed from the camera aboard the Air Mozilla Blimp. This stream will pop up from time to time during MozFest.

https://air.mozilla.org/airship2015/


Daniel Stenberg: TCP tuning for HTTP

Sunday, 08 November 2015, 02:17

I’m the author of a brand new internet-draft that I submitted just the other day. The title is TCP Tuning for HTTP,  and the intent is to gather a set of current best practices for HTTP implementers; to share and distribute knowledge we’ve gathered over the years. Clients, servers and intermediaries. For HTTP/1.1 as well as HTTP/2.
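The draft's contents aren't reproduced here, but to give a flavor of the kind of advice such a document collects: one classic tuning for request/response protocols like HTTP is disabling Nagle's algorithm, so that small writes (say, the tail end of a request) aren't held back waiting for an ACK. A minimal POSIX sketch:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Disable Nagle's algorithm on a connected TCP socket so that small
// writes are sent immediately instead of being coalesced.
int disable_nagle(int fd) {
  int one = 1;
  return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}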

I’m now awaiting, expecting and looking forward to feedback, criticisms and additional content for this document so that it can become the resource I’d like it to be.

How to contribute to this?

  1.  ideally, send your feedback to the HTTPbis mailing list,
  2. or submit an issue or pull-request on github for the draft.md
  3. or simply email me your comments: daniel haxx.se

I’ve been participating first passively and more and more actively over the years within the IETF, mostly in the HTTPbis working group. I think open protocols and open standards are important and I like being part of making them reality. I have the utmost respect and admiration for those who are involved in putting the RFCs together and thus improve the world we live in, step by step.

For a long while I’ve been wanting  to step up and “pull my weight” too,  to become a better participant in this area, and I’m happy to now finally take this step. Hopefully this is just the first step of many more to come.

(Psssst: While gathering feedback and updating the git version, the current work in progress version of the draft is always visible here.)

http://daniel.haxx.se/blog/2015/11/08/tcp-tuning-for-http/


Air Mozilla: Mozilla Festival 2015 First Day Close

Saturday, 07 November 2015, 20:00

Saturday afternoon wrap-up session of the first day of MozFest 2015 at Ravensbourne College, London.

https://air.mozilla.org/mozilla-festival-2015-part-2/


Hanno Schlichting: Mozilla Location Service - The What, Why and Privacy

Saturday, 07 November 2015, 15:23
I'm currently working on the Mozilla Location Service and have been doing so for the last few quarters. The project was finally announced more broadly over at the Mozilla Cloud Services blog in late October. Since then we've had a steady amount of interest and a number of repeating questions coming in. I'd like to take this opportunity to give more background information on why we are interested in location services and address some questions about the way we do things. This area is extremely complex, so bear with me while I provide a lot of background information.

Note that this is my personal opinion and not an official position of Mozilla.

What is it?


There's a lot of interest in "the location space" and lots of news, start-ups and competition around location, mobile location, location context, maps and all things related. We aren't investing broadly in "all things location", but are focused on one very concrete aspect: the ability of a device to determine its own position as represented by a latitude and longitude. A device will use a variety of sensors to determine information about its environment and do some calculation to derive a position from those. In most cases a remote service will be queried to provide extra data to aid in the calculation or large parts of the calculation will be done remotely.

As a concrete example a mobile phone might be connected to a cell tower. Using this information it will query a remote service. The remote service has a database of all cell towers mapped to positions and can return the position of the cell tower as an approximation of the device position.

The sensors typically used for this purpose include a satellite-based positioning sensor (GPS, Glonass, ...) and data for visible cell towers, WiFi access points and Bluetooth LE beacons. Other sensors like the compass or accelerometer can be used to determine relative position changes. There are even scientific experiments to use light sensors and variations in the magnetic field as sources of location information. If multiple sensors deliver data, these can be combined to provide a more precise position with a smaller error margin. For some sensors additional data like signal strengths, signal noise or time-of-flight data can be used to narrow down the position.

What it is not!


The Mozilla Location Service only deals with determining a device position. It doesn't and won't deal with services like maps, points of interest data, transportation or traffic information. For some of these OpenStreetMap is an established open community and it doesn't make sense for us to compete with them. In addition there are also a good number of commercial offerings for these higher level services and both users and developers have a good number of choices here. Different Mozilla products might partner with both communities and companies to provide user value and bundle their services - those questions are outside of our scope here.

Our location service helps with what you might call device initiated positioning. It is up to the device and ultimately the user to decide if a positioning request should be done. Separate from this are network or operator based positioning methods used for mobile phones. As part of being connected to a cell network, the cell network operator maintains a database of which phone is currently connected to which cell tower. This is required to route incoming phone calls or SMS to the right tower and ultimately the phone itself. Depending on the cell standard a phone might maintain connections to multiple cell towers. You can find out more about this on the radiolocation wikipedia page.

These operator-based methods don't require additional user consent and work as a side-effect of having a cell connection. The only way to opt out of these is by not using phones. In many countries operators are required to provide this data to government agencies. There are two common examples of this. One is emergency services (for example E-911), in which the emergency call is directed to the geographically closest response center and the response center gets the exact location of the caller to quickly dispatch emergency help to the right location, even if the caller is unsure of his or her location, confused or distressed.

The other much more controversial case is data retention in which operators have to store and share metadata about calls and phone positions for a lengthy period of months to years. There are also examples of intelligence services capturing this information on sometimes questionable legal grounds. These operator based methods are completely outside the control and scope of our location service.

Why do it at all?


There are plenty of examples where users are willing to have their devices determine their location. There always has to be and will be an explicit user choice in this, but most people find some of the use-cases compelling enough to allow this.

Some of the use-cases are obvious and others not so much. One of the earliest examples is being able to pin-point the current location on a map, for example to show points of interest like coffee shops around the current location, or show the transit data for the nearest bus stop. Another popular example is continuous location updates for driving directions or capturing walking or bike trails as part of fitness apps. In the future this will be extended to indoor location and for example being able to get walking directions to a specific shop inside a large shopping center, including hints to take escalators down or up various levels.

Another not quite so obvious use-case is taking photos and automatically recording the position in the photos' metadata. Or a service to "find my device" - either when a user lost it or fears it has been stolen. Maybe a user is also willing to share his or her location with a trusted group of people; for example, as a parent I might want to be able to tell the location of my kids' phones. Or whenever I'm near a friend of mine, I'd like to be alerted so we don't miss each other. These cases certainly aren't for everyone and require strong and explicit user consent or consensus in the family.

Even more recent examples are what's technically called "geo-fencing" and the use of Bluetooth low energy beacons. These allow the user, or apps on the user's behalf, to take some action when the user leaves or arrives at a certain area. A common example is being reminded to call someone or a reminder to go shopping once you leave work. If the user opts in, this might also be used to show interesting offers or ads when the user is near a favorite coffee shop or next to a specific section inside a store, or interact with a museum exhibit once the user gets close to it. A largely unexplored field is various sorts of augmented reality games, where players have to move around in the real world, but the real world is overlaid with game mechanics.

In addition to those user level use-cases there's also at least one technical use-case most people are unaware of. Using a satellite based sensor like the GPS completely on its own provides a rather poor experience, as the chip needs up to 10 minutes to provide the first location. If you owned an older standalone GPS navigation system, you might remember the long waiting times on first use. In order to improve this, all modern sensors use some form of assisted GPS, in which other sensors are used to tell the GPS in which general area it is and based on that what satellites it should see. It also uses a faster data link to download information about the exact orbit of satellites. Currently this support is outside the scope of our location service and provided by partners via the mobile "secure user plane location" standard.

Currently we also don't deal with IP address based location or Geo-IP, which is commonly used to either direct website visitors to a regional part of the website or restrict content if distribution rights aren't available for all countries.

Why Mozilla?


As the use-cases in the previous section should have shown, there is tremendous user interest in this. As a result any platform needs to offer the capability to position users. For us this means we need to support these use-cases for the web platform, independent of whether you are running Firefox on top of another operating system (Windows, Linux, Mac OS, Android, etc.) or directly (Firefox OS). In terms of official developer APIs, the W3C Geolocation working group has defined a geolocation API and is being re-chartered to add new APIs for geo-fencing and indoor location. These are developer-visible APIs, but they don't define how the underlying browser gets the required data or interacts with device sensors or an operating system.

Furthermore it should have become clear that positioning needs an external service component, even for seemingly standalone sensors like GPS. We currently have to either use the underlying operating system services or use commercial partners to provide these service components. There is little competition in the market and we aren't in a good position to influence terms of service on our users' behalf, especially in the area of privacy. In addition the cost of entry into this specific market is extremely high, as collecting a global data set and keeping it up-to-date requires a lot of resources. As a result there are few commercial companies trying to gain market entry, service costs are high and no open community project has emerged that covers the whole world and combines all of the required data types into one consistent experience. We maintain a list of some of the projects we know about and most are practically constrained to certain regions or data types like only cell or only WiFi data.

When we started out to investigate the problem space, we were aiming to create a public data set and not just a public service. Unfortunately over time we discovered more and more legal and privacy problems and we haven't found a solution for those yet.

Privacy


Privacy is a serious concern while dealing with location data. As part of our location service we have to deal with the privacy concerns of three different actors. The first is the normal end-user of the service, who uses it to determine his or her position. The second is any user who chooses to contribute back data to the service and improve the underlying database for the benefit of all users. The third is the owner of the equipment used as an information source, like cell towers, WiFi access points or Bluetooth beacons.

For another, much shorter take on privacy, you can read Gerv's blog post from late October.

Privacy - WiFi interlude


Especially for owners of WiFi access points we have some interesting challenges. The service uses a unique identifier for each WiFi access point in the form of the BSSID. This is a technical identifier which is most often the MAC address of the underlying network interface, but can also be a randomized number for ad-hoc networks. In addition to this technical id WiFi networks also have a human readable name (SSID). You see the name whenever choosing what WiFi to connect to.

The BSSID is a unique device id and never changes. What's worse is that many modern smart phones not only connect to WiFi networks, but also act as WiFi access points of their own, to share their internet connection with your other devices like a laptop or tablet. Using WiFi access points as part of a location service isn't ideal, both for these privacy concerns and due to technical limitations. The existing protocols were never meant for this use-case and thus have many shortcomings. For instance there is no way to distinguish a phone from a stationary access point when observing it only once. But WiFi access points are the only widely deployed type of device that can be used and provide a precise enough position for many of the current use-cases. Changes to wireless networking standards take years to agree upon and many more years to be available in a majority of deployed devices. So workarounds based on current standards are the only viable mid-term option.

Privacy continued


If you haven't followed tech news closely over the years, you might have missed a couple of the early problems and revelations about location services by other companies. Starting in 2010 a variety of news stories, governmental and data privacy agency actions took place. It's valuable to look a bit more closely at the history here and learn from these lessons.

Two good overviews of all these come from CNET in a roundup article from July 14, 2011 and another article from September 1, 2011. There were four interesting cases.

Apple and the large cache


Researchers found that iOS devices stored a year-long, detailed list of all the places the device had been in a local cache file. The concerns were addressed in the end and Apple provided a detailed Q&A on its location data practices. At the end of the article three concrete actions are detailed:

  1. Only maintain a location cache for a short period of time. The cache is useful to improve the experience and allow device-local look-ups for frequently visited locations like your home or workplace. (Android was found to have a similar cache with a much shorter time period.)
  2. Don't back up this cache to any external device or cloud service.
  3. Delete the cache if the user opts out of using location services (and never maintain it without user consent in the first place).

Google, Microsoft and the single WiFi look-up


In June 2011 a CNET article appeared describing how researchers probed Google's location service and were able to look up smart phones or personal WiFi routers over time and track them. Later in August 2011 a similar CNET article covered Microsoft and an article in Ars Technica specifically covered Microsoft's response. The immediate result were two new restrictions to the services. On one hand greater care was taken to filter out constantly moving WiFi access points, as these were likely smart phones and useless for determining a user's position. On the other hand a "you need to know two" restriction was added. The idea here is to only answer service requests if you can provide information about two nearby WiFi networks. The assumption is that you can only reasonably get two matching data points if you are indeed near those WiFi access points. This should make it considerably harder to do remote tracking of anyone. If you already know where someone is, potentially confirming this result via the service is much less of a problem than tracking the user in the first place.
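A sketch of what such a restriction might look like on the service side; the types and the exact policy are hypothetical, but the rule is the one described above: refuse to answer unless the query demonstrates knowledge of at least two networks that the database also places near each other:

#include <string>
#include <vector>

struct WifiObservation {
  std::string bssid;  // identifier of an observed access point
};

// Hypothetical policy check: only answer a lookup that references at
// least two access points which the database already places nearby.
bool mayAnswerLookup(const std::vector<WifiObservation>& query,
                     int nearbyMatchesInDatabase) {
  return query.size() >= 2 && nearbyMatchesInDatabase >= 2;
}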

Google street view cars, wiretapping and opt-out


As part of building up a database of street view imagery, Google also collected additional data while driving around. There's an original Google statement from April 2010 and an update from May 2010 in which Google admitted to collecting not only metadata about WiFi access points like everyone else, but also samples of the network traffic of unencrypted WiFi networks.

There's so far been no regulation or statements which would disallow the metadata collection, but the collection of network traffic has seen various legal actions. In the US the court cases over wiretapping charges are still going on, with some details in a recent Wired article from September 2013. In the European Union cases by various data protection agencies have already been resolved, as for example confirmed by a report by the Dutch DPA from 2012.

While the wiretapping charges are unrelated to our location service, the initial concerns have led to the creation of an opt-out approach for owners of WiFi access points. A Google blog post from November 2011 explains their "_nomap" suffix approach. This approach lets any WiFi access point owner signal their intent to avoid tracking. This is done by changing the user-visible WiFi name (SSID) and appending a "_nomap" string. Changing a user-visible name isn't ideal, but unfortunately it's the only user-changeable setting in WiFi standards. This approach is similar to efforts like "do not track", which rely on a combination of a user signal and the industry respecting this signal.

Unfortunately there's so far no industry standard for WiFi tracking. A manual opt-out approach was chosen by Microsoft instead. Other competitors have so far not offered any opt-out approach at all.

For our own location service we have chosen to respect the "_nomap" signal. It has the advantage of only requiring a single user action, instead of each user having to track down all possible location services and dealing with lots of companies. Not to mention having to keep track of any new company entering the field. In addition we are still discussing if and how to implement the manual opt-out approach.
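Respecting the signal boils down to a suffix check on the user-visible network name at data-ingestion time; a minimal sketch (the helper name is made up):

#include <string>

// True if the access point owner opted out of location services by
// appending the "_nomap" suffix to the WiFi name (SSID).
bool isOptedOut(const std::string& ssid) {
  const std::string suffix = "_nomap";
  return ssid.size() >= suffix.size() &&
         ssid.compare(ssid.size() - suffix.size(), suffix.size(), suffix) == 0;
}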

Apple and the randomized BSSID


There is one case that isn't documented very well and only mentioned in various support forums. With the release of iOS 5 in late 2011 users reported a variety of problems with WiFi connectivity. Some users suggest that Apple actually made at least one intended change and switched its mobile hotspot from using normal infrastructure mode to ad-hoc mode. In infrastructure mode the WiFi is using the underlying MAC address as the BSSID. In ad-hoc mode it sets two bits of the BSSID to mark it as ad-hoc and then generates a random bit sequence for the rest. The device can then regenerate a randomized id once in a while.

If this were true, at first glance it would give a neat solution to the life-long unique id problem. By using ever-changing randomized ids, the tracking potential for phones would be limited a lot and reduced to a much shorter time span. Since the BSSID contains two magic bits to mark it as ad-hoc, it would be easy for any location service to exclude these WiFi networks from its databases. Since they are only set on phones or otherwise ad-hoc and moving WiFi networks, it's in the best interest of any location service to filter them out, since they don't provide any value. Independent of whether this story is true or not, both the code deployed on Google's street view cars and our own clients filter out ad-hoc networks.
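Since the flag bits live in the address itself, the filtering is cheap. A sketch, assuming the first octet of the BSSID is available as a byte and that the "locally administered" bit (0x02) is one of the two marker bits described above:

#include <cstdint>

// Treat a BSSID as ad-hoc (and thus likely a phone or other moving
// hotspot) if the locally-administered bit of its first octet is set.
bool isLikelyAdHoc(uint8_t firstOctet) {
  return (firstOctet & 0x02) != 0;
}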

According to the forum reports, Apple however had to revert this change to satisfy users. The problem that occurred was that most operating systems remember WiFi networks not only by the clear text name, but also by their BSSID. Laptop users who wanted to use their trusted mobile phone hotspot were suddenly constantly presented with a dialog to re-enter the WiFi password, as the operating system thought these were new and never seen before devices. This user frustration apparently trumped a privacy win at the time.

It's unclear how much of this story is true. But it highlights one possible avenue for improving the privacy of mobile phone owners. Either operating systems could be adjusted to keep remembering WiFi networks independent of their BSSID, or there could be an acceptable trade-off where the BSSID would only change every couple of days or weeks, still narrowing down the tracking window from forever to a much shorter time span.

Privacy - summary


What most of the historical cases have in common is adding protections which only work in a service model, where access to the underlying data can be restricted or data can be removed from the database. At the start of this project we wanted to make the underlying data source publicly available in as much detail as possible. But the protections around WiFi access points that other companies have found and data privacy agencies have agreed upon don't work for a public database.

So while we currently don't know of any model to share the WiFi data while protecting privacy, you might argue that the same problem doesn't apply to cell data or other types of data. Unfortunately there's a different group of users whose privacy is a concern here. The users deciding to contribute to the service should be protected as well. As long as there are only one or very few contributing users in any geography, those location samples can rather easily be tied back to individual users, even without Mozilla tagging the data with unique user ids. And we made sure to never store the location data samples with a user id in our current system. We keep our optional leader board system completely separate from the location data.

There's research that convincingly shows that once you add user ids, you only need a very small number of data samples to uniquely identify people. Basically most people always travel the same routes and stay at work and home for long periods of time. Once I know both a home and work address, it's easy enough to match this data with other data sources. Now while we don't add user ids, we aren't clear on whether or not any of the data that is sent is unique enough to still identify users. This starts with having data about the country, the mobile carrier and the user's cell standard. Combine that with potential device-specific patterns in the reported signal strength or other measures, and there might well be a way to find identifying marks. For instance the first LTE user in any area would be easy to identify, or a laptop or tablet user would likely show different signal strength readings. At this stage we don't know enough to decide this. Once we publish the data there is no way of going back and the data is available for all future people to analyze.

Final words


I hope this sheds some more light on why we are so extremely cautious and don't have any good answers to questions like: "are you going to release the data" or "what license is the data available under". We set out to improve user privacy as one of our main goals. It might be that this goal can only be achieved by keeping the data private and identifying a trusted organization, or a trusted group of organizations and companies, to watch over this data - assuming we can find an acceptable set of terms that indeed do protect users. Other sources of concern are changing legal requirements, which might get us into trouble for hosting this data. And there is a real risk of brand and trust damage that would hinder our mission, if Mozilla suddenly became a company that allowed someone to track other users, with all the potential harm that can result from it.

I could go on about the technical difficulties of extracting and processing this data or the challenge of creating a global data set. But I'll leave those for another time.

If you want to correct me on any of the statements in this blog post, please leave a comment. If you want to engage with us, we'd love to hear from you on IRC or our mailing list.

Hanno

http://blog.hannosch.eu/2013/12/mozilla-location-service-what-why-and.html


Air Mozilla: MozFest B-Roll

Saturday, 07 November 2015, 14:30

This is a feed of random MozFest happenings between plenary sessions.

https://air.mozilla.org/mozfest-b-roll/


Air Mozilla: Mozilla Festival 2015 Opening Session

Saturday, 07 November 2015, 12:00

The opening session of MozFest 2015 from Ravensbourne College, London, England.

https://air.mozilla.org/mozilla-festival-2015-opening-session/


Support.Mozilla.Org: What’s up with SUMO – 6th November

Friday, 06 November 2015, 21:15

Hello, SUMO Nation!

Welcome to November… and version 42 of everything, including Firefox ;-). Have you already updated yours? We hope you did. In case you’re still wondering if you should… take a look here!

Welcome, new contributors!

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the last week

  • Daksh (Satyadeep) for passing 2000 Army of Awesome contributions in the past 3 months – amazing!
  • Kalpit Muley, for passing 1000 AoA contributions in the past 3 months – whoa!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Buddies of the month

Vnisha and Rahul are our Buddies of the Month for October 2015! Hooray for being awesome! We’ll contact you soon about something cool for you :-)

Last SUMO Community meeting

Reminder: the next SUMO Community meeting…

  • …is going to take place on Monday, 2nd of November. Join us!
  • If you want to add a discussion topic to the upcoming live meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).

Developers

Community

Support Forum

Firefox

  • for Android
    • Version 42 is here! (featuring voice input, QR codes, tracking protection in private browsing, and family friendly browsing for tablets only)
  • Firefox OS
    • The documentation process for version 2.5 has been initialized… prepare for battle stations ;-)

https://blog.mozilla.org/sumo/2015/11/06/whats-up-with-sumo-6th-november/


Air Mozilla: Firefox Design Workshop Presentation (Fall Semester 2015)

Friday, 06 November 2015, 17:30

Session etherpad: https://public.etherpad-mozilla.org/p/hfg-workshop-ws2015

https://air.mozilla.org/firefox-design-workshop-presentation-fall-semester-2015/


David Rajchenbach Teller: Designing the Firefox Performance Monitor (2): Monitoring Add-ons and Webpages

Friday, 06 November 2015, 16:56

In part 1, we discussed the design of time measurement within the Firefox Performance Monitor. Despite the intuition, the Performance Monitor had neither the same set of objectives as the Gecko Profiler, nor the same set of constraints, and we ended up picking a design that was not a sampling profiler. In particular, instead of capturing performance data on stacks, the Monitor captures performance data on Groups, a notion that we have not discussed yet. In this part, we will focus on bridging the gap between our low-level instrumentation and actual add-ons and webpages, as may be seen by the user.

I. JavaScript compartments

The main objective of the Performance Monitor is to let users and developers quickly find out which add-ons or webpages are slowing down Firefox. The main tool of the Performance Monitor is an instrumentation of SpiderMonkey, the JavaScript VM used by Firefox, to detect slowdowns caused by code taking too long to execute.

SpiderMonkey is a general-purpose VM, used in Firefox, Thunderbird, but also in Gnome, as a command-line scripting tool, as a test suite runner and more. Out of the box, SpiderMonkey knows nothing about webpages or add-ons.

However, SpiderMonkey defines a notion of JavaScript Compartment. Compartments were designed to provide safe and manageable isolation of code and memory between webpages, as well as between webpages and the parts of Firefox written in JavaScript. In terms of JavaScript, each compartment represents a global object (typically, in a webpage, the window object), all the code parsed as part of this object, and all the memory owned by either. In particular, if a compartment A defines an event listener and attaches it to an event handler offered through some API by another compartment B, the event handler is still considered part of A.

Compartments do not offer a one-to-one mapping to add-ons or webpages, but they are close. We just need to remember a few things:

  • some compartments belong neither to an add-on, nor to a webpage (e.g. the parts of Firefox written in JavaScript);
  • each add-on can define any number of modules and worker threads, each of which has its own compartment;
  • each webpage can define any number of frames and worker threads, each of which has its own compartment;
  • there are a number of ways to create compartments dynamically.

In addition, while Firefox is executing JS code, it is possible to find out whether this code belongs to a window, using xpc::CurrentWindowOrNull(JSContext*). This information is not available to SpiderMonkey, but it is available to the embedding of SpiderMonkey, i.e. Firefox itself. Using a different path, one can find out whether an object belongs to an add-on – and, in particular, if the global object of a compartment belongs to an add-on – using JS::AddonIdOfObject(JSObject*).
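In code, combining the two lookups might look roughly like this; the surrounding glue is simplified and the vector type mirrors the callback shown further below, but the two functions named above are the real entry points:

// Sketch: decide which performance groups a new compartment belongs
// to, using the two lookups described above (glue types simplified).
void classifyCompartment(JSContext* cx, JSObject* global,
                         PerformanceGroupVector& out) {
  if (xpc::CurrentWindowOrNull(cx)) {
    // Code belongs to a webpage: add that page's shared group to out.
  }
  if (JS::AddonIdOfObject(global)) {
    // Global belongs to an add-on: add that add-on's group to out.
  }
  // Every compartment also gets its own group and the top group.
}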

Putting all of this together, in terms of JavaScript, both add-ons and web pages are essentially groups of compartments. We call these groups Performance Groups.

II. Maintaining Performance Groups

We extend SpiderMonkey with a few callbacks to let it grab Performance Groups from its embedding. Whenever SpiderMonkey creates a new Compartment, whether during the load of a page, during that of an add-on, or in more sophisticated dynamic cases, it requests the list of Performance Groups to which it belongs.

static bool
GetPerformanceGroupsCallback(JSContext* cx,
                             Vector<RefPtr<PerformanceGroup>>&,
                             void* closure);

Attaching performance groups to a compartment during creation lets us ensure that we can update the performance cost of a compartment in constant-time, without complex indirections.
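The constant-time claim follows directly from the data layout: each compartment keeps the small vector of groups handed to it at creation, and committing a measurement just walks that vector. A simplified sketch, with invented names and std:: containers standing in for the SpiderMonkey equivalents:

#include <cstdint>
#include <memory>
#include <vector>

struct PerformanceGroup {
  uint64_t totalUs = 0;
  void addDuration(uint64_t us) { totalUs += us; }
};

struct CompartmentPerf {
  // Filled once, at compartment creation, via the callback above.
  std::vector<std::shared_ptr<PerformanceGroup>> groups;

  // Runs when execution attributed to this compartment ends; constant
  // time, since each compartment has a small, fixed set of groups.
  void commit(uint64_t durationUs) {
    for (auto& group : groups) group->addDuration(durationUs);
  }
};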

In the current implementation, a compartment typically belongs to a subset of the following groups:

  • its own group, which may be used to track performance of the single compartment;
  • a group shared by all compartments in the add-on on the current thread (typically, several modules);
  • a group shared by all compartments in the webpage on the current thread (typically, several iframes);
  • the “top group”, shared by all compartments in the VM, which may be used to track the performance of the entire JavaScript VM – while this has not always been the case, this currently maps to a single JavaScript thread.

Note that a compartment can theoretically belong to both a webpage and an add-on, although I haven’t encountered this situation yet.

As we saw in part 1 of this series, we start and stop a stopwatch to measure the duration of code execution whenever we enter/leave a Performance Group that does not yet have a stopwatch. Consequently, each JavaScript stack has a single “top” stopwatch, which serves both to measure the performance of the “top group” and the performance of whichever JS code lies on top of the stack.

For performance reasons, groups can be marked as active or inactive, where inactive groups do not need a stopwatch. In a general run of Firefox, all the “own groups”, specific to a single compartment each, are inactive, to avoid having to start/stop too many stopwatches at once and to commit too many results at the end of the event, while all the other groups are active. Own groups can be activated individually when investigating a performance issue, or to help track the effect of a module.

Note that we do not have to limit ourselves to the above kinds of groups. Indeed, we have plans to provide additional groups in the future, to be able to:

  • turn on/off monitoring of entire features implemented in JavaScript;
  • inspect the performance effect of entire content domains (e.g. all “facebook.com” pages, or all Google “+1” buttons);

In a different embedding, for instance an operating system, one could envision a completely different repartition of performance groups, such as a group shared by all services acting on behalf of a single user.

III. Threads and processes

Nowadays, Firefox Nightly is a multi-threaded, multi-process application. Firefox Release has not reached that point yet, but should within a few versions. As defined above, performance groups cross neither threads nor processes.

As of this writing, we have not implemented collection of data from various threads, as the information is not as interesting as one might think. Indeed, in SpiderMonkey, a single non-main thread can only contain a single compartment, and it is difficult to impact the framerate with a background thread. Other tools dedicated to monitoring threads would therefore be better suited than the mechanism of Performance Groups.

On the other hand, activity across processes can cause user-visible jank, so we need to be able to track it. In particular, a single add-on can have performance impact on several processes at once. For this reason, the Performance Monitor is executed on each process. Higher-level APIs provide two ways of accessing application-wide information.

1/ Polling

The first API implements polling, as follows:

Task.spawn(function*() {
  // We are interested in jank and blocking cross-process communications.
  // Other probes do not need to be activated on behalf of this monitor.
  let monitor = PerformanceStats.getMonitor(["jank", "cpow"]);

  // Collect data from all processes. Dead or frozen processes are ignored.
  let snapshot = yield monitor.promiseSnapshot();

  // … wait

  // Collect data, once again. Again, dead or frozen processes are ignored.
  let snapshot2 = yield monitor.promiseSnapshot();

  // Compute the resource usage between the two timestamps.
  let delta = snapshot2.subtract(snapshot);

  let myAddon = delta.addons.get("foo@bar");

  // `durations` recapitulates the frame impact of `myAddon` during the interval
  // as an array containing the number of times we have missed 1 frame, 2 successive frames, 4 successive frames, ...
  let durations = myAddon.durations;
  console.log("Jank info", durations);
});

The underlying implementation of this API is relatively straightforward:

  1. in each process, the Performance Stats Service collects all the data at the end of each event, updating `durations` accordingly;
  2. when `promiseSnapshot` is called, we broadcast to all processes, requesting the latest data collected by the Performance Stats Service;
  3. if an add-on appears in several processes, we sum the resource impact and collapse the add-on data into a single item.

Polling is useful to get an overview of the resource usage between two instants for the entire system. At the time of this writing, however, it is somewhat oversized if the objective is simply to follow one add-on/webpage (as it always collects and processes data from all add-ons and webpages), or one process (as it always collects data from all processes). In addition, polling is not appropriate to generate performance alerts, as it needs to communicate with all processes, even if these processes are idle. This prevents the processes from sleeping, which is both bad for battery and for virtual memory usage.

2/ Events

For these reasons, we have developed a second, event-based API, which is expected to land on Firefox Nightly within a few days.

PerformanceWatcher.addPerformanceListener({addonId: "foo@bar"}, function(source, details) {
  // This callback is triggered whenever the add-on causes too many consecutive frames to be skipped.
  console.log("Highest Jank (µs)", details.highestJank);
});

This same API can be used to watch tabs, or to watch all add-ons or all tabs at once.

The implementation of this API is slightly more sophisticated, as we wish to avoid saturating API clients with alerts, in particular if some of these clients may themselves be causing jank:

  1. in each process, the Performance Stats Service collects all the data at the end of each event;
  2. if the execution duration of at least one group has exceeded some threshold (typically 64ms), we add it to the list of “performance alerts”, unless it is already in that list;
  3. performance alerts are collected after ~100ms – the timer is active only if at least one collection is needed;
  4. each performance alert for an add-on is then dispatched to any observer for this add-on and to the universal add-on observers (if any);
  5. each performance alert for a window is then dispatched to any observer for this window and to the universal window observers (if any);
  6. each child process buffers alerts, to minimise IPC cost, then propagates them to the parent process;
  7. the parent process collects all alerts and dispatches them to observers.

There are a few subtleties, as we may wish to register observers for add-ons that have not started yet (or even that have not been installed or have been uninstalled), and similarly for windows that are not open yet, or that have already been closed. Other subtleties ensure that, once again, most operations are constant-time, with the exception of dispatching to observers, which is linear in the number of alerts (deduplicated) + observers.

Future versions may extend this to watching specific Firefox features, or watching specific processes, or the activity of the VM itself, and possibly more. We also plan to extend the API to improve the ability to detect whether the jank may actually be noticed by the user, or is somehow invisible, e.g. because the janky process was not visible at the time, or neither interactive nor animated.

To be continued

At this stage, I have presented most of the important design of the Performance Monitor. In a followup post, I intend to explain some of the work we have done to weed out false positives and present the user with actionable results.


https://dutherenverseauborddelatable.wordpress.com/2015/11/06/designing-the-firefox-performance-monitor-2-monitoring-add-ons-and-webpages/


