
Planet Mozilla - https://planet.mozilla.org/


You can add any RSS source (including a LiveJournal journal) to your friends feed on the syndication page.

Source information - http://planet.mozilla.org/.
This diary is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact information page.


Jeff Walden: Using Yubikeys with Fedora 24, for example for Github two-factor authentication

Saturday, 28 May 2016, 03:17

My old laptop’s wifi went on the fritz, so I got a new Lenovo P50. Fedora 23 wouldn’t work with the Skylake architecture, so I had to jump headfirst into the Fedora 24 beta.

I’ve since hit one new issue: Yubikeys wouldn’t work for FIDO U2F authentication. Logging into a site using a Yubikey (inserting a Yubikey USB device and tapping the button when prompted) wouldn’t work. Attempting this on Github would display the error message, “Something went really wrong.” Nor would registering Yubikeys with sites work. On Github, attempting to register Yubikeys would give the error message, “This device cannot be registered.”

Interwebs sleuthing suggests that Yubikeys require special udev configuration to work on Linux. The problem is that udev doesn’t grant access to the Yubikey, so when the browser tries to access the key, things go Bad. A handful of resources pointed me toward a solution: tell udev to grant access to the device.

As root, go to the directory /etc/udev/rules.d. It contains files with names of the form *.rules, specifying rules for how to treat devices added and removed from the system. In that directory create the file 70-u2f.rules. Its contents should be those of 70-u2f.rules, from Yubico’s libu2f-host repository. (Most of this file is just selecting various Yubikey devices to apply rules against. The important part of this file is the TAG+="uaccess" ending the various lines. This adds the “uaccess” tag to those devices; systemd-logind recognizes this tag and will grant access to the device to the current logged-in user.) Finally, run these two commands to refresh udev state:

udevadm control --reload
udevadm trigger
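For reference, the device-selection entries in that file boil down to lines of roughly this shape (a hypothetical, abbreviated excerpt; the real file matches many more product IDs, so copy it rather than retyping it):

# Abbreviated sketch: tag Yubico devices (USB vendor ID 1050) for user access
ACTION=="add|change", KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="1050", TAG+="uaccess"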

Yubikeys should now work for authentication.

These steps work for me, and they appear to me a sensible way to solve the problem. But I can’t say for sure that they’re the best way to solve it. (Nor am I sure why Fedora doesn’t handle this for me.) If anyone knows a better way, that doesn’t involve modifying the root file system, I’d love to hear it in comments.

http://whereswalden.com/2016/05/27/using-yubikeys-with-fedora-24-for-example-for-github-two-factor-authentication/


Mitchell Baker: What is the Role of Mozilla’s Executive Chair?

Saturday, 28 May 2016, 02:53

What does the Executive Chair do at Mozilla? This question comes up frequently in conversations with people inside and outside of Mozilla. I want to answer that question and clearly define my role at Mozilla. The role of Executive Chair is unique and entails many different responsibilities. In particular at Mozilla, the Executive Chair is something more than the well understood role of “Chairman of the Board.” Because Mozilla is a very different sort of organization, the role of Executive Chair can be highly customized and personal. It is not generally an operational role although I may initiate and oversee some programs and initiatives.   

In this post I’ll outline the major areas I’m focused on. In subsequent posts I’ll go into more detail.

#1. Chair the Board

This portion of my role is similar to the more traditional Chair role. At Mozilla in this capacity I work on mission focus, governance, development and operation of the Board and the selection, support and evaluation of the most senior executives. In our case these are Mark Surman, Executive Director of Mozilla Foundation and Chris Beard, CEO of Mozilla Corporation. Less traditionally, this portion of my role  includes an ongoing activity I call “weaving all aspects of Mozilla into a whole.”  Mozilla is an organizationally complex mixture of communities, legal entities and programs.  Unifying this into “one Mozilla” is important.

#2. Represent Mozilla and our mission to Mozillians and the world

This is our version of the general “public spokesperson” role. In this part of my role, I speak at conferences, events and to the media to communicate Mozilla’s message. The goal of this portion of my role is to grow our influence and educate the world on issues that are important to us. This role is particularly important as we transform the company and the products we create, and as we refocus on entirely new challenges to the Open Web, interoperability, privacy and security.

#3. Reform the ways in which Mozilla values are reflected in our culture, management and leadership

This is the core of the work that is intensely customized for Mozilla. It is an area where Mozilla looks to me for leadership, and for which I have a unique vision. Mozilla’s core DNA is a mix of the open source/free software movement and the open architecture of the Internet. We were born as a radically open, radically participatory organization, unbound to traditional corporate structure. We played a role in bringing the “open” movement into mainstream consciousness. How does and how can this DNA manifest itself today? How do we better integrate this DNA into our current size?  Needless to say I work hand-in-hand with Chris Beard and Mark Surman in these areas.

#4. Strategically advise Mozilla’s technology and product direction

I’ve played this role for just over 20 years now, working closely with Mozilla’s technologists, individual contributors and leadership. I help us take new directions that might be difficult to chart. And in this role I can take risks that may make us uncomfortable in the shorter term but yield us great value over the longer term. By helping to point us towards the cutting edge of our technology, I reinforce the importance of change and adaptation in how we express our values.

#5. Help Mozilla ideas expand into new contexts

I’ve been working with Mark Surman on this topic since he joined us. We’ve expanded our mission and programs into digital literacy and education, journalism, science, women and technology and now the Mozilla Leadership Network. I have also championed Mozilla’s expanded efforts in public policy. I continue to look at how we can do more (and am always open to suggestions).

So these are the different parts of my role. Hopefully this provides you with a framework for understanding what I do and how I see myself interacting with Mozilla. I’m planning to write a series of posts describing the work underway in these areas. Please send comments or feedback or questions to the “office of the chair” mailing list. And thanks for your interest in Mozilla.

http://blog.lizardwrangler.com/2016/05/27/what-is-the-role-of-mozillas-executive-chair/


Mike Hoye: Developers Are The New Mainframes

Friday, 27 May 2016, 23:20

This is another one of those rambling braindump posts. I may come back for some fierce editing later, but in the meantime, here’s some light weekend lunacy. Good luck getting through it. I believe in you.

I said that thing in the title with a straight face the other day, and not without reason. Maybe not good reasons? I like the word “reason”, I like the little sleight-of-hand it does by conflating “I did this on purpose” and “I thought about this beforehand”. It may not surprise you to learn that in my life at least those two things are not the same at all. In any case this post by Moxie Marlinspike was rattling around in the back of my head when somebody asked me on IRC why it’s hard-and-probably-impossible to make a change to a website in-browser and send a meaningful diff back to the site’s author, so I rambled for a bit and ended up here.

This is something I’ve asked for in the past myself: something like dom-diff and dom-merge, so site users could share changes back with creators. All the “web frameworks” I’ve ever seen are meant to make development easier and more manageable but at the end of the day what goes over the wire is a pile of minified angle-bracket hamburger that has almost no connection to the site “at rest” on the filesystem. The only way to share a usable change with a site author, if it can be done at all, is to stand up a containerized version of the entire site and edit that. This disconnect between the scale of the change and the work needed to make it is, to put it mildly, a huge barrier to somebody who wants to correct a typo, tweak a color or add some alt-text to an image.

I ranted about this for a while, about how JavaScript has made classic View Source obsolete and how even if you had dom-diff and dom-merge you’d need a carefully designed JS framework underneath designed specifically to support them, and how it makes me sad that I don’t have the skill set or free time to make that happen. But I think that if you dig a little deeper, there are some cold economics underneath that whole state of affairs that are worth thinking about.

I think that the basic problem here is the misconception that federation is a feature of distributed systems. I’m pretty confident that it’s not; specifically, I believe that federated systems are a byproduct of computational scarcity.

Building and deploying federated systems has a bunch of hard tradeoffs around development, control and speed of iteration that people are stuck with when computation is so expensive that no single organization can have or do enough of it to give a service global reach. Usenet, XMPP, email and so forth were products of this mainframe-and-minicomputer era; the Web is the last and best of them.

Protocol consensus is hard, but not as hard or expensive as a room full of $40,000 or $4,000,000 computers, so you do that work and accept the fact that what you gain in distributed stability you lose in iteration speed and design flexibility. The nature of those costs means the pressure to get it pretty close to right on the first try is very high, because real opportunities to revisit will be rare and costly. You’re fighting your own established success at that point, and nothing in tech has more inertia than a status quo its supporters think is good enough. (See also: how IPV6 has been “right around the corner” for 20 years.)

But that’s just not true anymore. If you need a few thousand more CPUs, you twiddle the dials on your S3 page and go back to unified deployment, rapid experimental iteration and trying to stay ahead of everyone else who’s doing the same. That’s how WhatsApp can deploy end to end encryption with one software update, just like that. It’s how Facebook can update a billion users’ experiences whenever they feel like it, and presumably how Twitter does whatever the hell Twitter’s doing this week. They don’t ask permission or seek consensus because they don’t have to; they deploy, test and iterate.

So the work that used to enable, support and improve federated systems now mostly exists where domain-computation is still scarce and expensive: the development process itself. Specifically the inside of developers’ heads, developers who stubbornly and despite our best efforts remain expensive, high-maintenance and relatively low-bandwidth, with lots of context and application-reasoning locked up in their heads and poorly distributed.

Which is to say: developers are the new mainframes.

Right now the great majority of what they’re “connected” to from a development-on-device perspective are de-facto dumb terminals. Apps, iPads, Android phones. Web pages you can’t meaningfully modify, for values of “meaningful” that involve upstreaming a diff. From a development perspective those are the endpoints of one-way transmissions, and there’s no way to duplex that line to receive development-effort back.

So, if that’s the trend – that is, if in general centralized-then-federated systems get reconsolidated in socially-oriented verticals, (and that’s what issue trackers are when compared to mailing lists) – then development as a practice is floating around the late middle step, but development as an end product – via cheap CPU and hackable IoT devices – that’s just getting warmed up. The obvious Next Thing in that space will be a resurgence of something like the Web, made of little things that make little decisions – effectively distributing, commodifying and democratizing programming as a product, duplexing development across those newly commodified development-nodes.

That’s the real revolution that’s coming, not the thousand-dollar juicers or the bluetooth nosehair trimmers, but the mess of tiny hackable devices that start to talk to each other via decentralized, ultracommodified feedback loops. We’re missing a few key components – bug trackers aren’t quite source-code-managers or social-ey, IoT build tools aren’t one-click-to-deploy and so forth, but eventually there will be a single standard for how these things communicate and run despite everyone’s ongoing efforts to force users into the current and very-mainframey vendor lock-in, the same way there were a bunch of proprietary transport protocols before TCP/IP settled the issue. Your smarter long-game players will be the ones betting on JavaScript to come out on top there, though it’s possible there will be other contenders.

The next step will be the social one, though “tribal” might be a better way of putting it – the eventual recentralization of this web of thing-code into cultural-preference islands making choices about how they speak to the world around them and the world speaks back. Basically a hardware scripting site with a social aspect built in, communities and trusted sources building social/subscriber model out for IoT agency. What the Web became and is still in a lot of ways becoming as we figure the hard part – the people at scale part, out. The Web of How Stuff Works.

Anyway, if you want to know what the next 15-20 years will look like, that’s the broad strokes. Probably more like 8-12, on reflection. Stuff moves pretty quick these days, but like I said, building consensus is hard. The hard part is always people. This is one of the reasons I think Mozilla’s mission is only going to get more important for the foreseeable future; the Web was the last and best of the federated systems, worth fighting for on those grounds alone, and we’re nowhere close to done learning everything it’s got to teach us about ourselves, each other and what it’s possible for us to become. It might be the last truly open, participatory system we get, ever. Consensus is hard and maybe not necessary anymore, so if we can’t keep the Web and the lessons we’ve learned and can still learn from it alive long enough to birth its descendants, we may never get a chance to build another system like it.

[minor edits since first publication. -mhoye]

http://exple.tive.org/blarg/2016/05/27/developers-are-the-new-mainframes/


Mozilla WebDev Community: Django, Pipeline, and Gulp

Friday, 27 May 2016, 22:45

Bedrock, the code behind www.mozilla.org, is a very large Django project. It is mostly large due to the volume and diversity of independent pages it serves. These pages come with a surprising amount of static media (css, js, images, fonts, etc.). So, any system that we use to deal with said media should be efficient in order to keep our development servers fast.

We like django-pipeline for managing our static media in production. It does a great job of bundling, minifying, and compressing our css and js. When using it in a development environment, however, it does not scale well. The issue is that it does not watch for changes to your files, so all it can do is copy them all from their source to the static directory on every page load. For a reasonable number of files this is probably fine, but as I said ours is not that. This is exacerbated in slow I/O environments like Docker on a non-Linux system (like OSX). We’ve not been able to set up an acceptable Docker-based local dev environment yet because it can literally take several minutes to render the home page.

Due to all of the issues noted above we’ve been looking for other ways of handling static media. We’ve considered a few times moving to a completely nodejs-based system that would be completely independent of the Django side, and may still do that some day, but the problem has always been scope and impact. Again, because the project is so large, making sweeping changes that affect workflow and all static files can both take a lot of time and be very disruptive when they land. So for a long time we figured we were stuck. But recently a conversation started in IRC about being able to just disable django-pipeline’s copying of files. If we could do that, we could use gulp-watch to much more quickly and efficiently manage these files while they are being edited and still get the benefits of django-pipeline for production. It turned out that someone else already had this idea and mostly we just needed to upgrade django-pipeline.

After that it was a simple matter of adding a task to our Gulpfile:

var gulp = require('gulp');
var watch = require('gulp-watch');

gulp.task('media:watch', function () {
    return gulp.src('./media/**/*')
        .pipe(watch('./media/**/*', {
            'verbose': true
        }))
        .pipe(gulp.dest('./static'));
});

But it was still a bit odd now having to have two shells open, one for the gulp task and another for the Django dev server. So we did a little more gulp magic and now have a single command to start up both the file watching and the Django server that combines the output of both in a single terminal.

var spawn = require('child_process').spawn;

gulp.task('serve:backend', function () {
    var devServerPort = process.env.PORT || 8000;
    process.env.PYTHONUNBUFFERED = 1;
    process.env.PYTHONDONTWRITEBYTECODE = 1;
    spawn('python', ['manage.py', 'runserver', '0.0.0.0:' + devServerPort], {
        stdio: 'inherit'
    });
});

gulp.task('default', function() {
    gulp.start('serve:backend');
    gulp.start('media:watch');
});
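With the default task wired up this way, starting both the file watcher and the Django server is presumably just a matter of running gulp with no arguments (assuming the gulp CLI is available, e.g. via ./node_modules/.bin):

gulp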

You can see the full gulpfile.js in our repo on Github if you’d like. It’s a simple but very effective change and has made a very noticeable improvement in the speed of the local development server. And now we hope that we can finally complete and recommend Docker as the default setup for local development on bedrock.

https://blog.mozilla.org/webdev/2016/05/27/django-pipeline-and-gulp/


Yunier José Sosa Vázquez: Get to know the featured add-ons for May

Friday, 27 May 2016, 21:53

If you are someone who likes to look after their security while browsing the Internet, this month you will be pleased: we bring you an add-on that blocks the third-party scripts tied to the site you are visiting, lets you easily see who they are, and lets you allow or block their execution. And if you want to download things for later, here we present the solution. You will also learn about old Firefox features that return in the form of extensions, and about AMO’s new design.

AMO debuts a new design

Mozilla’s add-ons gallery (AMO, from Addons.Mozilla.Org) recently changed its face and now sports an attractive new design. As we can see in the images below, the new interface is simpler and cleaner than the previous one. In this way, AMO renews its image and offers users greater quality and modernity.

[Images: AMO home page; viewing an add-on’s details; browsing a category]

Pick of the month: uBlock Origin, by Raymond Hill.

An efficient, lightweight request blocker based on filters applied to the current site. It also speeds up the loading of web pages and lets you see the websites linked from the current one in a panel in the Firefox toolbar.

[Images: uBlock Origin’s main interface; uBlock Origin showing blocked websites]

Install uBlock Origin »

We also recommend

Download Plan, by Abraham, a close acquaintance of ours.

Schedules downloads to run at set times when network usage is lower.

[Images: Download Plan’s interface; Download Plan’s options]

Install Download Plan »

Emoji Keyboard, by Harry N.

Keeps today’s most commonly used emoji in easy view for sending fun messages.

[Images: the Emoji Keyboard panel; searching for an emoji]

Install Emoji Keyboard »

Tab Groups, by Quicksaver.

Lets you organize your tabs into different groups. This is Firefox’s old Panorama feature brought back as an add-on.

[Images: viewing tab groups; searching for a tab]

Install Tab Groups »

Nominate your favorite add-on

We would love for you to be part of the process of selecting the best add-ons for Firefox, and we would like to hear from you. Not sure how? Just send an e-mail to amo-featured@mozilla.org with the add-on’s name or its installation file, and the members will evaluate your recommendation.

http://firefoxmania.uci.cu/conoce-los-complementos-destacados-para-mayo/


Gervase Markham: Thank You For Trying, Switzerland

Friday, 27 May 2016, 20:00

Various bits of the TiSA (Trade in Services Agreement, yet another multilateral trade treaty) were leaked recently. On the very first page of General Provisions:

[CH propose; AU/CA/CL/TW/CO/EU/IL/JP/MX/NZ/PE oppose; MU/PK considering:
Without prejudice to the policy objectives and legislation of the Parties in areas such as the protection of intellectual property, the protection of privacy and of the confidentiality of personal and commercial data, the protection of consumers and the protection and promotion of the diversity of cultural expressions (including through public funding and assistance) and fiscal measures.]

So the Swiss said “Hey, wouldn’t it be good if we had a thing at the start that said that this treaty doesn’t stop governments protecting privacy, the confidentiality of data, consumer rights, cultural diversity or other important things like that? Wouldn’t that be neat?”

And Australia, Canada, Chile, Taiwan, Colombia, the EU, Israel, Japan, Mexico, New Zealand and Peru all said “Er, no. We want this agreement to be capable of preventing us from protecting those things, thanks. Where it speaks, it should be more important than the domestic law enacted by your elected representatives.”

Seems like that tells you a lot of what you need to know about the way such treaties are assembled. At least Mauritius and Pakistan are still thinking about it… Sheesh.

http://feedproxy.google.com/~r/HackingForChrist/~3/E_c3Fcya45g/


Air Mozilla: Foundation Demos May 27 2016

Friday, 27 May 2016, 20:00

Niko Matsakis: The 'Tootsie Pop' model for unsafe code

Friday, 27 May 2016, 19:12

In my previous post, I spent some time talking about the idea of unsafe abstractions. At the end of the post, I mentioned that Rust does not really have any kind of official guidelines for what kind of code is legal in an unsafe block and what is not. What this means in practice is that people wind up writing what seems reasonable and checking it against what the compiler does today. This is of course a risky proposition, since it means that if we start doing more optimization in the compiler, we may well wind up breaking unsafe code (the code would still compile; it would just not execute like it used to).

Now, of course, merely having published guidelines doesn’t entirely change that dynamic. It does allow us to assign blame to the unsafe code that took actions it wasn’t supposed to take. But at the end of the day we’re still causing crashes, so that’s bad.

This is partly why I have advocated that I want us to try and arrive at guidelines which are human friendly. Even if we have published guidelines, I don’t expect most people to read them in practice. And fewer still will read past the introduction. So we had better be sure that reasonable code works by default.

Interestingly, there is something of a tension here: the more unsafe code we allow, the less the compiler can optimize. This is because it would have to be conservative about possible aliasing and (for example) avoid reordering statements. We’ll see some examples of this as we go.

Still, to some extent, I think it’s possible for us to have our cake and eat it too. In this blog post, I outline a proposal to leverage unsafe abstraction boundaries to inform the compiler where it can be aggressive and where it must be conservative. The heart of the proposal is the intuition that:

  • when you enter the unsafe boundary, you can rely that the Rust type system invariants hold;
  • when you exit the unsafe boundary, you must ensure that the Rust type system invariants are restored;
  • in the interim, you can break a lot of rules (though not all the rules).

I call this the Tootsie Pop model: the idea is that an unsafe abstraction is kind of like a Tootsie Pop. There is a gooey candy interior, where the rules are squishy and the compiler must be conservative when optimizing. This is separated from the outside world by a hard candy exterior, which is the interface, and where the rules get stricter. Outside of the pop itself lies the safe code, where the compiler ensures that all rules are met, and where we can optimize aggressively.

One can also compare the approach to what would happen when writing a C plugin for a Ruby interpreter. In that case, your plugin can assume that the inputs are all valid Ruby objects, and it must produce valid Ruby objects as its output, but internally it can cut corners and use C pointers and other such things.

In this post, I will elaborate a bit more on the model, and in particular cover some example problem cases and talk about the grey areas that still need to be hammered out.

How do you define an unsafe boundary?

My initial proposal is that we should define an unsafe boundary as being a module that has unsafe code somewhere inside of it. So, for example, the module that contains split_at_mut, which we have seen earlier is a fn defined with unsafe code, would form an unsafety boundary. Public functions in this module would therefore be entry points into the unsafe boundary; returning from such a function, or issuing a callback via a closure or trait method, would be an exit point.

Initially when considering this proposal, I wanted to use an unsafe boundary defined at the function granularity. So any function which contained an unsafe block but which did not contain unsafe in its signature would be considered the start of an unsafe boundary; and any unsafe fn would be a part of its caller’s boundary (note that its caller must contain an unsafe block). This would mean that e.g. split_at_mut is its own unsafe boundary. However, I have come to think that this definition is too precise and could cause problems in practice – we’ll see some examples below. Therefore, I have loosened it.

Ultimately I think that deciding where to draw the unsafe boundary is still somewhat of an open question. Even using the module barrier means that some kinds of refactorings that might seem innocent (migrating code between modules, specifically) can change code from legal to illegal. I will discuss various alternatives later on.

Permissions granted/required at the unsafe boundary

In the model I am proposing, most of your reasoning happens as you cross into or out of an unsafe abstraction. When you enter into an unsafe abstraction – for example, by calling a method like split_at_mut, which is not declared as unsafe but uses unsafe code internally – you implicitly provide that function with certain permissions. These permissions are derived from the types of the function’s arguments and the rules of the Rust type system. In the case of split_at_mut, there are two arguments:

  • The slice self that is being split, of type &'a mut [T]; and,
  • the midpoint mid at which to perform the split, of type usize.

Based on these types, the split_at_mut method can assume that the variable self refers to a suitably initialized slice of values of type T. That reference is valid for the lifetime 'a, which represents some span of execution time that encloses at least the current call to split_at_mut. Similarly, the argument mid will be an unsigned integer of suitable size.

At this point we are within the unsafe abstraction. It is now free to do more-or-less whatever it likes, so long as all the actions it takes fall within the initial set of permissions. More on this below.

Finally, when you exit from the unsafe boundary, you must ensure that you have restored whatever invariants and permissions the Rust type system requires. These are typically going to be derived from the types of the function’s outputs, such as its return type. In the case of split_at_mut, the return type is (&mut [T], &mut [T]), so this implies that you will return a tuple of slices. Since those slices are both active at the same time, they must (by the rules of Rust’s type system) refer to disjoint memory.
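To make this concrete, here is roughly the shape of the raw-pointer implementation referenced above (a sketch, written as a free function for brevity; not the exact code from the previous post):

use std::slice;

pub fn split_at_mut<T>(data: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
    let len = data.len();
    // Exiting the boundary must restore the type system invariants:
    // the two returned slices must really be in bounds and disjoint.
    assert!(mid <= len);
    let ptr = data.as_mut_ptr();
    unsafe {
        (slice::from_raw_parts_mut(ptr, mid),
         slice::from_raw_parts_mut(ptr.offset(mid as isize), len - mid))
    }
}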

Specifying the permissions

In this post, I am not trying to define the complete set of permissions. We have a reasonably good but not formalized notion of what these permissions are. Ralf Jung and Derek Dreyer have been working on making that model more precise as part of the Rust Belt project. I think writing up those rules in one central place would obviously be a big part of elaborating on the model I am sketching out here.

If you are writing safe code, the type system will ensure that you never do anything that exceeds the permissions granted to you. But if you dip into unsafe code, then you take on the responsibility for verifying that you obey the given permissions. Either way, the set of permissions remain the same.

Permissions on functions declared as unsafe

If a function is declared as unsafe, then its permissions are not defined by the type system, but rather in comments and documentation. This is because the unsafe keyword is a warning that the function may place additional requirements on its caller beyond what its argument types express – or may return values that don’t meet the full requirements of the Rust type system.

Optimizations within an unsafe boundary

So far I’ve primarily talked about what happens when you cross an unsafe boundary, but I’ve not talked much about what you can do within an unsafe boundary. Roughly speaking, the answer that I propose is: whatever you like, so long as you don’t exceed the initial set of permissions you were given.

What this means in practice is that when the compiler is optimizing code that originates inside an unsafe boundary, it will make pessimistic assumptions about aliasing. This is effectively what C compilers do today (except they sometimes employ type-based alias analysis; we would not).

As a simple example: in safe code, if you have two distinct variables that are both of type &mut T, the compiler would assume that they represent disjoint memory. This might allow it, for example, to re-order reads/writes or re-use values that have been read if it does not see an intervening write. But if those same two variables appear inside of an unsafe boundary, the compiler would not make that assumption when optimizing. If that was too hand-wavy for you, don’t worry, we’ll spell out these examples and others in the next section.
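As a quick preview, here is a minimal illustration of that aliasing assumption (my sketch, not one of the repository’s examples):

// In safe code, the compiler may assume `a` and `b` are disjoint, so the
// write through `b` cannot change `*a`, and the sum may be folded to 3.
fn disjoint(a: &mut i32, b: &mut i32) -> i32 {
    *a = 1;
    *b = 2;
    *a + *b
}

Under the proposal, if this same function originated inside an unsafe boundary, the compiler would have to reload both values instead of assuming they are disjoint.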

Examples

In this section I want to walk through some examples. Each one contains unsafe code doing something potentially dubious. In each case, I will do the following:

  1. walk through the example and describe the dubious thing;
  2. describe what my proposed rules would do;
  3. describe some other rules one might imagine and what their repercussions might be.

By the way, I have been collecting these sorts of examples in a repository, and am very interested in seeing more such dubious cases which might offer insight into other tricky situations. The names of the sections below reflect the names of the files in that repository.

split-at-mut-via-duplication

Let’s start with a familiar example. This is a variant of the familiar split_at_mut method that I covered in the previous post:

impl [T] {
    pub fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) {
        let copy: &mut [T] = unsafe { &mut *(self as *mut _) };
        let left = &mut self[0..mid];
        let right = &mut copy[mid..];
        (left, right)
    }
}

This version works differently from the ones I showed before. It doesn’t use raw pointers. Instead, it cheats the compiler by duplicating self via a cast to *mut. This means that both self and copy are &mut [T] slices pointing at the same memory, at the same time. In ordinary, safe Rust, this is impossible, but using unsafe code, we can make it happen.

The rest of the function looks almost the same as our original attempt at a safe implementation (also in the previous post). The only difference now is that, in defining right, it uses copy[mid..] instead of self[mid..]. The compiler accepts this because it assumes that copy and self, since they are both simultaneously valid, must be disjoint (remember that, in unsafe code, the borrow checker still enforces its rules on safe types; it’s just that we can use tricks like raw pointers or transmutes to sidestep them).

Why am I showing you this? The key question here is whether the optimizer can trust Rust types within an unsafe boundary. After all, this code is only accepted because the borrowck thinks (incorrectly) that self and copy are disjoint; if the optimizer were to think the same thing, that could lead to bad optimizations.

My belief is that this program ought to be legal. One reason is just that, when I first implemented split_at_mut, it’s the most natural thing that I thought to write. And hence I suspect that many others would write unsafe code of this kind.

However, to put this in terms of the model, the idea is that the unsafe boundary here would be the module containing split_at_mut. Thus the dubious aliasing between left and right occurs within this boundary. In general, my belief is that whenever we are inside the boundary we cannot fully trust the types that we see. We can only assume that the user is supplying the types that seem most appropriate to them, not necessarily that they are accounting for the full implications of those types under the normal Rust rules. When optimizing, then, the compiler will not assume that the normal Rust type rules apply – effectively, it will treat &mut references the same way it might treat a *mut or *const pointer.

(I have to work a bit more at understanding LLVM’s annotations, but I think that we can model this using the aliasing metadata that LLVM provides. More on that later.)

Alternative models. Naturally alternative models might consider this code illegal. They would require that one use raw pointers, as the current implementation does, for any pointer that does not necessarily obey Rust’s memory model.

(Note that this raises another interesting question, though, about what the legal aliasing is between (say) a &mut and a *mut that are actively in use – after all, an &mut is supposed to be unique, but does that uniqueness cover raw pointers?)

refcell-ref

The borrow() method on the RefCell type returns a value of a helper type called Ref:

pub struct Ref<'b, T: ?Sized + 'b> {
    value: &'b T,
    borrow: BorrowRef<'b>,
}

Here the value field is a reference to the interior of the RefCell, and the borrow is a value which, once dropped, will cause the lock on the RefCell to be released. This is important because it means that once borrow is dropped, value can no longer safely be used. (You could imagine the helper type MutexGuard employing a similar pattern, though actually it works ever so slightly differently for whatever reason.)

This is another example of unsafe code using Rust types in a creative way. In particular, the type &'b T is supposed to mean: a reference that can be safely used right up until the end of 'b (and whose referent will not be mutated). However, in this case, the actual meaning is until the end of 'b or until borrow is dropped, whichever comes first.

So let’s consider some imaginary method defined on Ref, copy_drop(), which works when T == u32. It would copy the value and then drop the borrow to release the lock.

use std::mem;
impl<'b> Ref<'b, u32> {
    pub fn copy_drop(self) -> u32 {
        let t = *self.value; // copy contents of `self.value` into `t`
        mem::drop(self.borrow); // release the lock
        t // return what we read before
    }
}

Note that there is no unsafe code in this function at all. I claim then that the Rust compiler would, ideally, be within its rights to rearrange this code and to delay the load of self.value to occur later, sort of like this:

mem::drop(self.borrow); // release the lock
let t = *self.value; // copy contents of `self.value` into `t`
t // return what we read before

This might seem surprising, but the idea here is that the type of self.value is &'b u32, which is supposed to mean a reference valid for all of 'b. Moreover, the lifetime 'b encloses the entire call to copy_drop. Therefore, the compiler would be free to say well, maybe I can save a register if I move this load down.

However, I think that reordering this code would be an invalid optimization. Logically, as soon as self.borrow is dropped, *self.value becomes inaccessible – if you imagine that this pattern were being used for a mutex, you can see why: another thread might acquire the lock!

Note that because these fields are private, this kind of problem can only arise for the methods defined on Ref itself. The public cannot gain access to the raw self.value reference. They must go through the Deref trait, which returns a reference for some shorter lifetime 'r, and that lifetime 'r always ends before the ref is dropped. So if you were to try and write the same copy_drop routine from the outside, there would be no problem:

let some_ref: Ref<u32> = ref_cell.borrow();
let t = *some_ref;
mem::drop(some_ref);
use(t);

In particular, the let t = *some_ref desugars to something like:

let t = {
    let ptr: &u32 = Deref::deref(&some_ref);
    *ptr
};

Here the lifetime of ptr is just going to be that little enclosing block there.

Why am I showing you this? This example illustrates that, in the presence of unsafe code, the unsafe keyword itself is not necessarily a reliable indicator to where funny business could occur. Ultimately, I think what’s important is the unsafe abstraction barrier.

My belief is that this program ought to be legal. Frankly, to me, this code looks entirely reasonable, but also it’s the kind of code I expect people will write (after all, we wrote it). Examples like this are why I chose to extend the unsafe boundary to enclose the entire module that uses the unsafe keyword, rather than having it be at the fn granularity – because there can be functions that, in fact, do unsafe things where the full limitations on ordering and so forth are not apparent, but which do not directly involve unsafe code. Another classic example is modifying the length or capacity fields on a vector.

Now, I chose to extend to the enclosing module because it corresponds to the privacy boundary, and there can be no unsafe abstraction barrier without privacy. But I’ll explain below why this is not a perfect choice and we might consider others.

usize-transfer

Here we have a trio of functions. These functions collaborate to hide a reference in a usize and then later dereference it:

// Cast the reference `x` into a `usize`
fn escape_as_usize(x: &i32) -> usize {
    // interestingly, this cast is currently legal in safe code,
    // which is a mite unfortunate, but doesn't really affect
    // the example
    x as *const _ as usize
}

// Cast `x` back into a pointer and dereference it 
fn consume_from_usize(x: usize) -> i32 {
    let y: &i32 = unsafe { &*(x as *const i32) };
    *y
}

pub fn entry_point() {
    let x: i32 = 2;
    let p: usize = escape_as_usize(&x);

    // (*) At this point, `p` is in fact a "pointer" to `x`, but it
    // doesn't look like it!

    println!("{}", consume_from_usize(p));
}

The key point in this example is marked with a (*). At that point, we have effectively created a pointer to x and stored it in p, but the type of p does not reflect that (it just says it’s a pointer-sized integer). Note also that entry_point does not itself contain unsafe code (further evidence that private helper functions can easily cause unsafe reasoning to spread beyond the border of a single fn). So the compiler might assume that the stack slot x is dead and reuse the memory, or something like that.

There are a number of ways that this code might be made less shady. escape_as_usize might have, for example, returned a *const i32 instead of usize. In that case, consume_from_usize would look like:

fn consume_from_usize(x: *const i32) -> i32 { ... }

This itself raises a kind of interesting question though. If a function is not declared as unsafe, and it is given a *const i32 argument, can it dereference that pointer? Ordinarily, the answer would clearly be no. It has no idea what the provenance of that pointer is (and if you think back to the idea of permissions that are granted and expected by the Rust type system, the type system does not guarantee you that a *const can be dereferenced). So effectively there is no difference, in terms of the public permissions, between x: usize and x: *const i32. Really I think the best way to structure this code would have been to declare consume_from_usize() as unsafe, which would have served to declare to its callers that it has extra requirements regarding its argument x (namely, that it must be a pointer that can be safely dereferenced).
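Concretely, that restructuring might look like the following (my sketch of the suggested signature, not code from the repository):

// Hypothetical rewrite: `unsafe` in the signature shifts the proof
// obligation (that `x` really holds a valid, live pointer) onto the caller.
unsafe fn consume_from_usize(x: usize) -> i32 {
    let y: &i32 = &*(x as *const i32);
    *y
}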

Now, if consume_from_usize() were a public function, then not having an unsafe keyword would almost certainly be flat out wrong. There is nothing that stops perfectly safe callers from calling it with any old integer that they want; even if the signature were changed to take *const u32, the same is basically true. But consume_from_usize() is not public: it’s private, and that perhaps makes a difference.

It often happens, as we’ve seen in the other examples, that people cut corners within the unsafe boundary and declare private helpers as safe that are in fact assuming quite a bit beyond the normal Rust type rules.

Why am I showing you this? This is a good example for playing with the concept of an unsafe boundary. By moving these functions about, you can easily create unsafety, as they must all three be contained within the same unsafe boundary to be legal (if indeed they are legal at all). Consider these variations:

Private helper module.

mod helpers {
    pub fn escape_as_usize(x: &i32) -> usize { ... }
    pub fn consume_from_usize(x: usize) -> i32 { ... }
}

pub fn entry_point() {
    ... // calls now written as `helpers::escape_as_usize` etc
}

Private helper module, but with scope restricted to an outer scope.

mod helpers {
    pub(super) fn escape_as_usize(x: &i32) -> usize { ... }
    pub(super) fn consume_from_usize(x: usize) -> i32 { ... }
}

pub fn entry_point() {
    ... // calls now written as `helpers::escape_as_usize` etc
}

Public functions, but restricted to an outer scope.

pub mod some_bigger_abstraction {
    mod helpers {
        pub(super) fn escape_as_usize(x: &i32) -> usize { ... }
        pub(super) fn consume_from_usize(x: usize) -> i32 { ... }
        pub(super) fn entry_point() { ... }
    }
}

Public functions, but de facto restricted to an outer scope.

pub mod some_bigger_abstraction {
    mod helpers {
        pub fn escape_as_usize(x: &i32) -> usize { ... }
        pub fn consume_from_usize(x: usize) -> i32 { ... }
        pub fn entry_point() { ... }
    }

    // no `pub use`, so in fact they are not accessible
}

Just plain public.

pub fn escape_as_usize(x: &i32) -> usize { ... }
pub fn consume_from_usize(x: usize) -> i32 { ... }
pub fn entry_point() { }

Different crates.

// crate A:
pub fn escape_as_usize(x: &i32) -> usize { ... }
// crate B:
pub fn consume_from_usize(x: usize) -> i32 { ... }
// crate C:
extern crate a;
extern crate b;
pub fn entry_point() {
    ...
    let p = a::escape_as_usize(&x)
    ...
    b::consume_from_usize(p)
    ...
}

My belief is that some of these variations ought to be legal. The current model as I described it here would accept the original variation (where everything is in one module) but reject all other variations (that is, they would compile, but result in undefined behavior). I am not sure this is right: I think that at least the private helper module variations seems maybe reasonable.

Note that I think any or all of these variations should be fine with appropriate use of the unsafe keyword. If the helper functions were declared as unsafe, then I think they could live anywhere. (This is actually an interesting point that deserves to be drilled into a bit more, since it raises the question of how distinct unsafe boundaries interact; I tend to think of there as just being safe and unsafe code, full stop, and hence any time that unsafe code in one module invokes unsafe code in another, we can assume they are part of the same boundary and hence that we have to be conservative.)

On refactorings, harmless and otherwise

One interesting thing to think about with any kind of memory model or other guidelines is what sorts of refactorings people can safely perform. For example, under this model, manually inlining a fn body is always safe, so long as you do so within an unsafe abstraction. Inlining a function from inside an abstraction into the outside is usually safe, but not necessarily – the reason it is usually safe is that most such functions have unsafe blocks, and so by manually inlining, you will wind up changing the caller from a safe function into one that is part of the unsafe abstraction.

(Grouping items and functions into modules is another example that may or may not be safe, depending on how we chose to draw the boundary lines.)

EDIT: To clarify a confusion I have seen in a few places. Here I am talking about inlining by the user. Inlining by the compiler is different. In that case, when we inline, we would track the provenance of each instruction, and in particular we would track whether the instruction originated from unsafe code. (As I understand it, LLVM already does this with its aliased sets, because it is needed for handling C99 restrict.) This means that when we decide e.g. if two loads may alias, if one (or both) of those loads originated in unsafe code, then the answer would be different than if they did not.

Impact of this proposal and mapping it to LLVM

I suspect that we are doing some optimizations now that would not be legal under this proposal, though probably not that many – we haven’t gone very far in terms of translating Rust’s invariants to LLVM’s alias analysis metadata. Note though that in general this proposal is very optimization friendly: all safe code can be fully optimized. Unsafe code falls back to more C-like reasoning, where one must be conservative about potential aliasing (note that I do not want to employ any type-based alias analysis, though).

I expect we may want to add some annotations that unsafe code can use to recover optimizations. For example, perhaps something analogous to the restrict keyword in C, to declare that pointers are unaliased, or some way to say that an unsafe fn (or module) nonetheless ensures that all safe Rust types meet their full requirements.

One of the next steps for me personally in exploring this model is to try and map out (a) precisely what we do today and (b) how I would express what I want in LLVM’s terms. It’s not the best formalization, but it’s a concrete starting point at least!

Tweaking the concept of a boundary

As the final example showed, a module boundary is not clearly right. In particular, the idea of using a module is that it is aligned to privacy, but by that definition it should probably include submodules (that is, any module where an unsafe keyword appears either in the module or in some parent of the module is considered to be an unsafe boundary module).

Conclusion

Here I presented a high-level proposal for how I think a Rust memory model ought to work. Clearly this doesn’t resemble a formal memory model and there are tons of details to work out. Rather, it’s a guiding principle: be aggressive outside of unsafe abstractions and conservative inside.

I have two major concerns:

  • First, what is the impact on execution time? I think this needs to be investigated, but ultimately I am sure we can overcome any deficit by allowing unsafe code authors to opt back in to more aggressive optimization, which feels like a good tradeoff.
  • Second, what’s the best way to draw the optimization boundary? Can we make it more explicit?

In particular, the module-based rule that I proposed for the unsafe boundary is ultimately a kind of heuristic that makes an educated guess as to where the unsafe boundary lies. Certainly the boundary must be aligned with modules, but as the last example showed, there may be a lot of ways to set things up that seem reasonable. It might be nicer if we could have a way to declare that boundary affirmatively. I’m not entirely sure what this looks like. But if we did add some way, we might then say that if you use the older unsafe keyword – where the boundary is implicit – we’ll just declare the whole crate as being an unsafe boundary. This likely won’t break any code (though of course I mentioned the different crates variation above…), but it would provide an incentive to use the more explicit form.

For questions or discussion, please see this thread on the Rust internals forum.

Edit log

Some of the examples of dubious unsafe code originally used transmute and transmute_copy. I was asked to change them because transmute_copy really is exceptionally unsafe, even for unsafe code (type inference can make it go wildly awry from what you expected), and so we didn’t want to tempt anyone into copy-and-pasting them. For the record: don’t copy and paste the unsafe code I labeled as dubious – it is indeed dubious and may not turn out to be legal! :)

http://smallcultfollowing.com/babysteps/blog/2016/05/27/the-tootsie-pop-model-for-unsafe-code/


Adam Stevenson: Back in Compat

Friday, 27 May 2016, 18:00

In 2014 I had the opportunity to join the Web Compatibility team on a six month contract. This mission-based team and community opened me up to a way of working I had never experienced before. It had a major impact on how I view my work/life balance and overall happiness. When the possibility of returning came up this year, I was dead in the middle of a corporate job that had me flying around Canada and the U.S. every week. When I’d return home for the weekend I was exhausted, with little time for family, friends and hobbies. I knew that I needed to get back to a place where I could be excited to work and able to find balance. Back in October of 2015, I attended a conference in Austin, TX for a week. Before leaving Austin I met up with Mike Taylor for some breakfast tacos and caught up with him. At some point I remember saying how much I enjoyed working at Mozilla and would love to return one day. Four months later when a position on the Web Compat team opened up, Mike reached out to see if I was still interested - I was pumped!

The on-boarding process this time around was pretty sweet. In May I attended the new hire on-boarding week in Mountain View. The group must have been 25+ people in size, all from different places in the world and within Mozilla. Each day we’d go through team building activities and get to hear from the leaders of each department. It was so much fun and we learned an incredible amount about the past, present and future of Mozilla. For sure, it was the best on-boarding experience I’ve been through. One topic that came up during our break time was imposter syndrome. Mozilla is full of talented people, which can be intimidating for new hires. I’m thankful that someone opened up about the topic as the conversations were reassuring.

In the last two years things have changed a bit at Mozilla and in Web Compat. We’re no longer supporting Firefox OS, which was previously our main focus. Firefox is now on iOS; since it uses WebKit there, we unsurprisingly don’t see many compatibility issues. Firefox on desktop is having more issues, though: there are sites using only webkit prefixes, various video problems and even some tracking protection related bugs. Our website webcompat.com has come a long way and now we see anonymous bugs reported from regular web users.

It feels great to be back, and I’m excited about the work ahead.

http://bornawesome.com/adam/index.php?id=back-in-compat


William Lachance: Quarter of Contribution: June / July 2016 edition

Friday, 27 May 2016, 17:51

Just wanted to announce that, once again, my team (Mozilla Engineering Productivity) is just about to start running another quarter of contribution — a great opportunity for newer community members to dive deep on some of the projects we’re working on, brush up on their programming and problem solving skills, and work with experienced mentors. You can find more information on this program here.

I’ve found this program to be a really great experience on both sides — it’s an opportunity for contributors to really go beyond the “good first bug” style of patches to having a really substantial impact on some of the projects that we’re working on while gaining lots of software development skills that are useful in the real world.

Once again, I’m going to be mentoring one or two people on the Perfherder project, a tool we use to measure and sheriff Firefox performance. If you’re inclined to work on some really interesting data analysis and user interface problems in Python and JavaScript, please have a look at the project page and get in touch. :)

https://wlach.github.io/blog/2016/05/quarter-of-contribution-june-july-2016-edition/?utm_source=Mozilla&utm_medium=RSS


QMO: Firefox 48.0 Aurora Testday, June 3rd

Friday, 27 May 2016, 14:47

Hello Mozillians,

We are happy to let you know that Friday, June 3rd, we are organizing Firefox 48.0 Aurora Testday. We’ll be focusing our testing on the following features: New Awesomebar, Windows Child Mode and APZ. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

https://quality.mozilla.org/2016/05/firefox-48-0-aurora-testday-june-3rd/


Daniel Stenberg: a new curl logo

Friday, 27 May 2016, 12:28

The original logo for the curl project was created in 1998. I can’t even recall who made the original one. When we started version controlling the web site contents in June 2001 it had already been revised slightly and it looked like this:

[Image: the original logo]

I subsequently did the slightly refreshed version that’s been visible on the site since early 2003 which sort of emphasized the look by squeezing the letters together. The general “cURL” look was obviously kept.

[Image: the refreshed logo, as seen on the site since 2003]

It was never considered a good-looking logo by anyone. People mostly at best thought it was cute because it looked goofy and it helped provide the web site with a sort of retro feeling. Something that was never intended.

(As you can see I shortened the tag line slightly between those two versions.)

Over the years I’ve tried to refresh the logo and even gently tried to ask around for help to get it revamped, but it was never done and my own futile attempts didn’t end up in any improvements.

The funny casing of the word cURL might’ve contributed to the fact that people considered it ugly. It came with the original naming of the project that was focused on the fact that we work with URLs. A client for URLs could be cURL. Or if you’d pronounce the c as “see”, you’d “see URL”. The casing is really only used in the logo these days. We just call it curl now. The project is curl, the command line tool is curl and even I have started to give up somewhat and have actually referred to libcurl as the curl library several times. “curl” with no upper case being the common denominator.

When I got this offer to refresh the logo this time, I was a bit skeptical since attempts had failed before, but what the heck, we had nothing to lose so sure, please help us!

We started out with their own free-style suggestions for a new logo. That proved these guys are good and have an esthetic sense for this. That first set also made it quite clear to me that the funny casing had to go. A lower case version seemed calmer, more modern and more in line with our current view of the naming.

As an example from their first set, here is a clean and stylish version that tried to use the letter c as a sort of symbol. Unfortunately, it mostly makes you read “URL”.

experimental curl logo

As step two, I also suggested we’d try to work with :// (colon slash slash) somehow as a symbol, since after all the :// letter sequence is commonly used in all the URL formats that curl supports. And it is sort of a global symbol for URLs when you start to think about it. Made sense to me.

Their second set of logo versions was good. Really good. But they mostly focused on having the :// symbol to the left of ‘curl’, and that made it look a bit weird. Like “:// curl”, which looks strange when you’re used to seeing URLs. They also made some attempts at writing it “c:URL” and having the // as part of the U letter, like “c://RL”, but those versions felt too crazy and ended up more funny-looking than cool.

An example from the second set with a colon-slash-slash symbol on the left side:

experimental curl logo4

The third set of logos was then made for us, with various kinds of :// variations put on the right side of curl, with curl in lowercase only.

There, among that third set of suggestions, was one that was a notch better than the rest. The new curl logo. We made it.

I offer you: the curl logo of 2016.

good_curl_logo

Used in a real-life-like situation:

logo-on-box

and the plan is, as this image shows, to be able to use the colon-slash-slash symbol stand-alone:

curl-symbol

The new curl logo was made by Soft Dreams. Thanks a lot for this stellar work!

https://daniel.haxx.se/blog/2016/05/27/a-new-curl-logo/


Eitan Isaacson: Google Inbox Notifications

Thursday, May 26, 2016, 23:01

Do you use Google Inbox? Do you miss getting notifications on your desktop when new mail arrives? In Gmail you can opt in to desktop notifications, and the tab title reflects how many unread messages you have.


I made a Firefox addon that brings that functionality to Google Inbox. It gives you a notification when new mail arrives and updates the page’s title with the unread mail count. You can get it here!


http://blog.monotonous.org/2016/05/26/google-inbox-notifications/


Support.Mozilla.Org: What’s Up with SUMO – 26th May

Thursday, May 26, 2016, 22:12

Hello, SUMO Nation!

We’ve been through a few holidays here and there, so there’s not a lot to report this week. We hope you’ll enjoy this much lighter-than-usual set of updates :-)

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 1st of June – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Knowledge Base & L10n

Firefox

This is it for the diet version of WUWS. We’re going “lite” for this week, to keep your minds slim for the summer ;-) See you around SUMO!

https://blog.mozilla.org/sumo/2016/05/26/whats-up-with-sumo-26th-may/


Air Mozilla: Twelve Technology Forces Shaping the Next 30 Years: Interview with Kevin Kelly

Thursday, May 26, 2016, 20:30

Much of what will happen in the next thirty years is inevitable, driven by technological trends that are already in motion. Wired founder Kevin Kelly has...

https://air.mozilla.org/twelve-technology-forces-shaping-the-next-30-years-interview-with-kevin-kelly/


Mozilla Reps Community: Rep of the Month – February 2016

Thursday, May 26, 2016, 19:36

Abdelrahman Samy started his journey as a Firefox Student Ambassador in Egypt. As Club lead of the AAST club, he is involved in the day-to-day work of raising awareness of Mozilla’s mission at Alexandria University. He is also engaged in localization and helped with l10n for Firefox OS.

Furthermore, he is one of the most active Core members of the Egypt community. Recently he coordinated Club launches in Egypt (the MIS Club, for example). With his enthusiasm, he is coordinating two more Club launches at the end of April / start of May.

Don’t forget to congratulate him on Discourse!

https://blog.mozilla.org/mozillareps/2016/05/26/rep-of-the-month-february-2016/


Karl Dubost: How CSS width is computed in Gecko?

Thursday, May 26, 2016, 09:22

Yesterday, I wrote about the strange value of the CSS width when set with a percentage. One Japanese night later, Boris Zbarsky sent me a private email explaining how CSS widths are computed in Gecko. These short emails are a gem, a pleasure to read and full of insights into Gecko. I should almost open a new category here called "Santa Boris". I love them. Thanks.

So what did I learn from Boris? Here is an extended version of Boris’ email with C++ code. (I’m out of my comfort zone, so if you see glaring mistakes, tell me and I will happily fix them.)

Yesterday's Summary

In yesterday's article, the following CSS rule was applied to the child of a 360px parent element.

.foo {
    width: 49.99999%;
    border-right: 1px solid #CCC;
}
  • 49.99999% of 360 px in Gecko
  • is not 179.999964 px (as you would mathematically expect)
  • but 179.98333740234375 px (rounded to 179.983 px in the devtools UI).

Gecko storage of CSS

In Boris' words:

Gecko doesn't store CSS stuff as floating-point numbers. It stores them as integers, with the unit being 1/60 of a px (so that common fractions like 1/2, 1/3, 1/4, 1/5 are exactly representable).

Or, said another way: 1 px = 60 app units (in Gecko). The constant 60 is defined in AppUnits.h (Brian Birtles helped me find it).

inline int32_t AppUnitsPerCSSPixel() { return 60; }

which in turn is called a lot in Units.h.

Specifically in ToAppUnits()

  static nsRect ToAppUnits(const CSSRect& aRect) {
    return nsRect(NSToCoordRoundWithClamp(aRect.x * float(AppUnitsPerCSSPixel())),
                  NSToCoordRoundWithClamp(aRect.y * float(AppUnitsPerCSSPixel())),
                  NSToCoordRoundWithClamp(aRect.width * float(AppUnitsPerCSSPixel())),
                  NSToCoordRoundWithClamp(aRect.height * float(AppUnitsPerCSSPixel())));
  }

Let's continue with Boris' explanation, broken down into steps (a runnable sketch of these steps follows the list). So instead of 360 px, we have the integer 360 * 60 = 21600:

  1. 49.99999% * 360
  2. 0.4999999 * (360 * 60)
  3. 0.4999999 * 21600
  4. 10799.99784
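
To make these steps concrete, here is a small standalone sketch of my own (a toy model in plain C++, not the actual Gecko layout path). Whether the final float-to-integer conversion floors or rounds depends on the code path; the floored result is the one that matches the devtools value, so I assume floor-style truncation here.

#include <cmath>
#include <cstdint>
#include <cstdio>

// Toy model of the four steps above; names and values come from this article.
int main() {
  const int32_t appUnitsPerCSSPixel = 60;            // 1 CSS px = 60 app units
  const int32_t parent = 360 * appUnitsPerCSSPixel;  // step 2: 360 * 60 = 21600

  const float percent = 0.4999999f;                  // 49.99999%
  const float product = percent * float(parent);     // steps 3 and 4: ~10799.998 in float32

  // Two candidate integer conversions; the devtools value 179.983 px
  // (= 10799 / 60) corresponds to the floored result.
  printf("rounded: %d\n", int32_t(product + 0.5f));   // 10800, i.e. 180 px
  printf("floored: %d\n", int32_t(floorf(product)));  // 10799, i.e. 179.983... px
  return 0;
}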

But as you can see in ToAppUnits() above, when computing the values Gecko uses the function NSToCoordRoundWithClamp().

inline nscoord NSToCoordRoundWithClamp(float aValue)
{
  /* cut for clarity */
  return NSToCoordRound(aValue);
}

And it goes to a rounding function.

The return trip is similar, with FromAppUnitsRounded():

  static CSSIntRect FromAppUnitsRounded(const nsRect& aRect) {
    return CSSIntRect(NSAppUnitsToIntPixels(aRect.x, float(AppUnitsPerCSSPixel())),
                      NSAppUnitsToIntPixels(aRect.y, float(AppUnitsPerCSSPixel())),
                      NSAppUnitsToIntPixels(aRect.width, float(AppUnitsPerCSSPixel())),
                      NSAppUnitsToIntPixels(aRect.height, float(AppUnitsPerCSSPixel())));
  }

where

inline int32_t NSAppUnitsToIntPixels(nscoord aAppUnits, float aAppUnitsPerPixel)
{
  return NSToIntRound(float(aAppUnits) / aAppUnitsPerPixel);
}

which calls NSToIntRound()

inline int32_t NSToIntRound(float aValue)
{
  return NS_lroundf(aValue);
}

which calls NS_lroundf()

inline int32_t
NS_lroundf(float aNum)
{
  return aNum >= 0.0f ? int32_t(aNum + 0.5f) : int32_t(aNum - 0.5f);
}
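
And the full return trip, again as a small standalone sketch of my own. I assume here that the devtools fractional display comes from a plain float division by 60, while integer-pixel consumers go through the NS_lroundf() rounding shown above.

#include <cstdint>
#include <cstdio>

// Local mirror of NS_lroundf() quoted above, to keep the sketch self-contained.
static int32_t RoundToInt32(float aNum) {
  return aNum >= 0.0f ? int32_t(aNum + 0.5f) : int32_t(aNum - 0.5f);
}

int main() {
  const float appUnitsPerCSSPixel = 60.0f;
  const int32_t stored = 10799;  // the app-unit width from the steps above

  // Fractional CSS px, as devtools displays it (assumed: plain float division).
  float px = float(stored) / appUnitsPerCSSPixel;
  printf("%.17g\n", px);             // prints 179.98333740234375

  // Integer-pixel consumers round via NSToIntRound()/NS_lroundf().
  printf("%d\n", RoundToInt32(px));  // prints 180
  return 0;
}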

(My C++ Kung-Fu is weak, so don’t hesitate to correct me.)

http://www.otsukare.info/2016/05/26/css-width-gecko


Jen Kagan: day 3: draw it out

Thursday, May 26, 2016, 01:13

i’m working on a patch for adding a button that lights up if there’s a playable URL in the page.

jared sent some links and explanations, but i also needed to draw it out.

button-logic-drawing

logic:

“OK, so you’ve got a content script, that runs in each web page, and your addon code, which isn’t in the page and can’t see it

So you’ll want your content script to always load on every page, and do a querySelectorAll check for the youtube href*= selectors used by the context menu code

Once the content script (loaded by pageMod) finds a hit in the page, it should send a message back to the main addon script, telling it to light up the button

We’ll also need to detect when a new page loads and un-highlight the button (hopefully you can do this from the addon, without involving a content script, but I’m not sure)”

step-by-step:

  1. get the script in the page to send over a message, and have the addon code just console.log something when it gets the message so you can convince yourself messaging is working. I think self.port is the addon SDK wrapper around postmessage.
  2. once you’re able to send messages over, then you can have your content script look for youtube links in the page, and send over a message that says if it found a match
  3. and then you can update the addon code to toggle the button state based on whether a matching link was found

links:

sdk guide to content scripts

port object

postmessage

page-mod

http://www.jkitppit.com/2016/05/25/day-3-draw-it-out/


Air Mozilla: The Joy of Coding - Episode 58

Wednesday, May 25, 2016, 20:00

mconley livehacks on real Firefox bugs while thinking aloud.

https://air.mozilla.org/the-joy-of-coding-episode-58/


Mozilla Open Policy & Advocacy Blog: The countdown is on: 24 months to GDPR compliance

Wednesday, May 25, 2016, 16:47

Twenty-four months from now, a new piece of legislation will apply throughout Europe: the General Data Protection Regulation (GDPR). Broadly speaking, we see the GDPR as advantageous for both users and companies, with trust and security being key components of a successful business in today’s digital age. We’re glad to see an update for European data protection law – the GDPR is replacing the earlier data protection “directive”, 95/46/EC, which was drafted over 20 years ago when only 1% of Europeans had access to the Internet. With the GDPR’s formal adoption as of 14th April 2016, the countdown to compliance has begun. Businesses operating in all 28 European Union (EU) member states have until 25th May 2018 to get ready for compliance, or face fines of up to 4% of their worldwide turnover.

The GDPR aims to modernise data protection rules for today’s digital challenges, increase harmonisation within the EU, strengthen enforcement powers, and increase user control over personal data. The Regulation moved these goals forward, although it is not without its flaws. With some elements of it, the devil will be in the details, and it remains to be seen what the impact will be in practice.

That aside, there are many good pieces of the Regulation which stand out. We want to call out five:

  1. Less is more: we welcome the reaffirmation of core privacy principles requiring that businesses limit the amount of data they collect and justify the purposes for which they collect it. At Mozilla, we put these principles into action and advocate for businesses to adopt lean data practices.
  2. Greater transparency equals smarter individual choice: we applaud the Regulation’s endorsement of transparency and user education as key assets.
  3. Privacy as the default setting: businesses managing data will have to consider privacy throughout the entire lifecycle of products and services. That means that from the day teams start designing a product, privacy must be top of mind. It also means that strong privacy should always be the “by-default setting”.
  4. Privacy and competition are mutually reinforcing: with added controls for users, like the ability to port their personal data, users remain the owners of their data even when they leave a service. Because this makes it easier to move to another provider, it creates competition and prevents user lock-in within one online platform.
  5. What’s good for the user is good for business: strengthened data and security practices also decrease the risks associated with personal data collection and processing, for both users and businesses. This is not negligible: in 2015, data breaches cost on average USD 3.79 million per impacted company, not to mention the customer trust that was lost.

Above and beyond the direct impact of the GDPR, its standard-setting potential is substantial. It is more than a purely regional regulation, as it will have global impact. Any business that markets goods or services to users in the EU will be subject to compliance, regardless of whether their business is located in the EU.

We will continue to track the implications of the GDPR over the next 24 months as it comes into force, and will stay engaged with any opportunities to work out the final details. We encourage European Internet users and businesses everywhere to join us – stay tuned as we continue to share thoughts and updates here.

https://blog.mozilla.org/netpolicy/2016/05/25/the-countdown-is-on-24-months-to-gdpr-compliance/


