LiveInternet.ru statistics: number of hits and visitors shown
Created: 19.06.2007
Entries:
Comments:
Written: 7

Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Source information: http://planet.mozilla.org/.
This journal is generated from the open RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact information page.


Air Mozilla: Reps weekly

Thursday, 30 April 2015, 18:00

Doug Belshaw: Web Literacy Map v1.5 is now live at teach.mozilla.org

Thursday, 30 April 2015, 16:49

Mozilla has soft-launched teach.mozilla.org. This provides a new home for the Web Literacy Map, which now stands at v1.5.

Web Literacy Map v1.5

While I’m a bit sad at the lack of colour compared to the previous version, at least it’s live and underpinning the ‘Teach Like Mozilla’ work!


Questions? Comments? I’m @dajbelshaw or you can email me: mail@dougbelshaw.com

http://literaci.es/teachmozilla


Gregory Szorc: Automatically Redirecting Mercurial Pushes

Thursday, 30 April 2015, 15:30

Managing URLs in distributed version control tools can be a pain, especially if multiple repositories are involved. For example, with Mozilla's repository-based code review workflow (you push to a special review repository to initiate code review - this is conceptually similar to GitHub pull requests), there exist separate code review repositories for each logical repository. Figuring out how repositories map to each other and setting up remote paths for each new clone can be a pain and time sink.

As of today, we can now do something better.

If you push to ssh://reviewboard-hg.mozilla.org/autoreview, Mercurial will automatically figure out the appropriate review repository and redirect your push automatically. In other words, if we have MozReview set up to review whatever repository you are working on, your push and review request will automatically go through. No need to figure out what the appropriate review repo is or configure repository URLs!

Here's what it looks like:

$ hg push review
pushing to ssh://reviewboard-hg.mozilla.org/autoreview
searching for appropriate review repository
redirecting push to ssh://reviewboard-hg.mozilla.org/version-control-tools/
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
remote: Trying to insert into pushlog.
remote: Inserted into the pushlog db successfully.
submitting 1 changesets for review

changeset:  11043:b65b087a81be
summary:    mozreview: create per-commit identifiers (bug 1160266)
review:     https://reviewboard.mozilla.org/r/7953 (draft)

review id:  bz://1160266/gps
review url: https://reviewboard.mozilla.org/r/7951 (draft)
(visit review url to publish this review request so others can see it)

Read the full instructions for more details.

This requires an updated version-control-tools repository, which you can get by running mach mercurial-setup from a Firefox repository.

For those that are curious, the autoreview repo/server advertises a list of repository URLs and their root commit SHA-1. The client automatically sends the push to a URL sharing the same root commit. The code is quite simple.
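That matching step can be sketched in a few lines. Everything below is illustrative (hypothetical function name, made-up SHA-1 prefixes), not the actual autoreview protocol; the core idea is simply a lookup from root commit to review repository URL:

```rust
use std::collections::HashMap;

// The server advertises (root commit SHA-1 -> review repo URL) pairs; the
// client redirects its push to the entry whose root commit matches the root
// of the repository it is pushing from.
fn pick_review_repo<'a>(
    advertised: &HashMap<&'a str, &'a str>,
    local_root: &str,
) -> Option<&'a str> {
    advertised.get(local_root).copied()
}

fn main() {
    // Illustrative SHA-1 prefixes; real roots are full 40-character hashes.
    let advertised: HashMap<&str, &str> = [
        ("8ba995b74e18", "ssh://reviewboard-hg.mozilla.org/version-control-tools/"),
        ("e4f4569d4518", "ssh://reviewboard-hg.mozilla.org/gecko/"),
    ]
    .into_iter()
    .collect();
    assert_eq!(
        pick_review_repo(&advertised, "8ba995b74e18"),
        Some("ssh://reviewboard-hg.mozilla.org/version-control-tools/")
    );
    assert!(pick_review_repo(&advertised, "unknown").is_none());
}
```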

While this is only implemented for MozReview, I could envision us doing something similar for other centralized repository-centric services, such as Try and Autoland. Stay tuned.

http://gregoryszorc.com/blog/2015/04/30/automatically-redirecting-mercurial-pushes


Nick Cameron: rustfmt - call for contributions

Thursday, 30 April 2015, 06:36
I've been experimenting with a rustfmt tool for a while now. It's finally in working shape (though still very, very rough) and I'd love some help on making it awesome.

rustfmt is a reformatting tool for Rust code. The idea is that it takes your code, tidies it up, and makes sure it conforms to a set of style guidelines. There are similar tools for C++ (clang-format), Go (gofmt), and many other languages. It's a really useful tool to have for a language, since it makes it easy to adhere to style guidelines and allows for mass changes when guidelines change, thus making it possible to actually change the guidelines as needed.

Eventually I would like rustfmt to do lots of cool stuff like changing glob imports to list imports, or emit refactoring scripts to rename variables to adhere to naming conventions. In the meantime, there are lots of interesting questions about how to lay out things like function declarations and match expressions.

My approach to rustfmt is very incremental. It is usable now and gives good results, but it only touches a tiny subset of language items, for example function definitions and calls, and string literals. It preserves code elsewhere. This makes it immediately useful.

I have managed to run it on several crates (or parts of crates) in the rust distro. It also bootstraps, i.e., you can run rustfmt on rustfmt before every check-in; in fact, this is part of the test suite.

It would be really useful to have people running this tool on their own code or on other crates in the rust distro, and filing issues and/or test cases where things go wrong. This should actually be a useful tool to run, not just a chore, and will get more useful with time.

It's a great project to hack on - you'll learn a fair bit about the Rust compiler's frontend and get a great understanding of more corners of the language than you'll ever want to know about. It's early days too, so there is plenty of scope for having a big impact on the project. I find it a lot of fun too! Just please forgive some of the hacky code that I've already written.

Here is the rustfmt repo on GitHub. I just added a bunch of information to the repo readme which should help new contributors. Please let me know if there is other information that should go in there. I've also created some good issues for new contributors. If you'd like to help out and need help, please ping me on irc (I'm nrc).

http://featherweightmusings.blogspot.com/2015/04/rustfmt-call-for-contributions.html


Anthony Hughes: The Testday Brand

Thursday, 30 April 2015, 02:58

Over the last few months I’ve been surveying people who’ve participated in testdays. The purpose of this effort is to develop an understanding of the current “brand” that testdays present. I’ve come to realize that our goal to “re-invigorate” the testdays program was based on assumptions that testdays were both well-known and misunderstood. I wanted to cast aside that assumption and make gains on a new plan which includes developing a positive brand.

The survey itself was quite successful as I received over 200 responses, 10x what I normally get out of my surveys. I suspect this was because I kept it short, under a minute to complete; something I will keep in mind for the future.

Who Shared the Most?

[chart: testday-responses]

When looking at who responded most, the majority were unaffiliated with QA (53%). Of the 47% who were affiliated with QA, nearly two thirds were volunteers.

How do they see themselves?

[chart: testday-mozillians]

When looking at how respondents self-identified, only people who identified as volunteers did not self-identify as a Mozillian. In terms of vouching, people affiliated with QA seem to have a higher proportion of vouched Mozillians than those unaffiliated with QA. This tells me that we need to be doing a better job of converting new contributors into Mozillians and active into vouched Mozillians.

What do they know about Testdays?

[chart: testday-familiarity]

When looking at how familiar people are with the concept of testdays, people affiliated with QA are most aware while people outside of QA are least aware. No group of people is 100% familiar with testdays, which tells me we need to do a better job of educating people about testdays.

What do they think about Testdays?

[chart: testday-keywords]

Most respondents identified testdays with some sort of activity (30%), a negative feeling (22%), a community aspect (15%), or a specific product (15%). Positive characteristics were lowest on the list (4%). This was probably the most telling question I asked as it really helps me see the current state of the brand of testdays and not just for the responses I received. Reading between the lines, looking for what is not there, I can see testdays relate poorly to anything outside the scope of blackbox testing on Firefox (eg. automation, services, web qa, security qa, etc).

Where do I go from here?

1. We need to diversify the testday brand so it is about more than just testing Firefox, and expand it to enable testing across all areas in need.

2. We need to solve some of the negative brand associations by making activities more understandable and relevant, by having shorter events more frequently, and by rewarding contributions (even work that doesn’t net a bug).

3. We need to teach people that testdays are about more than just testing. Things like writing tests, writing new documentation, updating and translating existing documentation, and mentoring newcomers are all part of what testdays can enable.

4. Once we’ve identified the brand we want to put forward, we need to do a much better job of frequently educating and re-educating people about testdays and the value they provide.

5. We need to enable testdays to facilitate converting newcomers into Mozillians and active contributors into vouched Mozillians.

My immediate next step is to have the lessons I’ve learned here integrated into a plan of action to rebrand testdays. Rest assured I am going to continue to push my peers on this, to be an advocate for improving the ways we collaborate, and to continually revisit the brand to make sure we aren’t losing sight of reality.

I’d like to end with a thank you to everyone who took the time to respond to my survey. As always, please leave a comment below if you have any interesting insights or questions.

Thank you!

https://ashughes.com/?p=309


Air Mozilla: Quality Team (QA) Public Meeting

Wednesday, 29 April 2015, 23:30

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

https://air.mozilla.org:443/quality-team-qa-public-meeting-20150429/


Niko Matsakis: On reference-counting and leaks

Wednesday, 29 April 2015, 22:39

What’s a 1.0 release without a little drama? Recently, we discovered that there was an oversight in one of the standard library APIs that we had intended to stabilize. In particular, we recently added an API for scoped threads – that is, child threads which have access to the stack frame of their parent thread.

The flaw came about because, when designing the scoped threads API, we failed to consider the impact of resource leaks. Rust’s ownership model makes it somewhat hard to leak data, but not impossible. In particular, using reference-counted data, you can construct a cycle in the heap, in which case the components of that cycle may never be freed.

Some commenters online have taken this problem with the scoped threads API to mean that Rust’s type system was fundamentally flawed. This is not the case: Rust’s guarantee that safe code is memory safe is as true as it ever was. The problem was really specific to the scoped threads API, which was making flawed assumptions; this API has been marked unstable, and there is an RFC proposing a safe alternative.

That said, there is an interesting, more fundamental question at play here. We long ago decided that, to make reference-counting practical, we had to accept resource leaks as a possibility. But some recent proposals have suggested that we should place limits on the Rc type to avoid some kinds of reference leaks. These limits would make the original scoped threads API safe. However, these changes come at a pretty steep price in composability: they effectively force a deep distinction between “leakable” and “non-leakable” data, which winds up affecting all levels of the system.

This post is my attempt to digest the situation and lay out my current thinking. For those of you who don’t want to read this entire post (and I can’t blame you, it’s long), let me just copy the most salient paragraph from my conclusion:

This is certainly a subtle issue, and one where reasonable folk can disagree. In the process of drafting (and redrafting…) this post, my own opinion has shifted back and forth as well. But ultimately I have landed where I started: the danger and pain of bifurcating the space of types far outweighs the loss of this particular RAII idiom.

All right, for those of you who want to continue, this post is divided into three sections:

  1. Section 1 explains the problem and gives some historical background.
  2. Section 2 explains the “status quo”.
  3. Section 3 covers the proposed changes to the reference-counted type and discusses the tradeoffs involved there.

Section 1. The problem in a nutshell

Let me start by summarizing the problem that was uncovered in more detail. The root of the problem is an interaction between the reference-counting and threading APIs in the standard library. So let’s look at each in turn. If you’re familiar with the problem, you can skip ahead to section 2.

Reference-counting as the poor man’s GC

Rust’s standard library includes the Rc and Arc types which are used for reference-counted data. These are widely used, because they are the most convenient way to create data whose ownership is shared amongst many references rather than being tied to a particular stack frame.

Like all reference-counting systems, Rc and Arc are vulnerable to reference-count cycles. That is, if you create a reference-counted box that contains a reference to itself, then it will never be collected. To put it another way, Rust gives you a lot of safety guarantees, but it doesn’t protect you from memory leaks (or deadlocks, which turns out to be a very similar problem).
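That failure mode is easy to reproduce with today's Rc. The Guard and Node types below are illustrative; the point is that safe code alone can keep a destructor from ever running:

```rust
use std::cell::RefCell;
use std::rc::Rc;
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether the destructor ever executed.
static DROPPED: AtomicBool = AtomicBool::new(false);

// `Guard` stands in for any value whose destructor matters.
struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::SeqCst);
    }
}

// A node that can point back at itself through an Rc.
struct Node {
    _guard: Guard,
    next: RefCell<Option<Rc<Node>>>,
}

// Build a self-cycle, drop our only handle, and report whether Drop ran.
fn leak_via_cycle() -> bool {
    let node = Rc::new(Node {
        _guard: Guard,
        next: RefCell::new(None),
    });
    *node.next.borrow_mut() = Some(Rc::clone(&node)); // refcount is now 2
    drop(node); // refcount falls to 1; the cycle keeps the node alive
    DROPPED.load(Ordering::SeqCst)
}

fn main() {
    // Safe code only, yet the destructor never runs.
    assert!(!leak_via_cycle());
}
```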

The fact that we don’t protect against leaks is not an accident. This was a deliberate design decision that we made while transitioning from garbage-collected types (@T and @mut T) to user-defined reference counting. The reason is that preventing leaks requires either a runtime with a cycle collector or complex type-system tricks. The option of a mandatory runtime was out, and the type-system tricks we explored were either too restrictive or too complex. So we decided to make a pragmatic compromise: to document the possibility of leaks (see, for example, this section of the Rust reference manual) and move on.

In practice, the possibility of leaks is mostly an interesting technical caveat: I’ve not found it to be a big issue in practice. Perhaps because problems arose so rarely in practice, some things—like leaks—that should not have been forgotten were… partially forgotten. History became legend. Legend became myth. And for a few years, the question of leaks seemed to be a distant, settled issue, without much relevance to daily life.

Thread and shared scopes

With that background on Rc in place, let’s turn to threads. Traditionally, Rust threads were founded on a “zero-sharing” principle, much like Erlang. However, as Rust’s type system evolved, we realized we could do much better: the same type-system rules that ensured memory safety in sequential code could be used to permit sharing in parallel code as well, particularly once we adopted RFC 458 (a brilliant insight by pythonesque).

The basic idea is to start a child thread that is tied to a particular scope in the code. We want to guarantee that before we exit that scope, the thread will be joined. If we can do this, then we can safely permit that child thread access to stack-allocated data, so long as that data outlives the scope; this is safe because Rust’s type-system rules already ensure that any data shared between multiple threads must be immutable (more or less, anyway).

So the question then is how can we designate the scope of the children threads, and how can we ensure that the children will be joined when that scope exits. The original proposal was based on closures, but in the time since it was written, the language has shifted to using more RAII, and hence the scoped API is based on RAII. The idea is pretty simple. You write a call like the following:

fn foo(data: &[i32]) {
  ...
  let guard = thread::scoped(|| /* body of the child thread */);
  ...
}

The scoped function takes a closure which will be the body of the child thread. It returns to you a guard value: running the destructor of this guard will cause the thread to be joined. This guard is always tied to a particular scope in the code. Let’s call the scope 'a. The closure is then permitted access to all data that outlives 'a. For example, in the code snippet above, 'a might be the body of the function foo. This means that the closure could safely access the input data, because that must outlive the fn body. The type system ensures that no reference to the guard exists outside of 'a, and hence we can be sure that guard will go out of scope sometime before the end of 'a and thus trigger the thread to be joined. At least that was the idea.

The conflict

By now perhaps you have seen the problem. The scoped API is only safe if we can guarantee that the guard’s destructor runs, so that the thread will be joined; but, using Rc, we can leak values, which means that their destructors never run. So, by combining Rc and scoped, we can cause a thread to be launched that will never be joined. This means that this thread could run at any time and try to access data from its parent’s stack frame – even if that parent has already completed, and thus the stack frame is garbage. Not good!

So where does the fault lie? From the point of view of history, it is pretty clear: the scoped API was ill designed, given that Rc already existed. As I wrote, we had long ago decided that the most practical option was to accept that leaks could occur. This implies that if the memory safety of an API depends on a destructor running, you can’t relinquish ownership of the value that carries that destructor (because the end-user might leak it).
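Today's standard library bakes this stance in: std::mem::forget is a safe function, so any API must tolerate its destructors being skipped. A minimal sketch (the JoinGuard name is illustrative, echoing the old scoped API; the atomic flag stands in for the join side-effect):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether the "join" side-effect ever happened.
static JOINED: AtomicBool = AtomicBool::new(false);

// Illustrative guard whose destructor stands in for joining a child thread.
struct JoinGuard;

impl Drop for JoinGuard {
    fn drop(&mut self) {
        JOINED.store(true, Ordering::SeqCst);
    }
}

fn leak_guard() -> bool {
    let guard = JoinGuard;
    std::mem::forget(guard); // safe to call; Drop is simply skipped
    JOINED.load(Ordering::SeqCst)
}

fn main() {
    // The "join" never happens, and no unsafe code was involved.
    assert!(!leak_guard());
}
```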

It is totally possible to fix the scoped API, and in fact there is already an RFC showing how this can be done (I’ll summarize it in section 2, below). However, some people feel that the decision we made to permit leaks was the wrong one, and that we ought to have some limits on the RC API to prevent leaks, or at least prevent some leaks. I’ll dig into those proposals in section 3.

Section 2. What is the impact of leaks on the status quo?

So, if we continue with the status quo, and accept that resource leaks can occur with Rc and Arc, what is the impact of that? At first glance, it might seem that the possibility of resource leaks is a huge blow to RAII. After all, if you can’t be sure that the destructor will run, how can you rely on the destructor to do cleanup? But when you look closer, it turns out that the problem is a lot more narrow.

“Average Rust User”

I think it’s helpful to come at this problem from two difference perspectives. The first is: what do resource leaks mean for the average Rust user? I think the right way to look at this is that the user of the Rc API has an obligation to avoid cycle leaks or break cycles. Failing to do so will lead to bugs – these could be resource leaks, deadlocks, or other things. But leaks cannot lead to memory unsafety. (Barring invalid unsafe code, of course.)

It’s worth pointing out that even if you are using Rc, you don’t have to worry about memory leaks due to forgetting to decrement a reference or anything like that. The problem really boils down to ensuring that you have a clear strategy for avoiding cycles, which usually boils to an “ownership DAG” of strong references (though in some cases, breaking cycles explicitly may also be an option).
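One conventional way to keep the strong references in an ownership DAG is to use rc::Weak for back-edges. A sketch with a hypothetical Tree type:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Children hold strong references downward; parents are reachable only
// through Weak back-pointers, so no cycle of strong counts can form.
struct Tree {
    parent: RefCell<Weak<Tree>>,
    children: RefCell<Vec<Rc<Tree>>>,
}

// Build a two-node tree and return (root, child) strong counts.
fn counts() -> (usize, usize) {
    let root = Rc::new(Tree {
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    });
    let child = Rc::new(Tree {
        parent: RefCell::new(Rc::downgrade(&root)),
        children: RefCell::new(Vec::new()),
    });
    root.children.borrow_mut().push(Rc::clone(&child));
    (Rc::strong_count(&root), Rc::strong_count(&child))
}

fn main() {
    // The back-edge is weak, so dropping `root` frees the whole tree.
    assert_eq!(counts(), (1, 2));
}
```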

“Author of unsafe code”

The other perspective to consider is the person who is writing unsafe code. Unsafe code frequently relies on destructors to do cleanup. I think the right perspective here is to view a destructor as akin to any other user-facing function: in particular, it is the user’s responsibility to call it, and they may accidentally fail to do so. Just as you have to write your API to be defensive about users invoking functions in the wrong order, you must be defensive about them failing to invoke destructors due to a resource leak.

It turns out that the majority of RAII idioms are actually perfectly memory safe even if the destructors don’t run. For example, if we examine the Rust standard library, it turns out that all of the destructors therein are either optional or can be made optional:

  1. Straight-forward destructors like Box or Vec leak memory if they are not freed; clearly no worse than the original leak.
  2. Leaking a mutex guard means that the mutex will never be released. This is likely to cause deadlock, but not memory unsafety.
  3. Leaking a RefCell guard means that the RefCell will remain in a borrowed state. This is likely to cause thread panic, but not memory unsafety.
  4. Even fancy iterator APIs like drain, which was initially thought to be problematic, can be implemented in such a way that they cause leaks to occur if they are leaked, but not memory unsafety.

In all of these cases, there is a guard value that mediates access to some underlying value. The type system already guarantees that the original value cannot be accessed while the guard is in scope. But how can we ensure safety outside of that scope in the case where the guard is leaked? If you look at the cases above, I think they can be grouped into two patterns:

  1. Ownership: Things like Box and Vec simply own the values they are protecting. This means that if they are leaked, those values are also leaked, and hence there is no way for the user to access it.
  2. Pre-poisoning: Other guards, like MutexGuard, put the value they are protecting into a poisoned state that will lead to dynamic errors (but not memory unsafety) if the value is accessed without having run the destructor. In the case of MutexGuard, the “poisoned” state is that the mutex is locked, which means a later attempt to lock it will simply deadlock unless the MutexGuard has been dropped.
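The pre-poisoning pattern can be sketched with a toy lock (all names below are illustrative, not the real MutexGuard implementation): the flag is set before the guard is handed out, and only the guard's destructor clears it, so leaking the guard leaves a dynamic error rather than memory unsafety:

```rust
use std::cell::Cell;

// A toy single-threaded "lock". `busy` is set *before* the guard is
// returned; only dropping the guard clears it.
struct Lock {
    busy: Cell<bool>,
}

struct LockGuard<'a> {
    lock: &'a Lock,
}

impl Lock {
    fn try_acquire(&self) -> Option<LockGuard<'_>> {
        if self.busy.get() {
            None // still "poisoned": a previous guard was never dropped
        } else {
            self.busy.set(true);
            Some(LockGuard { lock: self })
        }
    }
}

impl<'a> Drop for LockGuard<'a> {
    fn drop(&mut self) {
        self.lock.busy.set(false);
    }
}

fn main() {
    let lock = Lock { busy: Cell::new(false) };
    let guard = lock.try_acquire().unwrap();
    std::mem::forget(guard); // leak the guard
    // The lock stays busy forever: a dynamic failure, not memory unsafety.
    assert!(lock.try_acquire().is_none());
}
```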

What makes scoped threads different?

So if most RAII patterns continue to work fine, what makes scoped different? I think there is a fundamental difference between scoped and these other APIs; this difference was well articulated by Kevin Ballard:

thread::scoped is special because it’s using the RAII guard as a proxy to represent values on the stack, but this proxy is not actually used to access those values.

If you recall, I mentioned above that all the guards serve to mediate access to some value. In the case of scoped, the guard is mediating access to the result of a computation – the data that is being protected is “everything that the closure may touch”. The guard, in other words, doesn’t really know the specific set of affected data, and it thus cannot hope to either own or pre-poison the data.

In fact, I would take this a step farther, and say that I think that in this kind of scenario, where the guard doesn’t have a connection to the data being protected, RAII tends to be a poor fit. This is because, generally, the guard doesn’t have to be used, so it’s easy for the user to accidentally drop the guard on the floor, causing the side-effects of the guard (in this case, joining the thread) to occur too early. I’ll spell this out a bit more in the section below.

Put more generally, accepting resource leaks does mean that there is a Rust idiom that does not work. In particular, it is not possible to create a borrowed reference that can be guaranteed to execute arbitrary code just before it goes out of scope. What we’ve seen though is that, frequently, it is not necessary to guarantee that the code will execute – but in the case of scoped, because there is no direct connection to the data being protected, joining the thread is the only solution.

Using closures to guarantee code execution when exiting a scope

If we can’t use an RAII-based API to ensure that a thread is joined, what can we do? It turns out that there is a good alternative, laid out in RFC 1084. The basic idea is to restructure the API so that you create a “thread scope” and spawn threads into that scope (in fact, the RFC lays out a more general version that can be used not only for threads but for any bit of code that needs guaranteed execution on exit from a scope). This thread scope is delineated using a closure. In practical terms, this means that starting a scoped thread looks something like this:

fn foo(data: &[i32]) {
  ...
  thread::scope(|scope| {
    let future = scope.spawn(|| /* body of the child thread */);
    ...
  });
}

As you can see, whereas before calling thread::scoped started a new thread immediately, it now just creates a thread scope – it doesn’t itself start any threads. A borrowed reference to the thread scope is passed to a closure (here it is the argument scope). The thread scope offers a method spawn that can be used to start a new thread tied to a specific scope. This thread will be joined when the closure returns; as such, it has access to any data that outlives the body of the closure. Note that the spawn method still returns a future to the result of the spawned thread; this future is similar to the old join guard, because it can be used to join the thread early. But this future doesn’t have a destructor. If the thread is not joined through the future, it will still be automatically joined when the closure returns.
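Essentially this design later stabilized in the standard library as std::thread::scope (Rust 1.63), so the pattern can be shown runnable today; the parallel_sum function here is just an illustrative use:

```rust
use std::thread;

// Sum a slice by handing half of it to a scoped child thread. The child
// may borrow `data` because the scope guarantees it is joined before
// `thread::scope` returns.
fn parallel_sum(data: &[i32]) -> i32 {
    thread::scope(|scope| {
        let mid = data.len() / 2;
        let (left, right) = data.split_at(mid);
        let handle = scope.spawn(|| left.iter().sum::<i32>());
        let right_sum: i32 = right.iter().sum();
        // Joining through the handle returns the child's result; any thread
        // not joined explicitly is still joined when the closure returns.
        handle.join().unwrap() + right_sum
    })
}

fn main() {
    assert_eq!(parallel_sum(&[1, 2, 3, 4]), 10);
    assert_eq!(parallel_sum(&[]), 0);
}
```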

In the case of this particular API, I think closures are a better fit than RAII. In particular, the closure serves to make the scope where the threads are active clear and explicit; this in turn avoids certain footguns that were possible with the older, RAII-based API. To see an example of what I mean, consider this code that uses the old API to do a parallel quicksort:

fn quicksort(data: &mut [i32]) {
  if data.len() <= 1 { return; }
  let pivot = data.len() / 2;
  let index = partition(data, pivot);
  let (left, right) = data.split_at_mut(index);
  let _guard1 = thread::scoped(|| quicksort(left));
  let _guard2 = thread::scoped(|| quicksort(right));
}

I want to draw attention to one snippet of code at the end:

  let (left, right) = data.split_at_mut(index);
  let _guard1 = thread::scoped(|| quicksort(left));
  let _guard2 = thread::scoped(|| quicksort(right));

Notice that we have to make dummy variables like _guard1 and _guard2. If we left those variables off, then the thread would be immediately joined, which means we wouldn’t get any actual parallelism. What’s worse, the code would still work, it would just run sequentially. The need for these dummy variables, and the resulting lack of clarity about just when parallel threads will be joined, is a direct result of using RAII here.

Compare that code above to using a closure-based API:

  thread::scope(|scope| {
    let (left, right) = data.split_at_mut(index);
    scope.spawn(|| quicksort(left));
    scope.spawn(|| quicksort(right));
  });

I think it’s much clearer. Moreover, the closure-based API opens the door to other methods that could be used with scope, like convenience methods to do parallel maps and so forth.
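With the std::thread::scope API that later stabilized, the closure-based quicksort can be written out in full. The partition helper below is one concrete (Lomuto) choice, since the post leaves it unspecified:

```rust
use std::thread;

// Lomuto partition using the last element as the pivot; returns the
// pivot's final index.
fn partition(data: &mut [i32]) -> usize {
    let pivot = data[data.len() - 1];
    let mut i = 0;
    for j in 0..data.len() - 1 {
        if data[j] <= pivot {
            data.swap(i, j);
            i += 1;
        }
    }
    data.swap(i, data.len() - 1);
    i
}

fn quicksort(data: &mut [i32]) {
    if data.len() <= 1 {
        return;
    }
    let index = partition(data);
    let (left, right) = data.split_at_mut(index);
    let right = &mut right[1..]; // skip the pivot, which is already in place
    thread::scope(|scope| {
        scope.spawn(|| quicksort(left));
        scope.spawn(|| quicksort(right));
    }); // both children are joined here, before `data` can be touched again
}

fn main() {
    let mut v = vec![5, 3, 8, 1, 9, 2, 7];
    quicksort(&mut v);
    assert_eq!(v, [1, 2, 3, 5, 7, 8, 9]);
}
```

Spawning a thread per recursive call is wasteful for real workloads, but it keeps the sketch close to the snippet above.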

Section 3. Can we prevent (some) resource leaks?

Ok, so in the previous two sections, I summarized the problem and discussed the impact of resource leaks on Rust. But what if we could avoid resource leaks in the first place? There have been two RFCs on this topic: RFC 1085 and RFC 1094.

The two RFCs are quite different in the details, but share a common theme. The idea is not to avoid all resource leaks altogether; I think everyone recognizes that this is not practical. Instead, the goal is to try and divide types into two groups: those that can be safely leaked, and those that cannot. You then limit the Rc and Arc types so that they can only be used with types that can safely be leaked.

This approach seems simple but it has deep ramifications. It means that Rc and Arc are no longer fully general container types. Generic code that wishes to operate on data of all types (meaning both types that can and cannot leak) can’t use Rc or Arc internally, at least not without some hard choices.

Rust already has a lot of precedent for categorizing types. For example, we use a trait Send to designate “types that can safely be transferred to other threads”. In some sense, dividing types into leak-safe and not-leak-safe is analogous. But my experience has been that every time we draw a fundamental distinction like that, it carries a high price. This distinction “bubbles up” through APIs and affects decisions at all levels. In fact, we’ve been talking about one case of this rippling effect through this post – the fact that we have two reference-counting types, one atomic (Arc) and one not (Rc), is precisely because we want to distinguish thread-safe and non-thread-safe operations, so that we can get better performance when thread safety is not needed.

What this says to me is that we should be very careful when introducing blanket type distinctions. The places where we use this mechanism today – thread-safety, copyability – are fundamental to the language, and very important concepts, and I think they carry their weight. Ultimately, I don’t think resource leaks quite fit the bill. But let me dive into the RFCs in question and try to explain why.

RFC 1085 – the Leak trait

The first of the two RFCs is RFC 1085. This RFC introduces a trait called Leak, which operates exactly like the existing Send trait. It indicates “leak-safe” data. Like Send, it is implemented by default. If you wish to make leaks impossible for a type, you can explicitly opt out with a negative impl like impl !Leak for MyType. When you create a Rc or Arc, either T: Leak must hold, or else you must use an unsafe constructor to certify that you will not create a reference cycle.

The fact that Leak is automatically implemented promises to make it mostly invisible. Indeed, in the prototype that Jonathan Reem implemented, he found relatively little fallout in the standard library and compiler. While encouraging, I still think we’re going to encounter problems of composability over time.

There are a couple of scenarios where the Leak trait will, well, leak into APIs where it doesn’t seem to belong. One of the most obvious is trait objects. Imagine I am writing a serialization library, and I have a Serializer type that combines an output stream (a Box&lt;Writer&gt;) along with some serialization state:

struct Serializer {
  output_stream: Box<Writer>,
  serialization_state: u32,
  ...
}

So far so good. Now someone else comes along and would like to use my library. They want to put this Serializer into a reference-counted box that is shared amongst many users, so they try to make an Rc&lt;Serializer&gt;. Unfortunately, this won’t work. This seems somewhat surprising, since weren’t all types supposed to be Leak by default?

The problem lies in the Box&lt;Writer&gt; object – a trait object is designed to hide the precise type of Writer that we are working with. That means that we don’t know whether this particular Writer implements Leak or not. For this client to be able to place Serializer into an Rc, there are two choices. The client can use unsafe code, or I, the library author, can modify my Serializer definition as follows:

struct Serializer {
  output_stream: Box<Writer+Leak>,
  serialization_state: u32,
  ...
}

This is what I mean by Leak “bubbling up”. It’s already the case that I, as a library author, want to think about whether my types can be used across threads and try to enable that. Under this proposal, I also have to think about whether my types should be usable in Rc, and so forth.
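This bubbling up is exactly how the Send auto trait behaves today, and it is worth seeing concretely: a boxed trait object erases whether the writer is Send, so a struct containing one cannot cross threads unless the bound is written into the object type itself. A sketch in current syntax (the Serializer shape mirrors the hypothetical one above):

```rust
use std::io::Write;
use std::thread;

// Without `+ Send` in the object type, this struct could not be moved
// to another thread: the compiler cannot see through the erased type.
struct Serializer {
    output_stream: Box<dyn Write + Send>,
    serialization_state: u32,
}

fn main() {
    let s = Serializer {
        output_stream: Box::new(Vec::new()), // Vec<u8> implements Write and is Send
        serialization_state: 0,
    };
    // Moving `s` across threads only compiles because of the `+ Send` bound.
    let handle = thread::spawn(move || {
        let mut s = s;
        s.output_stream.write_all(b"ok").unwrap();
        s.serialization_state + 1
    });
    assert_eq!(handle.join().unwrap(), 1);
    println!("sent across threads");
}
```

Delete the `+ Send` and the call to thread::spawn fails to compile; Leak would impose the same discipline on everyone who wants their types to end up inside an Rc.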

Now, if you avoid trait objects, the problem is smaller. One advantage of generics is that they don’t encapsulate what type of writer you are using and so forth, which means that the compiler can analyze the type to see whether it is thread-safe or leak-safe or whatever. Until now we’ve found that many libraries avoid trait objects partly for this reason, and I think that’s good practice in simple cases. But as things scale up, encapsulation is a really useful mechanism for simplifying type annotations and making programs concise and easy to work with.
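The generic version shows the contrast: because the concrete writer type stays visible, auto traits like Send (and, under the RFC, Leak) propagate through it automatically, with no bound written anywhere. A sketch, with a hypothetical assert_send probe of my own:

```rust
use std::io::Write;

// Generic version: the writer's concrete type is visible to the compiler,
// so `Serializer<W>` is automatically Send whenever `W` is Send.
struct Serializer<W: Write> {
    output_stream: W,
    serialization_state: u32,
}

// Compile-time probe: this call only type-checks for Send values.
fn assert_send<T: Send>(_: &T) {}

fn main() {
    let s: Serializer<Vec<u8>> = Serializer {
        output_stream: Vec::new(),
        serialization_state: 0,
    };
    assert_send(&s); // fine: Vec<u8> is Send, so Serializer<Vec<u8>> is too

    let mut s = s;
    s.output_stream.write_all(b"hi").unwrap();
    assert_eq!(s.serialization_state, 0);
    println!("generic serializer is Send");
}
```

The price, as the paragraph above says, is that every user of Serializer now carries the writer type in their own signatures.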

There is one other point. RFC 1085 also includes an unsafe constructor for Rc, which in principle allows you to continue using Rc with any type, so long as you are in a position to assert that no cycles exist. But I feel like this puts the burden of unsafety into the wrong place. I think you should be able to construct reference-counted boxes, and truly generic abstractions built on reference-counted boxes, without writing unsafe code.

My allergic reaction to requiring unsafe to create Rc boxes stems from a very practical concern: if we push the boundaries of unsafety too far out, such that it is common to use an unsafe keyword here and there, we vastly weaken the safety guarantees of Rust in practice. I’d rather that we increase the power of safe APIs at the cost of more restrictions on unsafe code. Obviously, there is a tradeoff in the other direction, because if the requirements on unsafe code become too subtle, people are bound to make mistakes there too, but my feeling is that requiring people to consider leaks doesn’t cross that line yet.

RFC 1094 – avoiding reference leaks

RFC 1094 takes a different tack. Rather than dividing types arbitrarily into leak-safe and not-leak-safe, it uses an existing distinction, and says that any type which is associated with a scope cannot leak.

The goal of RFC 1094 is to enable a particular “mental model” about what lifetimes mean. Specifically, the RFC aims to ensure that if a value is limited to a particular scope 'a, then the value will be destroyed before the program exits the scope 'a. This is very similar to what Rust currently guarantees, but stronger: in current Rust, there is no guarantee that your value will be destroyed, there is only a guarantee that it will not be accessed outside that scope. Concretely, if you leak an Rc into the heap today, that Rc may contain borrowed references, and those references could be invalid – but it doesn’t matter, because Rust guarantees that you could never use them.

In order to guarantee that borrowed data is never leaked, RFC 1094 requires that to construct an Rc (or Arc), the condition T: 'static must hold. In other words, the payload of a reference-counted box cannot contain borrowed data. This by itself is very limiting: lots of code, including the rust compiler, puts borrowed pointers into reference-counted structures. To help with this, the RFC includes a second type of reference-counted box, ScopedRc. To use a ScopedRc, you must first create a reference-counting scope s. You can then create new ScopedRc instances associated with s. These ScopedRc instances carry their own reference count, and so they will be freed normally as soon as that count drops to zero. But if they should get placed into a cycle, then when the scope s is dropped, it will go along and “cycle collect”, meaning that it runs the destructor for any ScopedRc instances that haven’t already been freed. (Interestingly, this is very similar to the closure-based scoped thread API, but instead of joining threads, exiting the scope reaps cycles.)
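The closure-based scoped thread API alluded to here is worth sketching, because it shows the same shape: a scope guarantees that everything spawned inside it is cleaned up before the closing brace, which is what makes it safe to hand borrowed data to the threads. In current Rust this exists as std::thread::scope (the 2015-era API differed in its details):

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    let mut total = 0;
    // Every thread spawned inside the closure is joined before `scope`
    // returns, so borrowing `data` from the enclosing stack frame is safe.
    thread::scope(|s| {
        let handles: Vec<_> = data
            .iter()
            .map(|&n| s.spawn(move || n * 10))
            .collect();
        for h in handles {
            total += h.join().unwrap();
        }
    });
    assert_eq!(total, 60);
    println!("total = {}", total);
}
```

ScopedRc would apply the same trick to reference counts: exiting the scope reaps cycles the way exiting thread::scope joins threads.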

I originally found this RFC appealing. It felt to me that it avoided adding a new distinction (Leak) to the type system and instead piggybacked on an existing one (borrowed vs non-borrowed). This seems to help with some of my concerns about “ripple effects” on users.

However, even though it piggybacks on an existing distinction (borrowed vs static), the RFC now gives that distinction additional semantics it didn’t have before. Today, those two categories can be considered on a single continuum: for all types, there is some bounding scope (which may be 'static), and the compiler ensures that all accesses to that data occur within that scope. Under RFC 1094, there is a discontinuity. Data which is bounded by 'static is different, because it may leak.

This discontinuity is precisely why we have to split the type Rc into two types, Rc and ScopedRc. In fact, the RFC doesn’t really mention Arc much, but presumably there will have to be both ScopedRc and ScopedArc types. So now where we had two types, we have four, to account for this new axis:

|-----------------++--------+-----------|
|                 || Static | Borrowed  |
|-----------------++--------+-----------|
| Thread-safe     || Arc    | ScopedArc |
| Not-thread-safe || Rc     | ScopedRc  |
|-----------------++--------+-----------|

And, in fact, the distinction doesn’t end here. There are abstractions, such as channels, that are built on Arc. So this means that this same categorization will bubble up through those abstractions, and we will (presumably) wind up with Channel and ChannelScoped (otherwise, channels cannot be used to send borrowed data to scoped threads, which is a severe limitation).

Conclusion

This concludes my deep dive into the question of resource leaks. It seems to me that the tradeoffs here are not simple. The status quo, where resource leaks are permitted, helps to ensure composability by allowing Rc and Arc to be used uniformly on all types. I think this is very important as these types are vital building blocks.

On a historical note, I am particularly sensitive to concerns of composability. Early versions of Rust, and in particular the borrow checker before we adopted the current semantics, were rife with composability problems. This made writing code very annoying – you were frequently refactoring APIs in small ways to account for this.

However, this composability does come at the cost of a useful RAII pattern. Without leaks, you’d be able to use RAII to build references that reliably execute code when they are dropped, which in turn allows RAII-like techniques to be used more uniformly across all safe APIs.

This is certainly a subtle issue, and one where reasonable folk can disagree. In the process of drafting (and redrafting…) this post, my own opinion has shifted back and forth as well. But ultimately I have landed where I started: the danger and pain of bifurcating the space of types far outweighs the loss of this particular RAII idiom.

Here are the two most salient points to me:

  1. The vast majority of RAII-based APIs are either safe or can be made safe with small changes. The remainder can be expressed with closures.
    • With regard to RAII, the scoped threads API represents something of a “worst case” scenario, since the guard object is completely divorced from the data that the thread will access.
    • In cases like this, where there is often no need to retain the guard, but dropping it has important side-effects, RAII can be a footgun and hence is arguably a poor fit anyhow.
  2. The cost of introducing a new fundamental distinction (“leak-safe” vs “non-leak-safe”) into our type system is very high and will be felt up and down the stack. This cannot be completely hidden or abstracted away.
    • This is similar to thread safety, but leak-safety is far less fundamental.

Bottom line: the cure is worse than the disease.

http://smallcultfollowing.com/babysteps/blog/2015/04/29/on-reference-counting-and-leaks/


Doug Belshaw: Guiding Students as They Explore, Build, and Connect Online

Среда, 29 Апреля 2015 г. 21:38 + в цитатник

Ian O'Byrne, Greg McVerry and I have just had an article published in the Journal of Adolescent & Adult Literacy (JAAL). Entitled Guiding Students as They Explore, Build, and Connect Online, it’s an attempt to situate and explain the importance of the Web Literacy Map work.


I’d have preferred it be published in an open access journal, but there was a window of opportunity that we decided to take advantage of. Happily, you can access the pre-publication version via Ian’s blog here.

Related:

Cite this article:

McVerry, J.G., Belshaw, D. & Ian O'Byrne, W. (2015). Guiding Students as They Explore, Build, and Connect Online. Journal of Adolescent & Adult Literacy, 58(8), 632–635. doi: 10.1002/jaal.411

http://literaci.es/guiding-students


Air Mozilla: Product Coordination Meeting

Среда, 29 Апреля 2015 г. 21:00 + в цитатник

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

https://air.mozilla.org:443/product-coordination-meeting-20150429/


Eitan Isaacson: eSpeak Web Speech API Addon

Среда, 29 Апреля 2015 г. 20:27 + в цитатник

Now that eSpeak runs pretty well in JS, it is time for a Web Speech API extension!

What is the Web Speech API? It gives any website access to speech synthesis (and recognition) functionality; Chrome and Safari already have this built in. This extension adds speech synthesis support in Firefox, and adds eSpeak voices.

For the record, we have had speech synthesis support in Gecko for about 2 years. It was introduced for accessibility needs in Firefox OS; now it is time to make sure it is supported on desktop as well.

Why an extension instead of built-in support? A few reasons:

  1. An addon will provide speech synthesis to Firefox now as we implement built-in platform-specific solutions for future releases.
  2. An addon will allow us to surface current bugs both in our Speech API implementation, and in the spec.
  3. We designed our speech synthesis implementation to be extensible with addons; this is a good proof of concept.
  4. People are passionate about eSpeak. Some people love it, some people don’t.

So now I will shut up, and let eSpeak do the talking:


http://blog.monotonous.org/2015/04/29/espeak-web-speech-api-addon/


Mozilla Release Management Team: Firefox 38 beta6 to beta8

Среда, 29 Апреля 2015 г. 11:53 + в цитатник

This beta was harder than usual to release. For the 38.0.5 dev cycle, we decided to merge mozilla-beta into mozilla-release. As this is unusual, we encountered several issues:

  • Some l10n updates had to be run (beta2release_l10n.sh). To help with the diagnostics, we reported bug 1158126.
  • The automation tool had an expectation that, coming from mozilla-release, the version would be a release (bug 1158124). We hardcoded a temporary fix.

Because it took some time to fix these issues, we weren't able to publish beta 7 last week. We decided to skip releasing beta 7 and to start the beta 8 build on Sunday evening. Unfortunately, this didn't go smoothly either:

  • During the merge, we updated some of the configurations. This caused beta 8 build 1 to be built using the release branding. This change was backed out. See bug 1158760 for more information.
  • Last but not least, because of the previous issue, we had to do a second build of 38 beta 8. This caused some CDN issues, and it took a while to get them resolved. We also reported a bug to simplify this in the future.

Besides that, these two betas are regular. We disabled reading list and reader view (reader view is going to ship in 38.0.5). We took some fixes for EME and MSE.

Finally, we took some stability fixes.

  • 56 changesets
  • 131 files changed
  • 1488 insertions
  • 1911 deletions

Extension   Occurrences
cpp         36
js          32
h           12
jsm         8
java        7
ini         7
css         5
list        3
in          3
html        3
jsx         2
idl         2
build       2
xml         1
txt         1
sh          1
mk          1
hgtags      1
gyp         1
dep         1
cc          1

Module      Occurrences
browser     35
dom         16
toolkit     14
media       12
mobile      10
js          10
security    7
gfx         6
layout      5
widget      3
testing     3
docshell    3
services    2
netwerk     1
mozglue     1
modules     1

List of changesets:

David Keeler: Bug 1150114 - Allow PrintableString to match UTF8String in name constraints checking. r=briansmith, a=sledru - a00d0de3202f
Justin Dolske: Bug 1155191 - Please disable readling list and reader view for 38. r=markh, a=me - 69173cc17556
Kai Engert: Bug 1156428 - Upgrade Firefox 38 to use NSS 3.18.1, a=dveditz - 8f9c08f19f6a
Patrick Brosset: Bug 1155172 - Intermittent browser_webconsole_notifications.js. r=past, a=test-only - 52322e98f739
Matt Woodrow: Bug 1154536 - Disable 4k H264 video for vista since it performs poorly. r=ajones, a=sledru - 650ed1bb5a04
Philipp Kewisch: Bug 1153192 - Cannot pass extra arguments to l10n-repack.py. r=gps, a=lmandel - d1e5b60cd47c
Chris Pearce: Bug 1156131 - Use correct DLL on WinVista, 7, and 8 for WMF decoding in gmp-clearkey. r=edwin, a=sledru - e7210d2ce8a9
Chris Pearce: Bug 1156131 - Expand list of WMF DLLs that are whitelisted for use by EME plugins. r=bobowen, a=sledru - 5712fefbace8
Mark Hammond: Bug 1152193 - Ensure sync/readinglist log directory exists. r=rnewman, a=sledru - fc98815acf5f
Ed Lee: Bug 1156921 - Backout Suggested Tiles (Bug 1120311) from 38.0 [a=sylvestre] - d7ca3b75c842
Ryan VanderMeulen: Bug 1123563 - Skip test-animated-image-layers.html and test-animated-image-layers-background.html on Android and Linux. a=test-only - 1cd478c3e0b5
Hannes Verschore: Bug 1140890 - Make sure the first argument cannot bail in between negative zero removal and creating result in substraction. r=nbp, a=sledru - d55fdde73ac8
Valentin Gosu: Bug 1145812 - Fix assertion with dom.url.encode_decode_hash pref set to true. r=mcmanus, a=sledru - 5f0e381a7afd
Hannes Verschore: Bug 1143878 - IonMonkey: Test conversion of MToInt32 for testing congruence. r=jandem, a=sledru - 0b3c5b65610e
Valentin Gosu: Bug 1149913 - Disable Bug 1093611. Set pref dom.url.encode_decode_hash to true. r=honzab, a=sledru - a9be9167d92b
Chris Pearce: Bug 1155432 - Don't flush WMF PDM task queues. r=jya, a=sledru - 0920ace0d8b0
Julian Seward: Bug 1153173 - Uninitialised value use in AutoJSExceptionReporter::~AutoJSExceptionReporter. r=aklotz, a=sledru - 92fb098ace7a
Jean-Yves Avenard: Bug 1154683 - Fix potential size overflow. r=kentuckyfriedtakahe, a=sledru - 22f8fa3a9273
Milan Sreckovic: Bug 1133119 - ::Map should fail if the data is null, and add more null pointer checks. r=mattwoodrow, a=sledru - 90d2538212ab
Florian Quèze: Bug 1109728 - Intermittent browser_devices_get_user_media.js | popup WebRTC indicator visible - Got false, expected true. r=Gijs, a=test-only - fe8c5e74565f
Florian Quèze: Bug 1126107 - Intermittent browser_devices_get_user_media.js | WebRTC indicator hidden - Got true, expected false. r=Gijs, a=test-only - 8d4a0b33d32e
Jim Mathies: Bug 1100501 - Add StatisticsRecorder initialization to xpcshell. r=georg, a=sledru - 71d1d59db847
Jim Mathies: Bug 1100501 - Avoid a late shutdown of chromium's StatisticsRecorder. r=georg, a=sledru - 8661ed4cbdb9
Mark Banner: Bug 1153630 - Allow buttons in the Loop panel to be bigger if required as L10n needs. r=dmose, a=sledru - a6fe316e7571
Milan Sreckovic: Bug 1154003 - More protection for failed surface drawable creation. r=bas, a=sledru - 474ffd404414
Valentin Gosu: Bug 1139831 - End timestamps are before start timestamps. r=baku, a=sledru - 9fe28719e4fd
Mats Palmgren: Bug 1152354 - Re-introduce the incremental reflow hack in nsSimplePageSequenceFrame for now, since the regressions are worse than the original problem (Bug 1108104). r=roc, a=sledru - 92a269ca564d
Garvan Keeley: Bug 1155237 - Part 1: Remove contextless access to NetworkUtils, causes NPE. r=rnewman, a=sledru - 1ec2ee773b51
Garvan Keeley: Bug 1155237 - Part 2: Make upload service non-sticky. r=rnewman, a=sledru - 645fc5aa6a49
Mark Banner: Bug 1145541. r=mikedeboer, a=sledru - db41e8e267ed
Ryan VanderMeulen: Bug 1108104 - Fix rebase bustage. a=bustage - df5d106c2607
Ryan VanderMeulen: Bug 1152354 - Remove no longer needed assertion expectation. a=orange - 50550eca1fa2
JW Wang: Bug 1091155 - Don't check if 'playing' has fired for it depends on how fast decoding is which is not reliable. r=cpearce, a=test-only - 2161d1dc7e2b
Randell Jesup: Bug 1151628 - Re-enable MJPEG in libyuv (especially for getUserMedia). r=glandium, a=sledru - f6448c4cf87f
Randell Jesup: Bug 1152016 - Suppress fprintf(stderr)'s from jpeg in MJPEG decode. r=pkerr, a=sledru - 97d33db56113
Ganesh Sahukari: Bug 1009465 - Set the read-only attribute for temporary downloads on Windows. r=paolo, a=sledru - b7d8d79c1ee5
Tom Schuster: Bug 1152550 - Make sure that cross-global Iterator can not be broken. r=Waldo, a=sledru - 6b096f9b31d3
Mark Finkle: Bug 1154960 - Fennec should explicitly block the DOM SiteSpecificUserAgent.js file from packaging. r=nalexander, a=sledru - da1d9ba28360
Richard Newman: Bug 1155684 - Part 0: Disable reading list sync in confvars.sh. r=nalexander, a=sledru - 18c8180670c7
Richard Newman: Bug 1155684 - Part 1-3: Remove reading list sync integration. r=nalexander, a=sledru - 309ed42a5999
Richard Marti: Bug 1156913 - Use highlighttext color also for :active menus. r=Gijs, a=sledru - 98086516ce8f
Edwin Flores: Bug 1156560 - Prefer old CDMs on update if they are in use. r=cpearce, ba=sledru - 7c66212e4c09
Ryan VanderMeulen: Backed out changeset 6b096f9b31d3 (Bug 1152550) for bustage. - d20a4e36e508
Ryan VanderMeulen: Bug 1139591 - Skip browser_timeline_overview-initial-selection-01.js on OSX debug. a=test-only - c0624fb0b902
Ganesh Sahukari: Bug 1022816 - OS.File will now be able to change the readOnly, hidden, and system file attributes on Windows. r=paolo, a=sledru - 8a2c933394da
Blake Kaplan: Bug 1156939 - Don't stash a reference to a CPOW and then spin the event loop. r=mconley, a=test-only - 0efa961d5162
Jonas Jenwald: Bug 1112947 - Replace a setTimeout with an EventListener to fix an intermittent failure in browser/extensions/pdfjs/test/browser_pdfjs_navigation.js. r=mossop, a=test-only - b29a45098630
Jared Wein: Bug 1153403 - Don't allow dialogs to resize if they didn't resize in windowed preferences. r=Gijs, a=sledru - e46c9612492a
Matt Woodrow: Bug 1144257 - Blacklist DXVA for one NVIDIA driver that was causing crashes. r=ajones, a=sledru - 78c6b3ce2ce2
Tom Schuster: Bug 1152550 - Make sure that cross-global Iterator can not be broken. r=Waldo, a=sledru - 2025aa8c5b1b
travis: Bug 1154803 - Put our sigaction diversion in __sigaction if it exists. r=glandium, a=sledru - fd5c74651fb2
Neil Rashbrook: Bug 968334 - Allow disabling content retargeting on child docshells only. r=smaug, ba=sledru - 38ff61772a2e
Nicolas B. Pierron: Bug 1149119 - Use Atoms in the template object hold by Baseline. r=jandem, a=abillings - 7298f6e3943e
Nicolas B. Pierron: Bug 1149119 - Do not inline bound functions with non-atomized arguments. r=jandem, a=abillings - 0e69c76cbbe2
Ryan VanderMeulen: Backed out changeset b29a45098630 (Bug 1112947) for test bustage. - 8fc6195511e5
Rail Aliiev: Bug 1158760 - Wrong branding on the 38 Beta 8, backout d27c9211ebb3. IGNORE BROKEN CHANGESETS CLOSED TREE a=release ba=release - 9d105ed6f35a

http://release.mozilla.org/statistics/38/2015/04/29/fx-38-b6-to-b8.html


Mike Taylor: ReferenceError onTouchStart is not defined jquery.flexslider.js

Среда, 29 Апреля 2015 г. 08:00 + в цитатник

I was supposed to write this blog post like a year ago, but have been very busy in the last 12 months not writing this blog post. But yet here we are.

Doing some compatibility research on top Japanese sites, I ran into my old nemesis: ReferenceError: onTouchStart is not defined jquery.flexslider.js:397:12.

I first ran into this in January of 2014 in its more spooky form ReferenceError: g is not defined. Eventually I figured out it was a problem in a WooThemes jQuery plugin called FlexSlider, the real bug being the undefined behavior of function declaration hoisting in conditions (everyone just nod like that makes sense).

In JavaScript-is-his-co-pilot Juriy's words,

Another important trait of function declarations is that declaring them conditionally is non-standardized and varies across different environments. You should never rely on functions being declared conditionally and use function expressions instead.

In this case, they were conditionally declaring a function, but referencing it before said declaration in the if block, as an event handler, i.e.,

if (boop) {
  // wowowow is referenced here, before its conditional declaration below,
  // and conditionally declared functions aren't reliably hoisted
  blah.addEventListener("touchstart", wowowow);

  function wowowow() {}
}

No big deal, easy fix. I wrote a patch. Some people manually patched their sites and moved on with their lives. I tried to.

We ran into this a few more times in Bugzilla and Webcompat.com land.

TC-39 bosom buddies Brendan and Allen noted that ES6 specifies things in such a way that these sites will eventually work in all ES6/ES2015-compliant browsers. Here's the bug to track that work in SpiderMonkey.

Cool! Until then, my lonely pull request is still hanging out at https://github.com/woothemes/FlexSlider/pull/986 (16 months later). The good news is FlexSlider is open source, so you're allowed to fix their bugs by manually applying that patch on your site. Then your touch-enabled slider widget stuff will work in Mobile Firefox browsers.

https://miketaylr.com/posts/2015/04/reference-error-ontouchstart-is-not-defined-jquery-flexslider-js-how-does-seo-work.html


Christian Heilmann: The new challenges of “open”

Среда, 29 Апреля 2015 г. 00:50 + в цитатник

These are my notes for my upcoming keynote at the OSCAL conference in Tirana, Albania.

Today I want to talk about the new challenges of “open”. Open source, Creative Commons, and many other ideas of the past have become pretty much mainstream these days. It is cool to be open, it makes sense for a lot of companies to go that way. The issue is, that – as with anything fashionable and exciting – people are wont to jump on the bandwagon without playing the right tune. And this is one of the big challenges we’re facing.

Before we go into that, though, let’s recap the effects that going into the open with our work has.

Creating in the open is an empowering and frightening experience. The benefits are pretty obvious:

  • You share the load – people can help you with feedback, doing research for you, translating your work, building adapters to other environments for you.
  • You have a good chance your work will go on without you – as you shared, others can build upon your work when you moved on to other challenges; or got run over by a bus.
  • Sharing feels good – it’s a sort of altruism that doesn’t cost you any money and you see the immediate effect.
  • You become a part of something bigger – people will use your work in ways you probably never intended, and never thought of. Seeing this is incredibly exciting.

The downsides of working in the open are based on feedback and human interaction.

  • You’re under constant surveillance – you can’t hide things away when you openly share your work in progress. This can be a benefit as it means your product is higher quality when you’re under constant scrutiny. It can, however, also be stifling as you’re more worried about what people think about your work rather than what the work results in.
  • You have to allocate your time really well – feedback will come 24/7 and in many cases not in a format that is pleasing or – in some cases – even intelligible.
  • You have to pick your battles – people will come with all kind of requests and it is easy to get lost in pleasing the audience instead of finishing your product.
  • You have to prepare yourself for having to adhere to existing procedures – years of open source work resulted in many best practices and very outspoken people are quick to demand you adhere to them or stay off the lawn.

Hey cool kids, we’re doing the open thing!

One of the main issues with open is that people are not really aware of how much work it involves. It is very fashionable to release products as open source. But, in many cases, this means putting the code on GitHub and hoping for a magical audience to help you and fix your problems. This is not how open source prospers.

Open source and related ways of working do not mean you give out your work for free and leave it at that. They mean that you make it available, that you nurture it, and that you are open to giving up control for the benefit of the wisdom of the crowd. It is a two-way, three-way, many-way exchange of data and information. You give something out, but you also get a lot back, and each deserves the same attention and respect.

More and more I find companies and individuals seeing open sourcing not as a way of working, but as an advertising and hiring exercise. Products get released openly but there is no infrastructure or people in place to deal with the overhead this means. It has become a ribbon to attach to your product – “also available on GitHub”.

We’ve been through that before – the mashup web and open APIs promised us developers that we can build great, scaling and wonderful products by using the power of the web. We pick and mix our content providers with open APIs and build our interfaces on top of that data. This died really quickly and today most APIs we played with are either shut down or became pay-to-play.

Other companies see “open” as a means to keep things alive that are not supported any longer. It’s like the mythical farm the family dog goes to when the kids ask where you take him when he gets old and sick. “Don’t worry, the product doesn’t mesh with the core business of our company any longer, but it will live on as it is now open source” is the message. And it is pretty much a useless one. We need products that are supported, maintained and used. Simply giving stuff out for free doesn’t mean this will happen to that product, as it means a lot of work for the maintainers. In many cases shutting a product down is the more honest thing to do.

If you want to be open about it – do it our way

The other issue with open is that – ironically – open communities can come across as uninviting and aggressive. We are a passionate bunch, and care a lot about what we do. That can make us appear overly defensive and aggressive. Many long-standing open communities have methodologies in place to ensure quality that on first look can be daunting and off-putting.

Many companies understand the value of open, but are hesitant to open up their products because of this. The open community can come across as very demanding. And it is very easy to get an avalanche of bad feedback when you release something into the open but you fail to tick all the boxes. This is poison to anyone in a large company trying to release something closed into the open. You have to justify your work to the business people in the company. And if all you have to show is an avalanche of bad feedback and passive-aggressive “oh look, evilcorp is trying to play nice” comments, they are not going to be pleased with you.

We’re not competing with technology – we’re competing with marketing and tech propaganda

The biggest issue I see with open is that it has become a tool. Many of the closed environments that are in essence a replacement for the open web are powered by open technologies. This is what they are great for. The plumbing of the web runs on open. We’re a useful cog, and – to be fair – a lot of closed companies also support and give back to these products.

On the other hand, when you talk about a fully open product and try to bring it to end users, you are facing an uphill battle. Almost every open alternative to closed (or partly open) systems struggles or – if we are honest with ourselves – has failed. Firefox OS is not taking the world by storm and bringing connectivity to people who badly need it. The Ubuntu phone as an alternative didn’t cause a massive stir. Ello and Diaspora didn’t make a dent in the Facebooks and Twitters of this world. The Ouya game console ran into debt very quickly and now is looking to be bought out.

The reason is that we’re fooling ourselves when it comes to the current market and how it uses technology.

Longevity is dead

We love technology. We love the web. We love how it made us who we are and we celebrate the fights we fought to keep it open. We fight for freedom of choice, we fight for data retention and ownership of data and we worry where our data goes, if it will be available in the future or what happens with it.

But we are not our audience. Our audience are the digital natives. The people who see a computer, a smartphone and the internet as a given. The people who don’t even know what it means to be offline, and who watch streaming TV shows in bulk without a sense of dread at how much this costs or if it will work. If it stops working, who cares? Let’s do something else. If our phones or computers are broken, well let’s replace them. Or go to the shop and get them repaired for us. If the phone is too slow for the next version of its operating system, fine. Obviously we need to buy a better one.

The internet and technology have become a commodity, like running water and electricity. Of course, this is not the case all over the world, and in many cases also not when you’re traveling outside the country of your contracts. But, to those who never experienced this, it is nothing to worry about. Bit by bit, the web has become the new TV. Something people consume without knowing how it works or really taking part in it.

In England, where I live, it is almost impossible to get an internet connection without some digital TV deal as part of the package. The internet access is the thing we use to consume content provided to us by the same people who sold us CDs, DVDs, and BluRays. And those who consume over the internet also fill it up with content taken from this source material. Real creativity on the web, writing and publishing is on the way out. When something is always available, you stop caring for it. It is simply a given.

Closed by design, consumable by nature

This really scares me. It means that the people who always fought the open web and the free nature of software have won. Not by better solutions or by more choice. But by offering convenience. We’ve allowed companies with better marketing than us to take over and tell people that by staying in their world, everything is easy and works magically. People trade freedom of choice and ownership of their information for convenience. And that is hard to beat. When everything works, why put effort in?

The dawn of this was the format of the app. It was a genius idea to make software a consumable, perishable product. We moved away from desktop apps to web based apps a long time ago. Email, Calendaring, even document handling has gone that way and Google showed how that can be done.

With the smartphone revolution and the lack of support for open technologies in the leading platform the app was re-born: a bespoke piece of software written for a single platform in a closed format that needs OS-specific skills and tools to create. For end users, it is an icon. It works well, it looks amazing and it ties in perfectly with the OS. Which is no surprise, as it is written exclusively for it.

Consumable, perishable products are easier to sell. That’s why the market latched on to this quickly and defined it as the new, modern way to create software.

Even worse, instead of pointing out the lack of support for interoperable and standardised technology in the operating systems of smart devices, the tech press blamed said technologies for not working on them as well as the native counterparts do.

Develop where the people are

This constant reinforcement of closed as a good business and open as not ready and hard to do has become a thing in the development world. Most products these days are not created for the web, independent of OS or platform. The first port of call is iOS, and once it's been a success there, maybe Android. But only after complaining that the fragmentation is making it impossible to work. Fragmentation that has always been a given in the open world.

A fool’s errand

It seems open has lost. It has, to a degree. But there are already signs that what’s happening is not going to last. People are getting tired of apps and of being constantly reminded by them to do things. People are getting bored of putting content into systems that don’t keep them excited, and they jump from product to product almost monthly. The big move of almost every platform towards light-weight messaging systems instead of life streams shows a desperate attempt to keep people interested.

The big market people aim for is teenagers. They have a lot of time, they create a lot of interactions and they have their parents’ money to spend if they nag long enough.

The fallacy here is that many companies think that the teenagers of now will be the users of their products in the future. When I remember what I was like as a teenager, there is a small chance that this will happen.

We’re in a bubble and it is pretty much ready to burst. When the dust settles and people start wondering how anyone could be foolish enough to spend billions of dollars on companies that promise profits and pivot every few months when those profits don’t materialise, we’ll still be there. Much like we were during the first dotcom boom.

We’re here to help!

And this is what I want to close with. It looks dire for the open web and for open technologies right now. Yes, a lot is happening, but a lot is lip-service, and many of the “open solutions” are Trojan horses trying to lock people into a certain service infrastructure.

And this is where I need you. The open source and open in general enthusiasts. Our job now is to show that what we do works. That what we do matters. And that what we do will not only deliver now, but also in the future.

We do this by being open. By helping people to move from closed to open. Let’s be a guiding mentor, let’s push gently instead of getting up in arms when something is not 100% open. Let’s show that open means that you build for the users and the creators of now and of tomorrow – regardless of what is fashionable or shiny.

We have to move with the evolution of computing much like anybody else. And we do it by merging with the closed, not by trying to replace it. Replacing it has failed before and will fail again in the future. We’re not here to create consumables. We’re here to make sure they are made from great, sustainable and healthy parts.

http://christianheilmann.com/2015/04/28/the-new-challenges-of-open/


Air Mozilla: The Well Tempered API

Tuesday, 28 April 2015, 23:00

The Well Tempered API – Centuries ago, a revolution in music enabled compositions to still be playable hundreds of years later. How long will your software last? This talk, originally...

https://air.mozilla.org/the-well-tempered-api-2/


Kim Moir: Releng 2015 program now available

Tuesday, 28 April 2015, 22:54
Releng 2015 will take place in concert with ICSE in Florence, Italy on May 19, 2015. The program is now available. Register here!

via romana in firenze by ©pinomoscato, Creative Commons by-nc-sa 2.0



http://relengofthenerds.blogspot.com/2015/04/releng-2015-program-now-available.html


Kim Moir: Less testing, same great Firefox taste!

Tuesday, 28 April 2015, 21:47

Running a large continuous integration farm forces you to deal with many dynamic inputs coupled with capacity constraints. The number of pushes increases. People add more tests. We build and test on new platforms. If the number of machines available remains static, the computing time associated with a single push will increase. You can scale the platforms that we build and test in the cloud (for us, Linux and Android on emulators), but this costs more money. Adding hardware for other platforms, such as Mac and Windows in data centres, is also costly and time consuming.

Do we really need to run every test on every commit? If not, which tests should be run? How often do they need to be run in order to catch regressions in a timely manner (i.e. so that we can bisect to where the regression occurred)?


Several months ago, jmaher and vaibhav1994 wrote code to analyze test data and determine the minimum number of tests required to identify regressions. They named their software SETA (search for extraneous test automation). They used historical data to determine the minimum set of tests that needed to be run to catch historical regressions. Previously, we coalesced tests on a number of platforms to mitigate too many jobs being queued for too few machines. However, this was not the best way to proceed, because it reduced the number of times we ran all tests, not just the less useful ones. SETA allows us to run, on every commit, a subset of tests that have historically caught regressions. We still run all the test suites, but at a specified interval.

SETI – The Search for Extraterrestrial Intelligence by ©encouragement, Creative Commons by-nc-sa 2.0
In the last few weeks, I've implemented SETA scheduling in our buildbot configs to use the data from the analysis that Vaibhav and Joel implemented. Currently, it's implemented on the mozilla-inbound and fx-team branches, which in aggregate represent around 19.6% (March 2015 data) of total pushes to the trees. The platforms configured to run a reduced set of tests for both opt and debug are
  • MacOSX (10.6, 10.10)
  • Windows (XP, 7, 8)
  • Ubuntu 12.04 for linux32, linux64 and ASAN x64
  • Android 2.3 armv7 API 9

As we gather more SETA data for newer platforms, such as Android 4.3, we can implement SETA scheduling for them as well and reduce our test load. We continue to run the full suite of tests on all branches other than mozilla-inbound and fx-team, such as mozilla-central, try, and the beta and release branches. If we did miss a regression by reducing the tests, it would appear on other branches, such as mozilla-central. We will continue to update our configs to incorporate SETA data as it changes.

How does SETA scheduling work?
We specify the tests that we would like to run on a reduced schedule in our buildbot configs. For instance, this entry specifies that we would like to run these debug tests on every 10th commit, or if we reach a timeout of 5400 seconds between runs.

http://hg.mozilla.org/build/buildbot-configs/file/2d9e77a87dfa/mozilla-tests/config_seta.py#l692
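As a rough illustration of what such an entry expresses (the dictionary shape and key names below are illustrative, not the actual buildbot-configs schema; see the linked config_seta.py for the real thing):

```python
# Hypothetical sketch of a SETA-style config entry: run the listed debug
# suites only on every 10th push, or once 5400 seconds have passed since
# the last run, whichever comes first.
SETA_CONFIG = {
    "mozilla-inbound": {
        "macosx64": {
            "debug": {
                "skip_count": 10,       # run once per 10 commits
                "skip_timeout": 5400,   # seconds before forcing a run
                "tests": ["mochitest-3", "reftest", "xpcshell"],
            },
        },
    },
}
```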


Previously, catlee had implemented a scheduler in buildbot, EveryNthScheduler, that allowed us to coalesce jobs on a certain branch and platform. However, as it was originally implemented, it didn't allow us to specify individual tests to skip, such as mochitest-3 debug on MacOSX 10.10 on mozilla-inbound. It would only allow us to skip all the debug or opt tests for a certain platform and branch.

I modified misc.py to parse the configs and create a dictionary for each test specifying the interval at which the test should be skipped and the timeout interval. If a test has these parameters specified, it is scheduled using the EveryNthScheduler instead of the default scheduler.

http://hg.mozilla.org/build/buildbotcustom/file/728dc76b5ad0/misc.py#l2727
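The scheduling decision can be sketched roughly like this. This is a simplified stand-in for the real EveryNthScheduler behaviour, and the function name and signature are mine, not buildbot's:

```python
import time

def should_run_now(push_count, last_run_time, skip_count=10,
                   skip_timeout=5400, now=None):
    """Run a reduced-schedule test suite on every Nth push, or when more
    than skip_timeout seconds have elapsed since its last run."""
    if now is None:
        now = time.time()
    if push_count % skip_count == 0:
        return True  # every Nth commit, run regardless of elapsed time
    # otherwise, only run if the timeout since the last run has expired
    return (now - last_run_time) >= skip_timeout
```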
There are still some quirks to work out, but I think it is working well so far. I'll have some graphs in a future post on how this reduced our test load.

Further reading
Joel Maher: SETA – Search for Extraneous Test Automation
A-Team Contributions in 2015 
Use SETA data to disable unneeded tests (To do - update configs dynamically)
Enable SETA for Android in buildbot scheduling, configs

http://relengofthenerds.blogspot.com/2015/04/less-testing-same-great-firefox-taste.html


Tanner Filip: Do you host a wiki for your community? Community Ops wants to hear from you!

Tuesday, 28 April 2015, 21:40

I'm cross-posting this to my blog; I'm hoping to get as much feedback as possible.

If you are hosting a wiki for your community rather than using wiki.mozilla.org, Community Ops has a few questions for you. If you would be so kind to reply to my post on Discourse, answering the questions I have below, we'd be extremely appreciative.

  1. How did you decide that you need a wiki?
  2. Why did you decide to host your own, rather than using the Mozilla Wiki?
  3. How did you choose your Wiki software (MediaWiki, TikiWiki, etc.)?
  4. What could make your wiki better? For example, would you like any extensions, or technical support?

Thank you in advance for taking the time to answer these questions!

http://tannerfilip.me/do-you-host-a-wiki-for-your-community-community-ops-wants-to-hear-from-you/


Gervase Markham: HSBC: Bad Security

Tuesday, 28 April 2015, 19:18

I would like to use a stronger word than “bad” in the title, but decency forbids.

HSBC has, or used to have, a compulsory 2-factor system for logging in to their online banking. It used a small widget called a Secure Key. This is good. Now, they have rolled out an Android/iOS/Blackberry app alternative. This is also good, on balance.

However, at the same time, they have instituted a system where you can log on and see all your banking information and even take some actions without the key, just using a password. This is bad. Can I opt out, and say “no, I’d like to always use the key, please?” No, it seems I can’t. Compulsory lowered security for me. Even if I don’t use the password, that login mechanism will always be there.

OK, so I go to set a password. Never mind, I think, I’ll pick something long and complicated. But no; the guidance says:

Your password is not case sensitive and must be between 8 and 30 characters. It must include letters and numbers.

So the initial passphrase I picked was both too long, and didn’t include a number. However, the only error it gives is “This data is invalid”. I tried several other variants of my thought-of passphrase, but couldn’t get it to accept it. Painful reverse-engineering showed that the space character is also forbidden. Thank you so much, HSBC.
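For illustration, the rules as reverse-engineered seem to amount to something like the following. This is a guess at an undocumented policy, not HSBC's actual validation code:

```python
import re

# Guessed rule, pieced together from the quoted guidance plus trial and
# error: 8-30 characters, at least one letter and one digit, no spaces
# (case is apparently ignored server-side, so case adds no entropy).
PASSWORD_RE = re.compile(r"^(?=.*[A-Za-z])(?=.*\d)[^\s]{8,30}$")

def hsbc_would_accept(password):
    """True if the password appears to satisfy the reconstructed rules."""
    return PASSWORD_RE.match(password) is not None
```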

I finally find a password it’ll accept and click “Continue”. But, no. “Your session is invalidated – please log in again.” It’s taken so long to find a password it’ll accept that it has timed me out.

http://feedproxy.google.com/~r/HackingForChrist/~3/-XttXXAxcaw/


Air Mozilla: Martes mozilleros

Tuesday, 28 April 2015, 18:00

Martes mozilleros – Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

https://air.mozilla.org/martes-mozilleros-20150428/


Adam Lofting: Optimizing for Growth

Tuesday, 28 April 2015, 16:27

In my last post I spent some time talking about why we care about measuring retention rates, and tried to make the case that retention rate works as a meaningful measure of quality.

In this post I want to look at how a few key metrics for a product, business or service stack up when you combine them. This is an exercise for people who haven’t spent time thinking about these numbers before.

  • Traffic
  • Conversion
  • Retention
  • Referrals

If you’re used to thinking about product metrics, this won’t be new to you.

I built a simple tool to support this exercise. It’s not perfect, but in the spirit of ‘perfect is the enemy of good’ I’ll share it in its current state.

>> Follow this link, and play with the numbers.

Optimizing for growth isn’t just ‘pouring’ bigger numbers into the top of the ‘funnel’. You need to get the right mix of results across all of these variables. And if your results for any of these measurable things are too low, your product will have a ‘ceiling’ on how many active users you can have at any one time.
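A toy model shows why such a ceiling exists. This is my own simplification, not the linked tool: suppose each period brings in traffic × conversion new users plus referrals proportional to the current active base, and only a fraction (retention) of actives stick around. The active count then converges to a fixed point:

```python
def steady_state_active_users(traffic, conversion, retention, referral_rate):
    """Fixed point of: active' = retention * active
                                + referral_rate * active
                                + traffic * conversion.
    If retention + referral_rate >= 1, the base compounds forever and
    there is no ceiling."""
    churn = 1.0 - retention - referral_rate
    if churn <= 0:
        return float("inf")  # compounding growth: no ceiling
    return traffic * conversion / churn
```

With 10,000 visitors, 5% conversion, 80% retention and a 10% referral rate, the model tops out around 5,000 concurrent active users; nudging retention or referrals up moves that ceiling disproportionately, which is the point of the exercise.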

However, if you succeed in optimizing your product or service against all four of these points you can find the kind of growth curve that the start-up world chases after every day. The referrals part in particular is important if you want to turn the ‘funnel’ into a ‘loop’.

Depending on your situation, improving each of these things involves varying degrees of difficulty. But importantly, they can all be measured, and as you make changes to the thing you are building you can see how your changes impact each of these metrics. These are things you can optimize for.

But while you can optimize for these things, that doesn’t make it easy.

It still comes down to building things of real value and quality, and helping the right people find those things. And while there are tactics to tweak performance rates against each of these goals, the tactics alone won’t matter without the product being good too.

As an example, Dropbox increased their referral rate by rewarding users with extra storage space for referring their friends. But that tactic only works if people like Dropbox enough to (a) want extra storage space and (b) feel happy recommending the product to their friends.

In summary:

  • Build things of quality
  • Optimize them against these measurable goals

http://feedproxy.google.com/~r/adamlofting/blog/~3/WmxkDnbCEfg/


