Created: 19.06.2007

Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/

You can add any RSS source (including a LiveJournal blog) to your friends feed on the syndication page.

Source: http://planet.mozilla.org/.
This diary is generated from the open RSS feed at http://planet.mozilla.org/rss20.xml and is updated as the source feed is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, use the contact information page.


Michael Verdi: Oberbaum Bridge in VR

Wednesday, February 24, 2016, 18:03

I’ve been testing out the Ricoh Theta S camera and I happen to be in Berlin for work so here’s a photo from the Oberbaum Bridge yesterday (click and drag to look around).

You can also put it in VR mode (thanks to A-Frame).

And here’s a video from the same spot.

YouTube Link. Check it out in the YouTube app on iOS or with the YouTube app and a Cardboard viewer on Android.

http://x627.com/oberbaum-bridge-in-vr/


The Mozilla Blog: Continuing the Conversation About Encryption and Apple: A New Video From Mozilla

Wednesday, February 24, 2016, 17:23

In the past week, the conversation about encryption has reached fever pitch. Encryption, Apple, and the FBI are in headlines around the world. And lively discussions about security and privacy are taking place around kitchen tables, on television, and in comment sections across the Internet.

Mozilla believes the U.S. government’s demand that Apple circumvent its own security protections is a massive overreach. Requiring Apple to do this would set a dangerous precedent that threatens consumer security going forward. But this discussion is an opportunity to broaden public understanding of encryption. When people understand the role encryption plays in their everyday lives, they can stand up for it when threats surface, and this key issue for the overall health of the Internet becomes mainstream.

Earlier this month — just days before the Apple story broke — Mozilla launched a public education campaign about encryption. We’re excited to continue this campaign alongside the new, robust conversations that have emerged.

Today, Mozilla is releasing the next installment in the campaign: a short film that animates encryption as a lovable character and unpacks how she works and why she’s so important.

We hope you’ll share this video with your friends and family — and then start a conversation about the issues that have come to the fore over the past week. Building grassroots support for a safe and open Internet is essential. It’s a tried and true tactic: kitchen table conversations and support from everyday Internet users helped uphold net neutrality. This is the power of the open Internet movement at work. Now, it’s time to do it again — let’s spread the word about encryption and help keep it safe.

https://blog.mozilla.org/blog/2016/02/24/continuing-the-conversation-about-encryption-and-apple-a-new-video-from-mozilla/


Jet Villegas: Protected: An update on Mozilla’s Shumway project

Wednesday, February 24, 2016, 10:42

This content is password protected. To view it please enter your password below:

http://junglecode.net/an-update-on-mozillas-shumway-project/


Robert O'Callahan: "These Bugs Are Impossible To Fix Without rr"

Wednesday, February 24, 2016, 04:41

Jan de Mooij and the Mozilla JS team just used rr to figure out a GC-related top-crash in Firefox Nightly. The IRC discussion starts here.

I think rr is great for most kinds of bugs, not just the really hard ones. But obviously hard bugs are where there's the most win to be had.

http://robert.ocallahan.org/2016/02/these-bugs-are-impossible-to-fix.html


Emma Irwin: Flipping the open source contribution model

Tuesday, February 23, 2016, 19:16

The Flipped Contribution model is one that removes the project as the center of participation design and instead focuses on developing a strong, skill-set-specific, contributor-led community serving multiple projects.

They’re building the opportunity for projects to get involved with them. They’re building the community they want to see in the world.

Read my full post on Opensource.com

 

Image by cordyceps, CC BY 2.0


http://tiptoes.ca/flipping-the-open-source-contribution-model/


Robert O'Callahan: Deeper Into Chaos

Tuesday, February 23, 2016, 13:20

A couple of weeks ago we introduced rr chaos mode. Since then quite a few people have tried it out, with a lot of success and some failures. Studying one bug that chaos mode could not reproduce led to some insights that have helped me make some improvements that are now on rr master. The main insight is that while some bugs need a thread to be starved for a long time, other bugs only need a thread to be starved for a short time, but the starvation must happen in a very narrow window of vulnerability. So for these latter bugs, we should introduce very short delays, but we should introduce them very frequently.

Taking a step back, let's assume that for some test we can reproduce a failure if we completely avoid scheduling a certain thread for a period of D seconds, where the start of that period must fall between S and S+T seconds since the start of the test. All these constants are unknown to rr, but we'll assume 1ms <= D <= 2s. Our goal is to come up with a randomized strategy for introducing delays that will reproduce the failure within a reasonable number of runs. Since we only need to reproduce any particular bug once, it would be best to have roughly similar probabilities for reproducing each bug given its unknown parameters (rather than have some bugs very easy to reproduce and other bugs be nearly impossible). I have no idea of the optimal approach here, but here's one that seems reasonable...

First we have to pick the right thread to treat as low priority --- without making many other threads low priority, since they might need to run while our victim thread is being starved. So we give each thread a 0.1 probability of being low priority, except for the main thread which we make 0.3, since starving the main thread is often very interesting.

Then we guess a value D' for D. We uniformly choose between 1ms, 2ms, 4ms, 8ms, ..., 1s, 2s. Out of these 12 possibilities, one is between D and 2xD.

We adopt the goal of high-priority-only intervals consuming at most 20% of running time. (If chaos mode slows down a test too much, we might trigger false positives by inducing timeouts, and avoiding false positives is extremely important.) To maximise the probability of triggering the test failure, we start high-priority-only intervals as often as possible, i.e. one lasting D' seconds starting every 5xD' seconds. The start time of the first interval is chosen uniformly at random between 0 and 4xD'.

If we guessed D' and the low-priority thread correctly, the probability of triggering the test failure is 1 if T >= 4xD', and T/(4xD') otherwise, i.e. at least T/(8xD). (Higher values of D' than optimal can also trigger failures, but at reduced probabilities, since we can schedule the intervals less often.)
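
To make the strategy concrete, here is a small Monte-Carlo sketch of the sampling scheme described above. The bug parameters D, S and T are illustrative, and the choice of low-priority thread is ignored for simplicity; this is my reading of the scheme, not rr's actual implementation.

```python
import random

def run_once(D, S, T):
    """One simulated chaos-mode run: does a high-priority-only
    interval of length >= D start inside the window [S, S+T]?"""
    # Guess D' uniformly from 1ms, 2ms, 4ms, ..., 1s, 2s (12 choices).
    d_guess = 0.001 * (2 ** random.randrange(12))
    # One D'-long interval every 5*D' seconds keeps high-priority-only
    # time at 20% of the run; the first start is uniform in [0, 4*D'].
    t = random.uniform(0, 4 * d_guess)
    while t <= S + T:
        if d_guess >= D and t >= S:
            return True   # starvation long enough, inside the window
        t += 5 * d_guess
    return False

random.seed(0)
D, S, T = 0.010, 1.0, 0.5   # illustrative: 10ms starvation needed in a 500ms window
trials = 2000
hits = sum(run_once(D, S, T) for _ in range(trials))
print("reproduction rate per run: %.2f" % (hits / trials))
```

Runs that guess a D' smaller than D can never reproduce the failure, which is why the doubling grid matters: one of the 12 guesses is guaranteed to land between D and 2xD.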

I've written some artificial testcases showing that this works. I've also been able to reproduce the motivating bug from the first paragraph. I think it's a clear improvement over the previous approach. No doubt as more bugs come up that we can't reproduce, we can make further improvements.

http://robert.ocallahan.org/2016/02/deeper-into-chaos.html


Karl Dubost: `appearance: button` in CSS and its implementations

Tuesday, February 23, 2016, 06:39

The ponpare Web site for mobile devices has a good rendering in Blink and WebKit browsers.

Ponpare screenshot

But when it comes to Firefox on Android. Ooops!

Ponpare menu screenshot

I'll pass on the white background, which is another issue, and focus exclusively on the double arrow on the button, which is very cool when you want to kill two birds at the same time, but less so for users. What is happening?

Exploring with the Firefox Devtools, we can find the piece of CSS in charge of styling the select element.

#areaNavi select {
    padding-right: 10px;
    margin-right: 0;
    -webkit-appearance: button;
    -moz-appearance: button;
    appearance: button;
    height: 32px;
    line-height: 32px;
    text-indent: 0px;
    border: 0px;
    background: url("/iphone/img/icon_sort.png") no-repeat scroll 100% 50% transparent !important;
    background-size: 7px 9px !important;
    width: 80px !important;
    color: #FFF;
    font-weight: bold;
    font-size: 11px;
}

Everything seems fine at first sight. The developers have put two vendor prefixes (-webkit- and -moz-… no -ms- ?) and the "standard" property.

Standard appearance?

OK. Let's see a bit. Let's take this very simple code:


    <select>
      <option value="0">First option</option>
    </select>

On MacOSX these are the default rendering for the different browsers :

  • Firefox Gecko (47) select rendering with the user agent default being -moz-appearance: menulist;
  • Opera Blink (36) select rendering with the user agent default being -webkit-appearance: menulist;
  • Safari WebKit (9) select rendering with the user agent default being -webkit-appearance: menulist;

Let's add a style with appearance: button and reload:

    <select style="appearance: button">
      <option value="0">First option</option>
    </select>

It doesn't change anything, because no browser has implemented it. Firefox logs "Unknown property 'appearance'. Declaration dropped." in the console. Let's check whether there is a specification. The CSS4 draft about appearance defines only two values: auto and none. So it's quite normal that the standard property has no effect.

Vendor-prefixed appearance?

We modify our simple piece of HTML code:

    <select style="-webkit-appearance: button; -moz-appearance: button">
      <option value="0">First option</option>
    </select>

We start to see interesting things happening.

  • Firefox Gecko (47) select rendering. It has more padding, rounded corners and a white background. Make it bigger with the zoom, though, and it becomes square with a grey gradient.
  • Opera Blink (36) select rendering. It is square with a gradient in the background. But make it bigger and it gets rounded corners with a thick border and no gradient.
  • Safari WebKit (9) select rendering. It has no padding, rounded corners and a white background. Make it bigger and it gets a grey gradient and a thin border, but butchered rounded corners.

But let's say that the zoom feature is really a detail… of implementations.

The designer made the select list a button… so how does the designer now say to the world: "Hey, in fact, I know, I made it a button, but what I meant was a menu item list"?

background-color for select and its magical effect

Plain buttons are boring. The very logical thing to do is to add a background-color… We modify our HTML code again:

    <select style="-webkit-appearance: button; -moz-appearance: button; background-color: green"> <!-- color value illustrative -->
      <option value="0">First option</option>
    </select>

And we get… drums!

  • Firefox Gecko (47) select rendering. What? Adding a background-color modifies the rendering of the button and makes it appear like a -moz-appearance: listitem; ?
  • Opera Blink (36) select rendering.
  • Safari WebKit (9) select rendering.

At least Blink and WebKit just take a background-color.

How do I know it is a drop-down menu?

So the usual answer from the Web designer who was not satisfied with the default rendering of select and made it a button… is to add an image in the background of the button to say: "yeah yeah I know this is in fact a drop-down menu".



    <select style="-webkit-appearance: button; -moz-appearance: button; background-color: green; background-image: url(arrow.png)"> <!-- values illustrative -->
      <option value="0">First option</option>
    </select>

We finally get this:

  • Firefox Gecko (47) select rendering.
  • Opera Blink (36) select rendering.
  • Safari WebKit (9) select rendering.

Now, if your goal was to have the white arrow down in Firefox, you would need to do:

    -moz-appearance: none;

At the beginning I thought there was something crazy going on with WebKit-based browsers, but there might be a Gecko bug too. I added a comment on a webcompat issue anyway. We need to dig a bit more into this.

Otsukare!

http://www.otsukare.info/2016/02/23/appearance-button-css


Benoit Girard: Using RecordReplay to investigate intermittent oranges, bug #2 part 2

Tuesday, February 23, 2016, 05:21

As promised here’s the follow up to part 1.

Getting Lost in a Replay

In part 1 I made a mistake: I accidentally got lost in the replay and started debugging from the wrong point. I believe I might have started with the wrong event number.

The best way to make sure you’re not getting lost in a replay is to use the ‘when’ command to print the current timeline event id that you’re at. With this event id you can compare against the stdout event marks and make sure you are at the point in the replay that you expect. Because of issue #1653 it wasn’t possible to use ‘when’ without risking a crash, so I wasn’t able to see that I was lost. However, this was fixed on trunk today, and I was able to use it from time to time to make sure I was where I wanted to be.

On this note, I think rr needs a feature to make it easier to stay within certain bounds. When doing continue/reverse-continue it’s easy to jump past the reftest that you’re interested in, and suddenly you’re debugging something unrelated.

Restarting from the Display List

This time the display list was reporting that both the good and the bad frame had the correct Image Rotation. However the frame size was wrong.

I had originally assumed that the image wasn’t rotated because of the proportion it had. However on a closer look the image was in fact rotated properly but it was stretched out in the opposite direction giving the impression that the image was not rotated.

With this information it was now clear that the rotation was correct but the image frame size was not.

Building a Timeline and Exploiting Checkpoints

It turns out that there’s a few interesting events for how the nsImageFrame is sized. There’s ‘nsLayoutUtils::OrientImage’, ‘nsImageFrame::Reflow’, ‘nsImageFrame::GetIntrinsicSize’ and ‘nsImageFrame::UpdateIntrinsicSize’.

Now, since we suspected a race condition, it’s likely that these are called in a bad sequence in the ‘bad’ trace. It turns out that guess was correct.

You can build a timeline manually using ‘when’ and ‘when-ticks’ and keeping notes, but instead we exploited the checkpoint feature, which also records the ‘when’. This keeps track of the events that you care about and ‘when’ they occur, and also makes it easy to jump back to these key moments. Once we’ve created checkpoints at the interesting locations, we can call ‘info checkpoint’ and read off the call sequence by manually sorting on the ‘when’. This keeps better notes and makes it easier to jump to relevant points. We ended up doing a lot of back and forth with these checkpoints trying to understand the timeline differences. Here’s the good trace:

(rr) info checkpoint
ID    When    Where
1    646393      PresShell::RenderDocument (this=0x3c831e67f000, aRect=..., aFlags=12, aBackgroundColor=4294967295, aThebesContext=0x2f701d8a5280) at /home/bgirard/mozilla-central/tree/layout/base/nsPresShell.cpp:4497
2    646484      nsDisplayList::PaintRoot (this=0x7ffede3eaf50, aBuilder=0x7ffede3eb180, aCtx=0x7ffede3eb970, aFlags=1) at /home/bgirard/mozilla-central/tree/layout/base/nsDisplayList.cpp:1565
3    644554      nsLayoutUtils::OrientImage (aContainer=0xf3568bed200, aOrientation=...) at /home/bgirard/mozilla-central/tree/layout/base/nsLayoutUtils.cpp:6702
4    644598      0x00003c831654ff9e in nsImageFrame::Reflow (this=0x20940978ad38, aPresContext=0x3c831e512000, aMetrics=..., aReflowState=..., aStatus=@0x7ffede3f10b4: 0) at /home/bgirard/mozilla-central/tree/layout/generic/nsImageFrame.cpp:959
6    644566      nsImageFrame::GetIntrinsicSize (this=0x20940978ad38) at /home/bgirard/mozilla-central/tree/layout/generic/nsImageFrame.cpp:919
7    644554      nsImageFrame::UpdateIntrinsicSize (this=0x20940978ad38, aImage=0xf3568b17070) at /home/bgirard/mozilla-central/tree/layout/generic/nsImageFrame.cpp:289

Here we can tell that the ‘good’ call sequence is UpdateIntrinsicSize, GetIntrinsicSize, Reflow. Doing the same for the ‘bad’ replay gives us: GetIntrinsicSize, UpdateIntrinsicSize, Reflow. Note how, in the bad case, we call UpdateIntrinsicSize with the right value but never query it. That’s bad!
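
The manual read-off amounts to a stable sort on the ‘when’ column. As a small sketch, the tuples below mirror the ‘info checkpoint’ table from the good trace above, and sorting them recovers the UpdateIntrinsicSize, GetIntrinsicSize, Reflow order among the nsImageFrame events:

```python
# (checkpoint id, 'when' event id, function) -- values copied from the
# good trace's 'info checkpoint' output shown above.
checkpoints = [
    (1, 646393, "PresShell::RenderDocument"),
    (2, 646484, "nsDisplayList::PaintRoot"),
    (3, 644554, "nsLayoutUtils::OrientImage"),
    (4, 644598, "nsImageFrame::Reflow"),
    (6, 644566, "nsImageFrame::GetIntrinsicSize"),
    (7, 644554, "nsImageFrame::UpdateIntrinsicSize"),
]

# A stable sort on 'when' yields the actual event order ('when' ties
# keep their checkpoint order, which is the ambiguity noted below).
timeline = sorted(checkpoints, key=lambda c: c[1])
for cid, when, where in timeline:
    print(when, where)
```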

At some point I think RR could use some UI improvements to make it easier to build timelines (and deal with ‘when’ ties) but not today.

Testing Timeline Theories

Now at this point we had several theories about the timeline. Theories were tested mostly using reverse-continue to get a better idea of the timeline differences and a few reverse-continue with a memory watchpoint (watch -l).

The Bug at Last

Turns out that the bug is pretty complicated. There’s a race condition in which we fire the ‘load’ event on the ImageDocument. At this point we’ve actually fully decoded the image and have the correct size and rotation for it. However, in the ‘bad’ trace we’re missing something very important: we haven’t built the proper StyleVisibility for the image frame, which is missing the important ‘image-orientation’ property. Without this CSS property we ignore the image rotation when computing the size of the ImageDocument, even though it’s set correctly and on time. We also have no hooks to invalidate the size of the ImageDocument when the CSS property finally comes in, so in this race condition the bad value sticks forever. We get the correct image rotation value when we paint, and thus the image is rotated correctly, but since the ImageDocument has the wrong size it is stretched out vertically.

The Fix

Turns out that we can avoid sizing the image frame entirely and just use CSS to implement the shrinkToFit while preserving the aspect ratio. The fix was to use ‘object-fit: contain;’ instead of trying to sync up the width/height of ImageDocument, nsImageFrame and nsIDOMHTMLImageElement as the CSS styles are changing. Using CSS to implement the shrinkToFit behavior is much more robust.

The good news is that fixing this wasn’t just fixing an infrastructure race. The bad behavior can actually be seen manually by toggling ‘image-orientation’ in an image document with EXIF rotation. This intermittent was warning us about a real problem.


https://benoitgirard.wordpress.com/2016/02/23/using-recordreplay-to-investigate-intermittent-oranges-bug-2-part-2/


Robert O'Callahan: Rewrite Everything In Rust

Tuesday, February 23, 2016, 04:00

I just read Dan Kaminsky's post about the glibc DNS vulnerability and its terrifying implications. Unfortunately it's just one of many, many, many critical software vulnerabilities that have made computer security a joke.

It's no secret that we have the technology to prevent most of these bugs. We have programming languages that practically guarantee important classes of bugs don't happen. The problem is that so much of our software doesn't use these languages. Until recently, there were good excuses for that; "safe" programming languages have generally been unsuitable for systems programming because they don't give you complete control over resources, and they require complex runtime support that doesn't fit in certain contexts (e.g. kernels).

Rust is changing all that. We now have a language with desirable safety properties that offers the control you need for systems programming and does not impose a runtime. Its growing community shows that people enjoy programming in Rust. Servo shows that large, complex Rust applications can perform well.

For the good of the Internet, and in fact humanity, we need to migrate our software from C/C++ to Rust (or something better) as quickly as possible. Here are some steps we need to take:

  • Use Rust. Using Rust instead of C/C++ makes code safer. Using Rust instead of any other language grows the Rust community, making it easier for other people to use Rust.
  • Rewrite code in Rust. Starting with our most critical infrastructure, rewrite C/C++ code to use Rust instead.
  • Extend Rust's guarantees. We can extend the classes of bugs that Rust prevents. For example, we should try to make Rust release builds check for integer overflow by default.
  • Verify "unsafe" Rust code. Sometimes the Rust type system is not strong enough to let you prove to the compiler that your code is safe, so you have to mark code blocks "unsafe". With tool support, you could instead generate a mathematical proof that your code is safe, checked by the compiler. (See Rustbelt.)
  • Make Rust implementations safer. Bugs in the compiler can invalidate the language's guarantees. We know how to build compilers that generate, along with the compiled code, a proof that the code maintains the language guarantees --- a proof that can be checked by a simple, trusted proof checker. Part of this would involve proving properties of the Rust language itself.

Of course, the language doesn't have to be Rust, but Rust is the best candidate I know of at this time.

This is a huge amount of work, but consider the enormous ongoing costs --- direct and indirect --- of the vulnerabilities that Rust would have prevented.

http://robert.ocallahan.org/2016/02/rewrite-everything-in-rust.html


Nikki Bee: Programming For Servo: Experience And Knowledge Gained

Monday, February 22, 2016, 23:45

Talking About Learning

So far I’ve used my Outreachy blog to talk about how I got started on Outreachy, what the start of my work for Servo was like, and some sketches of what most of my experiences have been. Today I’m going to go more in depth, by talking about what code review has been like for me, and some programming tidbits I’ve learned during my internship.

I assume many of the programmers reading this have had plenty of experience with code review already, but the amount involved in contributing to Servo is new to me, and I’m interested in talking about my experience! As for things I’ve learned: I only cover a couple here, but that doesn’t mean it’s the total of everything I’ve gotten out of my internship. I just have a hard time finding ideas that wouldn’t take me too long to figure out how to teach in a blog post.

Mentoring

The majority of my programming experience is from projects I chose to do on my own: with my own goals, and answering to myself. It’s often quite difficult to think of tasks that I’m interested in doing that also help me practice a new technique or a concept that I’m still rusty with. Plus, I’m not often the best judge of how “good” my code style, functionality, or understanding of concepts used is (although, all of those can be very subjective!). I occasionally compare my latest projects to earlier code and reflect on what has changed in my approach and knowledge, which is great for seeing how much I’ve improved, but doesn’t always tell me what I need to learn next.

My Outreachy internship has by far been my best experience for having constant goals and other people to answer to. I’ve always described the Servo project manager as a mentor to me, and I think that’s the most fitting title. If I referred to him as my boss, it would give an inaccurate or incomplete picture of how big a direct impact he’s had on my performance.

Having a mentor means having somebody I can directly go to with any questions I have, and also somebody who knows very well what I’m working on; any time I think I’ve hit a wall, my mentor has a solution, or knows where to start finding a solution, if nobody else available can figure it out! This all applies heavily to the feedback I receive on my work.

Reviews

Servo, like most any sizable programming team I reckon, has all changes peer reviewed by at least one other person before they’re accepted. Now, I’ve of course had my code for university assignments looked over by teachers, but the feedback was always broad and impersonal, since there are so many students in every class. I’ve also had several code review sessions with facilitators at Recurse Center, but never a complete line-by-line read-through of code I’d written, since that would have taken too much time for both of us.

In contrast to either of those, everything I write for Servo receives in-depth review and feedback, something I can’t receive in a normal classroom setting or from occasional review sessions. This has been fantastic for me, both as a newbie to Rust and a growing programmer in general. If I misunderstand a feature of Rust, or lack confidence in trying out a new (for me) feature that would make for more effective code, other people (including my mentor of course) suggest to me what I can do better. If I don’t understand the suggestion, or if I think it’s not helping solve the initial problem, I can talk about that with the reviewer or other Servo contributors until a resolution is found.

Through all of this review and discussion, my mental understanding of Rust grows every day, helping me tackle more difficult problems (my final large task is going to deal with making the Fetch implementation asynchronous!) while making fewer mistakes. Likewise, my personal tool belt of ways to approach complex programming has grown considerably thanks to all the unique problems I’ve handled. I’m going to spend the rest of the post detailing two of these.

Two Programming Examples

Working on Servo has given me ample room to learn new concepts and what their applications are, as well as to further the knowledge of resources I already had. An example of each would be how I’ve learned to use a new (for me) abstract data representation, and how much better I now understand what pointers mean.

A Bad Use Case For Strings

For a long time, whenever I made a program that used settings, I’d store them in a string. This example is actually from my last large Python project, just by the way. My code would have lines like database = "local". Then, when I need to care about the database value, I’d compare it to one of the possible settings, such as "local", or "global". There are quite a few problems with this approach, although for me it wasn’t obvious how bad it was, until I knew better!

What Makes This Difficult

1) A typo is a silent error. If I ever make a typo in setting database or the string I compare it to, that typo won’t be easily caught while compiling or running the program. Setting database to "lpcal" or "Local" means it won’t match "local", through no fault of how strings work. Likewise, I could have it spelled correctly but have a case mismatch, comparing it to "LOCAL" by mistake. I could always convert both database and the value it’s being compared to to lowercase or uppercase to prevent that error, but that’s a tad annoying to do every time, and it still doesn’t catch spelling typos.

2) I have to memorize what all the possible values are. In this instance, database has to be one of just "local" or "global". What if, like a variable I use for storing commands in the same program, I have more than two possibilities? Five? A dozen? By that point, I should write down every valid value somewhere. That means I have to keep track of a non-code file, which isn’t actually important to my project. And even with such a solution, typos are still possible while copying listed values.

3) Writing tests for database is tedious. If I write tests to make sure the value is always "local" or "global", I might make a typo in the test itself, making it raise false alarms or accept an instance of the same typo. While not impossible to test properly, it would take considerable attention to detail, since I often correct minor typos in my head as I read, without meaning to. For this project, despite any tests I made, I certainly had a few frustrating-to-track-down bugs where one of the values wasn’t being set properly.
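
To make the problem concrete, here is a minimal sketch of the string-based settings pattern described above. The database variable mirrors the example; the backend strings are purely illustrative:

```python
database = "lpcal"   # typo for "local": nothing complains, anywhere

# Dispatch on the string setting, as described above.
if database == "local":
    backend = "local file database"
elif database == "global":
    backend = "shared remote database"
else:
    backend = None   # the typo silently falls through to no backend

print(backend)   # prints None, with no hint that a typo is the cause
```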

An Alternative

What’s a good solution to all of these? Enums! Enums are the perfect replacement for what I’m trying to make strings do. I’ll respond to each problem listed above with how using enums solves it. If you don’t know what an enum is, I don’t think I’d be the best teacher for it right here, but you can take a look at how the C language does it, which is where I first learned about them.

What Makes This Work

1) Enums don’t have silent errors for typos. If a typo for an enum value is made, it should either A) raise a compiler error (such as in Rust), or B) cause an exception while running the program because the value doesn’t exist (as a Python implementation might). If I made database an enum with values local and global, and declared an instance of it as locol, an error would be raised when that line is compiled or run, pinpointing it as an invalid enum value. No longer do I have to hunt down the source of a silent error!

2) I don’t need to memorize the values. As the enum is declared in the code, I can always look at that to know what the values are. Since point #1 is solved, if I don’t remember a value correctly and try to guess, it will raise an error and point me towards the right direction. Also, this is dependent on the (often language-specific) IDE or other coding tool used, but it’s possible to have all valid values for an enum shown to me as I type, since IDEs can find connections across an entire project.

3) Tests become significantly less stressful. I do not need to worry about typos in my tests, since checking an enum value means referring to that enum. I don’t have to test that an enum instance always uses any valid value, since the enum type does that for me. I can focus on ensuring that the enum instance is set to the correct value at certain points of the program!
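And here is the string example reworked with an enum, as a sketch of the fix. It uses Python's standard enum module, and the names mirror the database example above:

```python
from enum import Enum

class Database(Enum):
    LOCAL = "local"
    GLOBAL = "global"

database = Database.LOCAL

# Problem 1: a typo is no longer silent -- it raises immediately,
# at the line containing the typo.
try:
    database = Database.LPCAL
except AttributeError as err:
    print("caught the typo:", err)

# Problem 2: no memorizing needed -- the valid values are enumerable.
print([member.name for member in Database])

# Problem 3: comparisons refer to the enum itself, so tests can't
# drift out of sync with a stringly-typed spelling.
assert database is Database.LOCAL
```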

How Did I Learn This?

Like I said when I introduced enums, I first learned about them in C. However, knowing a programming concept doesn’t mean I understand why I would want to use it, or that I’ll think of applying it when tackling a new problem. I never thought about why enums would be used until I saw their usage in Servo: every option or variable with multiple values has an enum for it! This is very helpful for my implementation of the Fetch protocol, since there are many, many possible values across dozens of variables. I never have to worry about tedious testing or wasting time looking up possible values, since the Rust compiler catches my mistakes for me!

Missing Garbage

Rust, the programming language Servo is written in, has over the past few months become very impressive to me. I’m not going to try to explain it at a high level, since I know Rust can speak better for itself than a newbie can, but I’d like to focus on one aspect of it, one that stood out to me early on. Namely, how the language approaches garbage collection (basically, how a program automatically deals with freeing up memory not in use), and how that has helped me better appreciate what references are at a low level.

Some of the first sentences I read when I started learning Rust state that “Rust is a systems programming language focused on three goals: safety, speed, and concurrency. It maintains these goals without having a garbage collector…”. This made me think about my experience with the (in)famously low-level programming language C, where everything needs to be handled by the programmer, including much of the memory management. C is renowned for speed, but while I can’t comment on concurrency in C, I do know it has very little safety built in to the language.

Memory

Since Rust doesn’t force you to manually allocate or deallocate memory (which can easily lead to memory leaks and other dangerous bugs), my first idea of how Rust could manage without a garbage collector was that it must automatically handle all memory allocation. It seemed impossible to me that a language could have no automated garbage collector while also not making programmers deal with memory personally. That’s pretty much how it actually works, though! One of the biggest features of Rust, to me, is the compiler, which has a highly developed static analyzer: it examines code to see what it does and catches many errors without actually compiling and running it.

Where am I going with this? Well, the static analysis means the compiler can insert calls to memory handlers, assuming the code is valid Rust. If it’s not valid, and there’s an error with it, then the code won’t compile, and the compiler produces a log of what’s wrong. At a glance it might sound faster and simpler to handle memory by hand, and assume that the code is correct, instead of trying to convince the compiler that it’s good. But I think that 1) you have to program in languages like C for years to be that capable with handling memory, and 2) Rust’s compiler is one of the friendliest (so to speak) out there: it offers solutions for commonly identified issues, and most errors in my experience come from misunderstanding how Rust works.
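To make that concrete, here is a minimal sketch (my own toy example, not Servo code) of what memory management without a garbage collector looks like: the compiler tracks which variable owns each allocation and inserts the deallocation at the point where the owner goes out of scope.

```rust
fn sum_owned(v: Box<Vec<i32>>) -> i32 {
    // This function takes ownership of the heap allocation; when it
    // returns, the compiler frees the Box automatically — no manual
    // free, no garbage collector.
    v.iter().sum()
}

fn main() {
    let numbers = Box::new(vec![1, 2, 3]); // heap allocation, owned by `numbers`
    let total = sum_owned(numbers); // ownership moves into the function
    // println!("{:?}", numbers); // compile error: use of moved value
    println!("total = {}", total); // prints "total = 6"
}
```

The commented-out line is the static analyzer at work: using `numbers` after ownership has moved is rejected before the program ever runs.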

References In Rust

It is that final point, trying to understand how Rust works, that I appreciate the compiler for. Removing the intensity of juggling memory allocation, while still offering the low-level benefits C has, lets me focus on learning Rust. A lot of the thinking energy I would otherwise have spent figuring out why my pointer allocations were causing memory errors has gone instead into getting a good grasp on what Rust is doing, and why. I’m very grateful for this, since it has helped me learn much better than before how references in Rust (and in general) work!

So, I’ve said a lot about how I’ve been learning Rust, but I haven’t said much about what those references are actually doing, to reflect my knowledge. I’ll do so now, although I might get something a bit wrong, so please don’t take my word over what the Rust documentation says.

Changes In Understanding

I first got acquainted with references and pointers (a variable that holds a reference) by way of C (just like enums, above!). My grappling with the complexity of programming with pointers left my understanding summed up as “a pointer ‘points’ to another value somewhere else in memory”. That’s certainly a workable definition, but since I have a hard time using a concept I can’t explain to myself, I was held back in trying to write working code that uses pointers. As I’ve described above, it’s working with Rust that has given me more space to learn.

I would now define a reference declaration as “a variable whose value is the memory address of another value”; that is to say, a pointer is a small value that holds a reference to another value that might be of any size in memory. The benefit of this is being able to pass around or copy the pointer much more efficiently than what is being pointed to. It’s important to understand the difference between the pointer itself (a position in memory) and what it points to (which could be anything, including another pointer).

In Rust, pointers are frequently used for anything more complicated than a number or a boolean. Knowing what’s going on when pointers are created or passed around makes it much easier both to write valid code the first time around and to figure out pointer errors. Rust has two simple ways to interact with pointers: with &, let y = &x makes y a reference to x; and with *, *y gives the value that y is pointing to, which is x.
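Those two operators can be put into a tiny runnable sketch:

```rust
fn deref_add(a: &i32, b: &i32) -> i32 {
    // `a` and `b` are references (addresses); `*` reads the values they
    // point to so the values themselves can be added.
    *a + *b
}

fn main() {
    let x = 5;
    let y = &x; // y holds a reference to x, not a copy of x
    assert_eq!(*y, 5); // *y follows the reference back to x's value

    let yy = &y; // a reference to a reference also works
    assert_eq!(**yy, 5); // each * peels off one level

    println!("deref_add(&2, &3) = {}", deref_add(&2, &3)); // prints 5
}
```

Tracing which level of indirection a variable sits at, as in the `**yy` line, is exactly the kind of reasoning that replaces guess-and-check cycling through `&` and `*`.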

When I got an error regarding references, I used to cycle through configurations of & and * on the variables raising problems until the compiler was happy, but this was an often frustrating and time-consuming experience. By slowly learning better how Rust handles references, I’ve become able to trace through the code myself to check my own logic, and figure out the correct answer!

Fifth post concluded.

http://nikkisquared.github.io/2016/02/22/things-ive-learned.html


Michael Kaply: Time For a (Job) Change

Monday, 22 February 2016, 22:38

I’ve always been passionate about Firefox in the enterprise. When I started Kaply Consulting eight years ago, I had always hoped that my primary work would be around enterprise Firefox. Unfortunately, that never really happened; enterprise work is less than 5% of what I do. So I do a lot of other work to support the things that I’m most passionate about. As those obligations have become bigger and bigger, I’ve found less and less time to work on the CCK2 and other enterprise related projects which I believe are important to the future of Firefox.

At the same time, with the changes coming to the way extensions are done in Mozilla, I even wondered if I wanted to continue doing what I do. After watching a video of myself (skip to 6:30) speaking about Mozilla 11 years ago, I came to the conclusion that Mozilla is in my blood. Heck, I’ve been doing this stuff since before Mozilla even existed (can you say Netscape 2.0.2?)

Combined with the increasing costs of being self employed and the increasing needs of my family, I’ve decided the best way forward for me and my family is to become a company man again.

And what better company to work for than…

Mozilla.

Yes, after years of working ON Mozilla, I’m finally working FOR Mozilla.

And I’m quite excited about the opportunity. I’ll be working on the partner distribution team at Mozilla.

I will continue to do enterprise work on the side as long as it doesn’t conflict with any of my Mozilla work, so there’s no need to worry about that. And since I won’t have to juggle as many things, I should have more free time to work on the CCK2.

TL;DR – I’m going to work for Mozilla. Kaply Consulting will still do CCK2 and enterprise.

https://mike.kaply.com/2016/02/22/time-for-a-job-change/


Mark Finkle: Fun with Telemetry: DIY User Analytics Lab in SQL

Monday, 22 February 2016, 22:31

Firefox on Mobile has a system to collect telemetry data from user interactions. We created a simple event and session UI telemetry system, built on top of the core telemetry system. The core telemetry system has been mainly focused on performance and stability. The UI telemetry system is really focused on how people are interacting with the application itself.

Event-based data streams are commonly used to do user data analytics. We’re pretty fortunate to have streams of events coming from all of our distribution channels. I wanted to start doing different types of analyses on our data, but first I needed to build a simple system to get the data into a suitable format for hacking.

One of the best one-stop sources for a variety of user analytics is the Periscope Data blog. There are posts on active users, retention and churn, and lots of other cool stuff. The blog provides tons of SQL examples. If I could get the Firefox data into SQL, I’d be in a nice place.

Collecting Data

My first step is performing a little ETL (well, the E & T parts) on the raw data using the Spark/Python framework for Mozilla Telemetry. I wanted to create two datasets:

  • clients: Dataset of the unique clients (users) tracked in the system. Besides containing the unique clientId, I wanted to store some metadata, like the profile creation date. (script)
  • events: Dataset of the event stream, associated to each client. The event data also has information about active A/B experiments. (script)

Building a Database

I installed Postgres on a Mac Mini (powerful stuff, I know) and created my database tables. Since I was periodically collecting the data via my Spark scripts, I couldn’t guarantee I wouldn’t re-collect data from previous jobs, so I couldn’t just bulk insert it. Instead, I wrote some simple Python scripts to quickly import the data (clients & events) while making sure not to create any duplicates.

[Image: fennec-telemetry-data]

I decided to start with 30 days of data from our Nightly and Beta channels. Nightly was relatively small (~330K rows of events), but Beta was more significant (~18M rows of events).

Analyzing and Visualizing

Now that I had my data, I could start exploring. There are a lot of analysis/visualization/sharing tools out there. Many are commercial and have lots of features. I stumbled across a few open-source tools:

  • Airpal: A web-based query execution tool from Airbnb. Makes it easy to save and share SQL analysis queries. Works with Facebook’s PrestoDB, but doesn’t seem to create any plots.
  • Re:dash: A web-based query, visualization and collaboration tool. It has tons of visualization support. You can set it up on your own server, but it was a little more than I wanted to take on over a weekend.
  • SQLPad: A web-based query and visualization tool. Simple and easy to set up, so I tried using it.

Even though I wanted to use SQLPad as much as possible, I found myself spending most of my time in pgAdmin: debugging queries, using EXPLAIN to make queries faster, and setting up indexes were all easier there. Once I got the basic things figured out, I was able to use SQLPad more efficiently. Below are some screenshots using the Nightly data:

[Screenshot: sqlpad-query]

[Screenshot: sqlpad-chart]

Next Steps

Now that I have Firefox event data in SQL, I can start looking at retention, churn, active users, engagement and funnel analysis. Eventually, we want this process to be automated, data stored in Redshift (like a lot of other Mozilla data) and exposed via easy query/visualization/collaboration tools. We’re working with the Mozilla Telemetry & Data Pipeline teams to make that happen.

A big thanks to Roberto Vitillo and Mark Reid for the help in creating the Spark scripts, and Richard Newman for double-dog daring me to try this.

http://starkravingfinkle.org/blog/2016/02/fun-with-telemetry-diy-user-analytics-lab-in-sql/


Air Mozilla: Mozilla Weekly Project Meeting, 22 Feb 2016

Monday, 22 February 2016, 22:00

Andrew Truong: Experience, Learn, Revitalize and Share: Introduction

Monday, 22 February 2016, 21:05
Back in December, I wrote that I would be making some blog postings. Unfortunately, due to catching pneumonia, that didn't take place.

Starting this month, I will be taking you through the journey of my life: how I got involved with Mozilla, and how I restructured the way I do things because of what I’ve experienced. Throughout the journey, my involvement with Mozilla also shaped who I am today, and how I’ve taken the skills I learned in the online world and used them in real life. This journey will start with my reflections on elementary school and end with the present day. However, these blog posts may not arrive in the order I experienced things, but rather in order of importance. The blog posts will not be used for me to rant; instead, they are to share my life’s voyage through times when I was high on life and times when it got really low. Mainly, the sole purpose of this is for me to share what I’ve experienced so far, what I took action on, and what I could’ve done better, and to end with that: Welcome to Experience, Learn, Revitalize and Share…

To be alerted of when my next blog post comes out, follow me on Instagram: https://instagram.com/drewtru

http://feer56.blogspot.com/2016/02/experience-learn-revitalize-and-share-introduction.html


QMO: Firefox 45 Beta 7 Testday Results

Monday, 22 February 2016, 19:15

Greetings mozillians!

Last week – on Friday, February 19th – we held Firefox 45.0 Beta 7 Testday, another successful event!

First, many thanks go out to Iryna Thompson, Dinesh Mv, gaby2300, Ilse Macias, Angel Antonio, Kushagra Varade, Bolaram Paul and Bangladesh Community: Hossain Al Ikram, Nazir Ahmed Sabbir, Azmina Akter Papeya, Khalid Syfullah Zaman, Shaily Roy, Samad Talukdar, Mohammad Maruf Islam, Amlan Biswas, Md. Tarikul Islam Oashi, Sayed Mohammad Amir, Kazi Nuzhat Tasnem, Forhad Hossain, Saddam Hossain, Tanvir Rahman, Tazin Ahmed, Sajedul Islam, Anika Nawar, Tahsan Chowdhury Akash, Maruf Rahman, Maruf Hasan Shakil, Habiba Bint Obaid, Sashoto Seeam, Sayed Ibn Masud, Kazi Sakib Ahmad, Fazle Rabbi, Anik Roy, Hasibul Hasan Shanto, Md. Almas Hossain Tushar, Rezaul Huque Nayeem, Bhaskar Sarkar, Mamunur Rashid Hridoy, Dhiman Roy, Asif Mahmud Shuvo, Wahiduzzaman Hridoy, Mohammed Jawad Ibne Ishaque, Mehedi Hasan, Md. Ehsanul Hassan, Shantanu Kumar Rahut, Mahadi Hasan, Md Kamrul Hasan, Ashickur Rahman, Asiful Kabir, Mohammad Mosfiqur Rahman, Md. Nazmus Shakib Robin and Md. Rahimul Islam for getting involved – your help is always greatly appreciated!

The etherpad was updated. Please follow the instructions regarding the failed cases.

Secondly, a big thank you to all our active moderators!

Results:

Keep an eye on QMO for upcoming events!

https://quality.mozilla.org/2016/02/49476/


Mark Finkle: Firefox on Mobile: A/B Testing and Staged Rollouts

Monday, 22 February 2016, 16:26

We have decided to start running A/B testing in Firefox for Android. These experiments are intended to optimize specific outcomes, as well as inform our long-term design decisions. We want to create the best Firefox experience we can, and these experiments will help.

The system will also allow us to throttle the release of features, called staged rollout or feature toggles, so we can monitor new features in a controlled manner across a large user base and a fragmented device ecosystem. If we need to roll back a feature for some reason, we’d have the ability to do that quickly, without needing people to update software.

Technical details:

  • Mozilla Switchboard is used to control experiment segmenting and staged rollout.
  • UI Telemetry is used to collect metrics about an experiment.
  • Unified Telemetry is used to track active experiments so we can correlate to application usage.

What is Mozilla Switchboard?

Mozilla Switchboard is based on Switchboard, an open source SDK for doing A/B testing and staged rollouts from the folks at KeepSafe. It connects to a server component, which maintains a list of active experiments.

The SDK creates a UUID, which is stored on the device. The UUID is sent to the server, which uses it to “bucket” the client, but the UUID is never stored on the server. In fact, the server does not store any data. The server we are using was ported from PHP to Node and is being hosted by Mozilla.
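The post doesn't show Switchboard's actual bucketing logic, but the idea of assigning clients without storing anything can be sketched: hash the UUID into one of, say, 100 buckets and compare against an experiment's rollout percentage. Everything below (the function names, the bucket count, the use of Rust's `DefaultHasher`) is an illustrative assumption, not KeepSafe's or Mozilla's implementation; a real system would use a hash that is stable across versions and platforms.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical: map a UUID string to a bucket in 0..100.
fn bucket_for(uuid: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    uuid.hash(&mut hasher);
    hasher.finish() % 100
}

// Hypothetical: a client is in the experiment if its bucket falls
// below the experiment's rollout percentage. Nothing about the client
// needs to be stored — the same UUID always hashes to the same bucket.
fn in_experiment(uuid: &str, rollout_percent: u64) -> bool {
    bucket_for(uuid) < rollout_percent
}

fn main() {
    let uuid = "b6e5f7a0-1234-4cde-9abc-000000000001";
    assert_eq!(bucket_for(uuid), bucket_for(uuid)); // assignment is stable
    assert!(in_experiment(uuid, 100)); // 100% rollout includes everyone
    assert!(!in_experiment(uuid, 0)); // 0% rollout includes no one
}
```

Raising `rollout_percent` over time is the staged-rollout behavior described earlier: clients already included stay included, and new buckets join as the threshold grows.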

We decided to start using Switchboard because it’s simple, open source, has client code for Android and iOS, saves no data on the server and can be hosted by Mozilla.

Planning Experiments

The Mobile Product and UX teams are the primary drivers for creating experiments, but as is common on the Mobile team, ideas can come from anywhere. We have been working with the Mozilla Growth team, getting a better understanding of how to design the experiments and analyze the metrics. UX researchers also have input into the experiments.

Once Product and UX complete the experiment design, Development would land code in Firefox to implement the desired variations of the experiment. Development would also land code in the Switchboard server to control the configuration of the experiment: On what channels is it active? How are the variations distributed across the user population?

Since we use Telemetry to collect metrics on the experiments, the Beta channel is likely our best place to run experiments: Telemetry is on by default on Nightly, Aurora and Beta, and Beta has the largest user base of those three channels.

Once we decide which variation of the experiment is the “winner”, we’ll change the Switchboard server configuration for the experiment so that 100% of the user base will flow through the winning variation.

Yes, a small percentage of the Release channel has Telemetry enabled, but it might be too small to be useful for experimentation. Time will tell.

What’s Happening Now?

We are trying to be very transparent about active experiments and staged rollouts. We have a few active experiments right now.

  • Onboarding A/B experiment with several variants.
  • Easy entry points for accessing History and Bookmarks on the main menu.
  • Experimenting with the awesomescreen behavior when displaying the search results page.

You can always look at the Mozilla Switchboard configuration to see what’s happening. Over time, we’ll be adding support to Firefox for iOS as well.

http://starkravingfinkle.org/blog/2016/02/firefox-on-mobile-abtesting-and-staged-rollouts/


The Mozilla Blog: Celebrating Our Ford-Mozilla Open Web Fellows, and Looking Ahead

Monday, 22 February 2016, 16:25

Today, the Internet we love and treasure is facing serious threats. Issues like mass surveillance and walled gardens, along with calls to weaken online security, increasingly endanger the Internet’s openness. Most recently, we saw the FBI ask Apple to circumvent their own devices’ security protections, setting a dangerous precedent that threatens consumers’ security. And in many parts of the world, especially emerging markets, inclusion and equality online aren’t guaranteed.

To address these threats, the Internet needs a new breed of advocate: individuals with both a technologist’s savvy and an activist’s zeal. We need advocates who can stand up for critical issues like privacy, inclusion, and literacy online, and ensure the Internet remains a public resource.

APPLY TODAY TO BE A FORD-MOZILLA OPEN WEB FELLOW

In 2015, Mozilla and the Ford Foundation launched the Open Web Fellows program to foster this type of advocate. We built an international leadership initiative to embed bright and passionate technology talent at leading civil society organizations. It’s a necessary step, and a topic we discussed in the Washington Post when the fellowship debuted.

Says Jenny Toomey, Ford Foundation’s Director of Internet Freedom:

“Technology is transforming every aspect of our world. But there aren’t enough technologists who are prepared to lend their vision to the public sector and make sure we’re building the kind of critical systems that can protect and empower us all. We need to make sure that the rights we have fought so hard to achieve are upheld and strengthened in the digital space. And we need to make sure that the people who are working to challenge inequality have the tools and infrastructure they need to do it well. That means having a diverse, creative cohort of public interest technologists, designers, and engineers working within civil society and governments.”

Now, almost 12 months later, we’re accepting applications for the second cohort of Open Web Fellows. This upcoming cohort of fellows will embed at civil society organizations on four continents.

We’re also celebrating the successes of our 2015 fellows. They’ve accomplished amazing things, which we share below.

To paint a clear picture of the Fellowship program and its impact, we asked our 2015 fellows to explain in their own words. Below, you’ll hear from Andrea Del Rio at Association for Progressive Communications (APC); Tennyson Holloway at Public Knowledge; Paola Villarreal at ACLU Massachusetts; Gem Barrett at Open Technology Institute (OTI); Drew Wilson at Free Press; and Tim Sammut at Amnesty International.

What does an Open Web Fellow look like?

There’s no formula or singular mold: Fellows are data architects and women’s rights activists, developers and designers. They hail from four countries and various points in their careers. But they share a common belief: The world can be made a better place by leveraging the open Internet.

“Open Web Fellows are very talented technical people, but also have a sense of social duty. That’s what sets us apart.” — Andrea Del Rio

“We all share a core understanding that an open Internet is important to modern society, and it needs to be protected.” — Tennyson Holloway

“We all come at the open Web from different angles. We’re passionate about one topic, but come at it from a range of backgrounds with a holistic approach.” — Gem Barrett

What does an Open Web Fellow do?

Fellows dream up and create projects at the intersection of the Internet and civil society. They write code, develop apps, and pen blog posts. They host podcasts, attend conferences, and lead workshops. More broadly, fellows engage with the most important issues facing the Internet today: surveillance, inclusion, equality. Fellows work across organizations, borders, and time zones, networking and collaborating with like-minded technologists and do-gooders.

“We’re pioneers. We’re technologists working for nonprofits doing relevant work.” — Andrea Del Rio

“It’s about data and open source tools and advocacy. It’s about benefiting from the open Web.” — Paola Villarreal

“It’s about making sure the Internet remains open and accessible for everyone. It’s also about expanding freedoms online to more people globally.” — Tim Sammut

“Fellows are interested in social change activities in the long-term.” — Drew Wilson

Why apply?

Open Web Fellows have the opportunity to fight on the front lines of the open Internet movement. They help some of the world’s most established NGOs and civil society organizations navigate the vibrant and increasingly important realm of Internet advocacy. And fellows build valuable relationships with like-minded advocates.

“Having access to all these tools, information, support, people, and resources is a life-changing experience. The work I have been doing during this fellowship has had, and will have, an impact.” — Paola Villarreal

“You’re part of a much larger movement, and that’s definitely rewarding.” — Tennyson Holloway

“If someone is passionate about improving the world, and wants a springboard into doing that as a career, this is for them.” — Gem Barrett

“One of the best things about the fellowship has been the people I met in the Internet freedom community.” — Tim Sammut

What are the host organizations?

Key to the Open Web Fellow program are our host organizations: leading nonprofits around the globe devoted to improving the Internet and the lives of everyone it touches. Host organizations have diverse ambits, from law and human rights to gender equality and press freedom.

Our 2015 host organizations are the American Civil Liberties Union (ACLU) Massachusetts, Amnesty International, Association for Progressive Communications (APC), Free Press, Open Technology Institute (OTI), and Public Knowledge.

Our 2016 host organizations are Centre for Intellectual Property and Information Technology Law, The Citizen Lab at Munk School of Global Affairs, ColorOfChange.org, Data & Society, Derechos Digitales, European Digital Rights (EDRi), Freedom of the Press Foundation, and Privacy International.

“The Open Web Fellows Program connects the incredible wealth of tech talent with the justice-minded organizations that so badly need their skills. This is not just about bringing technologists into civil society organizations and government, but about strengthening critical institutions and helping them rise to meet the challenges of the digital age — some of which haven’t even been identified yet.” — Jenny Toomey, Director of Internet Freedom, Ford Foundation

Our 2015 Fellows

Andrea Del Rio, Association for Progressive Communications (APC)

Andrea Del Rio is embedded at APC, the South Africa-based nonprofit expanding women’s rights and gender equality online with focus on the global south. Andrea’s digital savvy allows APC to advance this mission and present their work in a more dynamic and impactful way. Andrea is crafting an interactive platform for APC’s “Feminist Principles of the Internet,” a treatise bridging the gap between the feminist movement and the Internet rights movement. She is transforming the static document into an interactive community where activists can talk and share resources. When complete, the platform will live at http://feministinternet.net/.

“I try to make a difference on the user interface.”

In November 2015, Andrea led a gender equality session at MozFest, Mozilla’s annual celebration of the open Internet. The session — “A Feminist Internet in 140 Characters” — brought together diverse makers, designers, and technologists who authored a list of open Web feminist principles.

“The feminist principles of the Internet should be relevant to anyone who loves the Internet and is interested in gender equality.”

During her tenure as a fellow, Andrea has traveled to Malaysia, the Philippines, Mexico, the U.S., the UK, and South Africa.

Tennyson Holloway, Public Knowledge

Tennyson works alongside Public Knowledge, the advocacy organization based in Washington, DC. Its scope: issues at the intersection of public interest and technology. Public Knowledge explores and comments on troubling corporate mergers; advocates for issues like net neutrality; and upholds consumer protections. It’s here that Tennyson functions as a sorely-needed technologist among lawyers and policy experts.

Tennyson is also devoted to a series of self-directed projects. He created the SMS Vote Updater, a tool for subscribing to and monitoring legislators’ voting. Users text their zip code to the service and quickly receive a list of relevant legislators. Users then subscribe to select policymakers — and when policymakers vote on a bill in Congress, users get a notification detailing the vote and bill.

“I was really excited to build this. I like the idea of increasing access. You can keep an eye on your legislator.”

Tennyson is also building whatcanidofortheinternet.org, a collection of resources and stories that detail how individuals can contribute to the Internet. The site motivates others to improve the Internet, and serves as a friendly gateway to the open Internet movement.

Alongside Fellows Andrea Del Rio and Drew Wilson, Tennyson produces the NetPosi podcast. The trio interviews technologists making a mark in the world of activism (or vice versa). Guests include Cory Doctorow and Wendy Seltzer.

“It’s a podcast about the intersection of activism and technology.”

Paola Villarreal, ACLU Massachusetts

Paola is embedded at ACLU Massachusetts, a staple in the fight for individual rights and liberties. Here, Paola brings a technologist’s savvy (16 years of IT experience) to the world of social justice. She writes code and analyzes gigabytes of data to battle inequality.

Paola’s capstone work is Data for Justice, an ambitious initiative that connects activists with data so they can drive change in their communities.

“It empowers activists and advocates to make data-driven decisions.”

Specifically, Paola’s project analyzes data from the Boston Police Department and several other sources, spotlighting discriminatory practices. Findings are then showcased using a data visualization framework, titled Augmented Narrative Toolkit, developed explicitly for this project. And in true open source form, the Data for Justice project can be adapted to other cities around the world.

Paola has also traveled extensively as an Open Web Fellow, plugging into pockets of the open Internet movement around the world. She has attended and spoken at open source and open government gatherings in Mexico City, London, Hamburg, at Harvard University, and beyond.

Gem Barrett, Open Technology Institute (OTI)

Gem works with Open Technology Institute (OTI), an arm of New America focusing on open source innovation. It’s here that Gem writes, programs, designs, and speaks. Specifically, Gem is helping OTI build out their transparency and open data initiatives.

“An open Web fellow gets embedded into an organization and offers their unique skills in order to promote the open Web.”

Gem is also committed to a range of satellite projects. She’s penned articles about making the open source ecosystem more inclusive (here and here), and planned events that explore the intersection of gaming and social justice.

“You have the freedom to explore other opportunities outside of the organization for promoting your passion.”

Drew Wilson, Free Press

Drew works with Free Press, a nonprofit that advocates for a healthy and free fourth estate. Here, Drew has helped shape Internet 2016, an initiative to inject net policy issues into the 2016 election discourse. The approach is decidedly grassroots: Internet 2016 galvanizes Internet advocates to dog politicians on topics like mass surveillance, encryption, and access. Drew lent both his tech and advocacy acumen, building the web page and consulting on campaign content.

Drew has also tackled a number of personal projects. He co-hosts the NetPosi podcast alongside fellows Andrea and Tennyson.

“NetPosi shares stories from people who do interesting work at the intersection of social change and technology.”

Drew curates Tools for Activism, a resource that lists digital tools for activists and technologists. It recently snagged front-page real estate on GitHub. Drew also built a couple of experimental web-tools-for-activism prototypes: a meme generator targeting 2016 presidential candidates (the goal: “empower people to be more politically engaged online”), and Printernet, a web application that assists small NGOs with print mailings.

Tim Sammut, Amnesty International

Amnesty International stands up for human rights around the globe. As an Open Web Fellow, Tim furthers this mission in a 21st-century fashion. Currently, Tim is helping Amnesty pilot GlobaLeaks, an open source platform for the safer submission of sensitive information.

Tim is also building the Secure Communications Framework, a reference model for human rights researchers and activists seeking the right tools and practices for sensitive work. It’s a matrix for identifying safer, more secure, and reliable channels for carrying out work in dangerous regions. The framework can help people maintain privacy and avoid arrest, detention, or worse.

“[It’s for] a researcher that may be an expert in their field with first-hand knowledge of the challenges that surround them, but is uncertain which digital tools and practices will enable their work without simultaneously undermining their safety.”

https://blog.mozilla.org/blog/2016/02/22/celebrating-our-ford-mozilla-open-web-fellows-and-looking-ahead/


Mozilla Open Policy & Advocacy Blog: Mozilla stands up for public participation and openness in Trans-Pacific Partnership

Monday, 22 February 2016, 12:28

The Trans-Pacific Partnership (TPP), like many modern trade deals, encompasses complex aspects of Internet policy, yet the voice of the Internet community was excluded from the nearly decade-long negotiations. As a result, the balance shifts away from users and the public interest. It is our belief that effective global Internet policy and governance decisions can’t be made without openness, and that the TPP’s processes fail in this regard.

The lack of open processes and public discussion is a primary concern for us because:

  • Global Internet policy issues, including copyright and free expression, are complex and impact the core of openness online in ways that can’t be solved in isolation;
  • Openness is core to both the Internet (including Internet governance) and Mozilla’s mission and values; and
  • When Internet policy decisions and processes lack openness, lack of participation means that user interests are often undervalued and underserved.

We have seen this same thing happen in the past. In January 2012, PIPA/SOPA attempted to create intellectual property policy without public input. At the end of the same year, the World Conference on International Telecommunications (WCIT) attempted to build Internet governance processes without a public role. In both cases, public pressure prevailed and defeated these threats to openness and public benefit. Our concern is that when these same threats come cloaked within trade deals, they may not be visible as threats until the damage has already been done.

In the final draft of the TPP, we see copyright losing ground with the balance tipping away from users and the public interest and towards businesses built on IP maximization. Provisions are strong where the rights of some major institutions and traditional business models are at stake, such as implementing software patent frameworks, expanding copyright terms (with retroactive effect), and establishing minimum damages for copyright infringement. Yet, the provisions that have been added to support the rights of the public are softer, including those related to public domain and limitations and exceptions to copyright.

At the end of January 2016, the Electronic Frontier Foundation (EFF) organized a two-day strategy summit in Brussels on reforming trade negotiation processes. Over 30 diverse organizations, including Mozilla, came together to collectively discuss strategy and tactics for improving transparency in the negotiation processes for current and future trade deals. The result is a declaration, released today, which Mozilla has signed.

While we recognize there may be compelling reasons for sensitivity in some of the negotiations of the TPP and other trade agreements, our view is that these processes are not appropriate to resolve global Internet policy challenges. The future of Internet policy and governance issues must be determined through open and transparent processes that allow all voices to be heard and all rights to be fairly weighed. We look forward to working alongside other stakeholders to collectively forge needed reform of trade deals like the TPP.

https://blog.mozilla.org/netpolicy/2016/02/22/mozilla-stands-up-for-public-participation-and-openness-in-trans-pacific-partnership/


Mike Taylor: Dispatching legacy webkit prefixed events (but only some of the time)

Monday, February 22, 2016, 09:00

Here's a twist on the classic "browser has bug, developers change to work around it, sites depend on it, browser has to implement weird workaround to fix their bug without breaking those sites, and then other browsers need to match weird workaround bug compatibility" cycle.

In this case, Bug 1236930 reported that zooming in on Strava maps only worked once. Strava (and a ton of other popular sites) uses a slick little mapping library called Leaflet.js, and if our work to ship WebKit-prefixed CSS and DOM aliases (check out Bug 1170774) is actually going to, uh, ship, we obviously can't break it.

You can read the whole bug later; here's the problem distilled to two lines of JS (see if you can find it):

o.DomUtil.TRANSFORM=o.DomUtil.testProp(
   ["transform","WebkitTransform",
    "OTransform","MozTransform","msTransform"]),
o.DomUtil.TRANSITION=o.DomUtil.testProp(
   ["webkitTransition","transition",
    "OTransition","MozTransition","msTransition"])

You see how they're trying to do the nice thing and test for the right transition and transform properties to use?

Once they know that, they construct the TRANSITION_END string to attach listeners with elsewhere in the code.

o.DomUtil.TRANSITION_END="webkitTransition"===o.DomUtil.TRANSITION||"OTransition"===o.DomUtil.TRANSITION?o.DomUtil.TRANSITION+"End":"transitionend",

Did you notice how o.DomUtil.TRANSITION is actually testing for webkitTransition before the prefixless transition? (I actually missed that my first time staring at this code, classic rookie move).
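Pulled out of the minified snippet above into a standalone function (the function name is mine, not Leaflet's API), the selection logic looks like this:

```javascript
// Mirror of Leaflet's TRANSITION_END selection above: the prefixed
// properties get a camel-cased "End" suffix, while the standard
// property maps to the all-lowercase "transitionend" event name.
// (Standalone sketch with a hypothetical function name.)
function transitionEndName(transitionProp) {
  if (transitionProp === "webkitTransition" || transitionProp === "OTransition") {
    return transitionProp + "End"; // "webkitTransitionEnd" / "OTransitionEnd"
  }
  return "transitionend";
}

console.log(transitionEndName("webkitTransition")); // "webkitTransitionEnd"
console.log(transitionEndName("transition"));       // "transitionend"
```

So on a browser where the feature test lands on webkitTransition first, Leaflet attaches its listeners to webkitTransitionEnd and never hears the unprefixed event.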

Once upon a time, Leaflet.js did the logical thing and tested for prefixless transition first, but in this sweet bugfix commit, that changed.

You can click through to get references to the bug it fixed, but here's a comment in the patch that gives you a gist of why they did this:

// webkitTransition comes first because some browser versions that drop vendor prefix don't do
// the same for the transitionend event, in particular the Android 4.1 stock browser

So at some point in time*, some stock Android browser versions supported prefixless CSS transitions, but forgot to unprefix the transitionend event. And websites broke, and libraries updated to work around them.
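One defensive pattern a site could have used instead (a sketch of my own, not what Leaflet does): listen under both event names and guard so the handler runs only once, even on a browser that dispatches both.

```javascript
// Attach a handler under both the unprefixed and prefixed names;
// the `fired` guard ensures it runs once even if the browser
// dispatches both events for the same transition.
// addListener is any (name, fn) registrar, e.g. a bound addEventListener.
function onTransitionEndOnce(addListener, handler) {
  var fired = false;
  ["transitionend", "webkitTransitionEnd"].forEach(function (name) {
    addListener(name, function (event) {
      if (fired) return;
      fired = true;
      handler(event);
    });
  });
}
```

In a browser this would be called as `onTransitionEndOnce(el.addEventListener.bind(el), fn)`; the trade-off is a stray listener under whichever name never fires.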

So we added support to sometimes send webkit prefixed transitionend events (and animationend, animationiteration and animationstart) to Gecko in bug 1236979, matching WebKit, Blink, and Edge's behavior.

If you want more details on when to send those events, check out the bug. Or for extra credit, read the DOM spec. We updated that too.

(* Wikipedia says Jelly Bean was released in June 2012, which was when Gotye's 'Somebody That I Used to Know' Feat. Kimbra was the #1 song so I guess we all sort of deserve this mess, honestly.)

https://miketaylr.com/posts/2016/02/dispatching-magical-webkit-prefixed-events.html


This Week In Rust: This Week in Rust 119

Monday, February 22, 2016, 08:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

  • termpix. Draw images in an ANSI terminal.
  • tarpc. An RPC framework for Rust with a focus on ease of use.
  • Rust + Haskell experiments in software rasterization, N-Body simulation and Game of Life.
  • rust-wlc. Rust bindings for wlc, the Wayland compositor library.
  • Afterparty. A library for building GitHub webhook integrations in Rust.
  • Hubcaps. A Rust interface for GitHub.

Updates from Rust Core

107 pull requests were merged in the last week.

Notable changes

New Contributors

  • Chad Shaffer
  • Gökhan Karabulut
  • Jack O'Connor
  • rphmeier
  • Vlad Ureche

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

GSoC Project

Hi students! Looking for an awesome summer project in Rust? Look no further! Chris Holcombe from Canonical is an experienced GSoC mentor and has a project to implement CephX protocol decoding. Check it out here.

Crate of the Week

This week's Crate of the Week is Diesel, a rustic, typesafe, extensible object-relational mapper and query builder. Just go to their site; the examples speak for themselves.

Thanks to LilianMoraru and DroidLogician for the suggestion.

Submit your suggestions for next week!

Quote of the Week

There is essentially no webpage out there that it cannot get through at multiple hundreds of frames-per-second

pcwalton on Servo's astonishing new WebRender technology.

Thanks to adwhit for the suggestion.

Submit your quotes for next week!

https://this-week-in-rust.org/blog/2016/02/22/this-week-in-rust-119/


