Created: 19.06.2007

Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Original source - http://planet.mozilla.org/.
This feed is generated from the public RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.

Robert O'Callahan: GNOME High-DPI Issues

Saturday, April 2, 2016, 23:51

Most sources advise using gnome-tweak-tool to set "Window Scaling" to 2 (or whatever). This alone doesn't work adequately for me; when I make my external 4K monitor primary, gnome-settings-daemon decides to set Xft.dpi to 96 because it thinks my monitor DPI is too low for scaling, even though I have set window scaling to 2. Arguably this is a gnome-settings-daemon bug. The result is that most apps look fine but some, including gnome-shell, care about Xft.dpi and don't scale. I fixed the problem by manually setting

gsettings set org.gnome.desktop.interface scaling-factor 2
and now everything's working reasonably well. Some apps still don't support proper scaling (e.g. Eclipse) but at least they're usable when they weren't before. Still, on the Web we really did this right: make all apps work at high-DPI out of the box, scaling up individual bitmaps when necessary and rendering at high resolution automatically when we can (e.g. text), and letting DPI-aware apps opt into higher-resolution bitmaps.
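As a sketch of the workaround described above (schema paths are for GNOME 3.x; verify them on your system before relying on this):

```shell
# Set integer window scaling manually (the workaround from the post).
gsettings set org.gnome.desktop.interface scaling-factor 2

# Confirm the value took effect.
gsettings get org.gnome.desktop.interface scaling-factor

# Inspect the Xft.dpi value gnome-settings-daemon computed; if it still
# says 96 on a HiDPI display, the daemon ignored the window scaling.
xrdb -query | grep -i 'Xft.dpi'
```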

My Dell XPS15 with 4K screen, and an external Dell 4K monitor, look sweet. The main problem remaining is that you can't drive a 4K monitor at 60Hz over HDMI, and this laptop doesn't have DisplayPort built in; I need a Thunderbolt3-to-DisplayPort adapter, which is supposed to exist but which you can hardly find anywhere yet. Downside of bleeding edge tech.

Google Photos does the Right Thing by scaling images on the server to the specific window size. So you can upload original-size photos and it'll automatically pull down a massive image when you're viewing fullscreen on a 4K monitor. Now I can really tell the difference between my camera and a good camera.

Enabling touch events in Firefox and launching with MOZ_USE_XINPUT2=1 gives me touch event support in my laptop (both for Web apps and async scrolling); nice.

Forcing UI scale factors to be an integer is a mistake. In Gecko we can scale Web content by non-integer amounts and it almost never fails (or at least the same failure would have occurred with an integer scale), so I'm pretty sure there's no reason regular desktop toolkits can't do the same.

http://robert.ocallahan.org/2016/04/gnome-hidpi-issues.html


Support.Mozilla.Org: Mozillian Profile: Heather

Saturday, April 2, 2016, 13:00

Hello, SUMO Nation!

We’re back to hearing more from you about your Mozillian story. This time, I have the pleasure and honour to share Heather’s inspiring story with you:

Smile!

I contribute to SUMO because it’s something that inspires me and has helped me realize the potential of my writing superpowers. The open source movement and community is why I love Firefox.  Where else can you be considered a part of something bigger than yourself and also be seen as a part of the whole picture, even if you are not an official Mozilla employee?

I do it because I want to. I do it because it’s something I believe in. I do it for the connection to Mozilla, its people, the community and its efforts to continue to support and instil the open web practices and privacy beliefs.

I began my involvement with SUMO in November 2014. My search for jobs in the technical writing field is what led me to Mozilla. I had gotten my associate's degree and was seeking employment as a technical writer. Little did I know, one does not simply become a technical writer because one has taken a course and received an IT degree – silly me :-).

I did my research and saw that experience was essential on the path to becoming a technical writer. Well, how do I get experience, if experience is what employers seek? Here's where I pretty much stumbled into becoming a volunteer contributor for SUMO.

Putting together my web research, reading testimonials from those who had begun on the same path as mine, the advice I got was to become a volunteer writer for an open source project. When I saw that Mozilla welcomed volunteer contributors with open arms, I stuck my foot in the door. I took a step forward and haven't looked back since.

It’s been an amazing ride and I feel I have Mozilla employees and contributors to thank for helping me jump start my current path into being a freelance technical writer.  Without the community support and engagement I have thrived on as being part of SUMO, I don’t know how else I would have gotten here.

I have done some support for users in the Mozilla support forums, but feel like my writing for the Knowledge Base has been the more substantial part of my contributions. I have not only written original content with the backing of Joni (the brain behind the KB), I’ve also contributed to previously written articles by other amazing contributors and staff.  I’ve been able to learn how to use wikis, create and add screenshots to enhance the Knowledge Base articles, and just be part of the team.

I have gotten to work with the amazing Roland on the iOS User Success Team along with other SUMO volunteers and the likes of Madalina and Michal (the other brains behind the scenes). Meeting and collaborating with the SUMO team during the Whistler Work Week in Canada in June 2015 was quite an epic thing to take part in as well.

SUMO is more than just being a volunteer contributor to me. It's been an amazing experience, and I continually look forward to being involved with the SUMO team for the (un)foreseeable future.

Thank you, Heather! We’re happy to hear that SUMO made it easier for you to do what you like outside of Mozilla :-).

Do you want to write a guest blog post for our blog? Let us know in the comments or contact Michal directly.

https://blog.mozilla.org/sumo/2016/04/02/mozillian-profile-heather/


Jeff Walden: Quote of the day

Friday, April 1, 2016, 22:24

April 1. This is the day upon which we are reminded of what we are on the other three hundred and sixty-four.
—Pudd’nhead Wilson’s Calendar

(Or three hundred and sixty-five, this year.)

http://whereswalden.com/2016/04/01/quote-of-the-day-5/


Dave Herman: Native JS Classes in Neon

Friday, April 1, 2016, 18:47

Last weekend I landed a PR that adds support for defining custom native classes in Neon. This means you can create JavaScript objects that internally wrap—and own—a Rust data structure, along with methods that can safely access the internal Rust data.

As a quick demonstration, suppose you have an Employee struct defined in Rust:

pub struct Employee {
    id: i32,
    name: String,
    // etc ...
}

You can expose this to JS with the new declare_types! macro:

declare_types! {

    /// JS class wrapping Employee records.
    pub class JsEmployee for Employee {

        init(call) {
            let scope = call.scope;
            let id = try!(try!(call.arguments.require(scope, 0)).check::<JsInteger>());
            let name = try!(try!(call.arguments.require(scope, 1)).to_string());
            // etc ...
            Ok(Employee {
                id: id.value() as i32,
                name: name.value(),
                // etc ...
            })
        }

        method name(call) {
            let scope = call.scope;
            let this: Handle<JsEmployee> = call.arguments.this(scope);
            let name = try!(vm::lock(this, |employee| {
                employee.name.clone()
            }));
            Ok(try!(JsString::new_or_throw(scope, &name[..])).upcast())
        }
    }

};

This defines a custom JS class whose instances contain an Employee record. It binds JsEmployee to a Rust type that can create the class at runtime (i.e., the constructor function and prototype object). The init function defines the behavior for allocating the internals during construction of a new instance. The name method shows an example of how you can use vm::lock to borrow a reference to the internal Rust data of an instance.

From there, you can extract the constructor function and expose it to JS, for example by exporting it from a native module:

register_module!(m, {
    let scope = m.scope;
    let class = try!(JsEmployee::class(scope));       // get the class
    let constructor = try!(class.constructor(scope)); // get the constructor
    try!(m.exports.set("Employee", constructor));     // export the constructor
});

Then you can use instances of this type in JS just like any other object:

var Employee = require('./native').Employee;

var lumbergh = new Employee(9001, "Bill Lumbergh");
console.log(lumbergh.name()); // Bill Lumbergh

Since the methods on Employee expect this to have the right binary layout, they check to make sure that they aren’t being called on an inappropriate object type. This means you can’t segfault Node by doing something like:

Employee.prototype.name.call({});

This safely throws a TypeError exception just like methods from other native classes like Date or Buffer do.

Anyway, that’s a little taste of user-defined native classes. More docs work to do!

http://calculist.org/blog/2016/04/01/native-js-classes-in-neon/?utm_source=all&utm_medium=Atom


Air Mozilla: Participation Q1 Demos - 2016

Friday, April 1, 2016, 17:00

Participation Q1 Demos - 2016 Join the Participation Team for our final Demos Presentation of the quarter. Find out what we've been working on, accomplished, and learned in the past...

https://air.mozilla.org/participation-q1-demos-2016/


Karl Dubost: [worklog] Atami, a mix of new and old

Friday, April 1, 2016, 12:20

I was in Atami last Sunday, a city I like very much, because it is a bit of an image of the Web. It is broken everywhere. There are many ghost buildings, abandoned real estate projects, failed businesses, and the flow of new tourists and useless souvenirs. There's something about Atami which is pulling me down to Earth. Tune of the week: Gravity - John Mayer.

Webcompat Life

Progress this week:

Today: 2016-04-02T07:40:06.748767
381 open issues
----------------------
needsinfo       3
needsdiagnosis  125
needscontact    42
contactready    97
sitewait        115
----------------------

You are welcome to participate

We had a webcompat team meeting on March 29, 2016.

Upgraded Mercurial, as there is a security issue.

Adam Stevenson is restarting on the WebCompat team, and that's very good news for WebCompat. That's also cool for the Work week in London.

Talking about London: trying to book my trip from Normandy, France, to London. I will be flying from Japan to France in June 2016, so I will go to London from Dieppe by boat + train.

Webcompat issues

(a selection of some of the bugs worked on this week).

Gecko Bugs

Webcompat.com development

  • Mike started a discussion about classes of webcompat issues on webcompat.com. The discussion is harder than expected, because we maybe don't know what issue we are trying to solve. And maybe it's fine to drift. In the computing world, we sometimes have a tendency to be too focused (problem solvers). There is also room, and a need, for drifting, to be sure to explore the world and discover the hidden areas we can't see (ice plates under the snow). We will figure it out in the end. A kind of who, what, when, where, why, and how.
  • Dushalni will probably restart the work on documentation.
  • I started working on optimizing uploaded images a couple of weeks ago, and found there was an issue in the Pillow library with regard to animated GIFs. So this week I decided to take a deep breath and dive a bit deeper into the library.

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources for Firefox only.

Otsukare!

http://www.otsukare.info/2016/04/01/worklog-cactus


Brian Birtles: Gecko insiders

Friday, April 1, 2016, 10:09

At Mozilla Japan, we’ve been doing a series of monthly events called “Gecko inside” where we discuss details of hacking on Gecko in the hope of helping each other learn and helping new contributors to get started.

Last weekend we held a special “write a patch” day where we gathered a group of long-time contributors to mentor first-time contributors through the process of setting up a build environment, writing a patch, and getting it reviewed and landed.

gecko-inside

We fixed nearly a dozen bugs on the day and if you were hanging around on #developers about that time, you might have been surprised at the stream of Japanese names ticking by.

lots of Japanese commits

It was a fun event with veterans and first-time contributors alike asking if we could do it again.

Gecko internals

In keeping with the topic of new contributors, we were recently very pleased to have Ryo Motozawa join us for an internship during his university’s winter break. Ryo came to us with more curiosity than experience but quickly found his way around implementing WebIDL interfaces, animation timing features, and a number of DevTools features including exposing detailed animation performance information (due to land any day now!) using an interface Hiro recently built—all in just 2 months! Nice work Ryo!

ryo

(And, before you mention it, there’s already a bug to fix the text in that tooltip!)

Gecko geeks

Some of the other notable things we’ve been plugging away at here in Japan include:

Also, while far from a Japan-only effort, another animation-related feature I should mention is that thanks to platform work from Boris Chiou and Daisuke Akatsuka, and DevTools work from Patrick Brosset, the animation inspector now finally works with animations on pseudo-elements!

pseudo

They’re just a few of the things we’re excited about at the moment. Oh, and this view!

sakura-2


https://birtles.wordpress.com/2016/04/01/gecko-insiders/


Francois Marier: How Safe Browsing works in Firefox

Friday, April 1, 2016, 09:00

Firefox has had support for Google's Safe Browsing since 2005 when it started as a stand-alone Firefox extension. At first it was only available in the USA, but it was opened up to the rest of the world in 2006 and moved to the Google Toolbar. It then got integrated directly into Firefox 2.0 before the public launch of the service in 2007.

Many people seem confused by this phishing and malware protection system and while there is a pretty good explanation of how it works on our support site, it doesn't go into technical details. This will hopefully be of interest to those who have more questions about it.

Browsing Protection

The main part of the Safe Browsing system is the one that watches for bad URLs as you're browsing. Browsing protection currently protects users from:

  • sites hosting malware
  • sites distributing unwanted software
  • phishing (web forgery) sites

If a Firefox user attempts to visit one of these sites, a warning page will show up instead, which you can see for yourself here:

The first two warnings can be toggled using the browser.safebrowsing.malware.enabled preference (in about:config) whereas the last one is controlled by browser.safebrowsing.enabled.

List updates

It would be too slow (and privacy-invasive) to contact a trusted server every time the browser wants to establish a connection with a web server. Instead, Firefox downloads a list of bad URLs every 30 minutes from the server (browser.safebrowsing.provider.google.updateURL) and does a lookup against its local database before displaying a page to the user.

Downloading the entire list of sites flagged by Safe Browsing would be impractical due to its size so the following transformations are applied:

  1. each URL on the list is canonicalized,
  2. then hashed,
  3. of which only the first 32 bits of the hash are kept.

The lists that are requested from the Safe Browsing server and used to flag pages as malware/unwanted or phishing can be found in urlclassifier.malwareTable and urlclassifier.phishTable respectively.

If you want to see some debugging information in your terminal while Firefox is downloading updated lists, turn on browser.safebrowsing.debug.

Once downloaded, the lists can be found in the cache directory:

  • ~/.cache/mozilla/firefox/XXXX/safebrowsing/ on Linux
  • ~/Library/Caches/Firefox/Profiles/XXXX/safebrowsing/ on Mac
  • C:\Users\XXXX\AppData\Local\mozilla\firefox\profiles\XXXX\safebrowsing\ on Windows

Resolving partial hash conflicts

Because the Safe Browsing database only contains partial hashes, it is possible for a safe page to share the same 32-bit hash prefix as a bad page. Therefore when a URL matches the local list, the browser needs to know whether or not the rest of the hash matches the entry on the Safe Browsing list.

In order to resolve such conflicts, Firefox requests from the Safe Browsing server (browser.safebrowsing.provider.mozilla.gethashURL) all of the hashes that start with the affected 32-bit prefix and adds these full-length hashes to its local database. Turn on browser.safebrowsing.debug to see some debugging information on the terminal while these "completion" requests are made.

If the current URL doesn't match any of these full hashes, the load proceeds as normal. If it does match one of them, a warning interstitial page is shown and the load is canceled.

Download Protection

The second part of the Safe Browsing system protects users against malicious downloads. It was launched in 2011, implemented in Firefox 31 on Windows and enabled in Firefox 39 on Mac and Linux.

It roughly works like this:

  1. Download the file.
  2. Check the main URL, referrer and redirect chain against a local blocklist (urlclassifier.downloadBlockTable) and block the download in case of a match.
  3. On Windows, if the binary is signed, check the signature against a local whitelist (urlclassifier.downloadAllowTable) of known good publishers and release the download if a match is found.
  4. If the file is not a binary file then release the download.
  5. Otherwise, send the binary file's metadata to the remote application reputation server (browser.safebrowsing.downloads.remote.url) and block the download if the server indicates that the file isn't safe.
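The five steps above can be sketched as a single decision function. The names here are illustrative, not Firefox internals, and remoteVerdict stands in for the reputation-server lookup of step 5.

```javascript
function checkDownload(download, blocklist, signerWhitelist, remoteVerdict) {
  // Step 2: main URL, referrer and redirect chain against the local blocklist.
  const urls = [download.url, download.referrer, ...download.redirects];
  if (urls.some(u => blocklist.has(u))) {
    return 'blocked';
  }
  // Step 3: signed binaries from whitelisted publishers are released early.
  if (download.isBinary && download.signer && signerWhitelist.has(download.signer)) {
    return 'released';
  }
  // Step 4: non-binary files never trigger a remote lookup.
  if (!download.isBinary) {
    return 'released';
  }
  // Step 5: the only point where metadata leaves the machine.
  return remoteVerdict(download) === 'safe' ? 'released' : 'blocked';
}
```

Note how the ordering matters: the early returns in steps 3 and 4 are what keep the remote lookup of step 5 a last resort.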

Blocked downloads can be unblocked by right-clicking on them in the download manager and selecting "Unblock".

While the download protection feature is automatically disabled when malware protection (browser.safebrowsing.malware.enabled) is turned off, it can also be disabled independently via the browser.safebrowsing.downloads.enabled preference.

Note that Step 5 is the only point at which any information about the download is shared with Google. That remote lookup can be suppressed via the browser.safebrowsing.downloads.remote.enabled preference for those users concerned about sending that metadata to a third party.

Types of malware

The original application reputation service would protect users against "dangerous" downloads, but it has recently been expanded to also warn users about unwanted software and software that's not commonly downloaded.

These various warnings can be turned on and off in Firefox through the following preferences:

  • browser.safebrowsing.downloads.remote.block_dangerous
  • browser.safebrowsing.downloads.remote.block_dangerous_host
  • browser.safebrowsing.downloads.remote.block_potentially_unwanted
  • browser.safebrowsing.downloads.remote.block_uncommon

and tested using Google's test page.

If you want to see how often each "verdict" is returned by the server, you can have a look at the telemetry results for Firefox Beta.

Privacy

One of the most persistent misunderstandings about Safe Browsing is the idea that the browser needs to send all visited URLs to Google in order to verify whether or not they are safe.

While this was an option in version 1 of the Safe Browsing protocol (as disclosed in their privacy policy at the time), support for this "enhanced mode" was removed in Firefox 3 and the version 1 server was decommissioned in late 2011 in favor of version 2 of the Safe Browsing API which doesn't offer this type of real-time lookup.

Google explicitly states that the information collected as part of operating the Safe Browsing service "is only used to flag malicious activity and is never used anywhere else at Google" and that "Safe Browsing requests won't be associated with your Google Account". In addition, Firefox adds a few privacy protections:

  • Query string parameters are stripped from URLs we check as part of the download protection feature.
  • Cookies set by the Safe Browsing servers to protect the service from abuse are stored in a separate cookie jar so that they are not mixed with regular browsing/session cookies.
  • When requesting complete hashes for a 32-bit prefix, Firefox throws in a number of extra "noise" entries to obfuscate the original URL further.

On balance, we believe that most users will want to keep Safe Browsing enabled, but we also make it easy for users with particular needs to turn it off.

Learn More

If you want to learn more about how Safe Browsing works in Firefox, you can find all of the technical details on the Safe Browsing and Application Reputation pages of the Mozilla wiki or you can ask questions on our mailing list.

Google provides some interesting statistics about what their systems detect in their transparency report and offers a tool to find out why a particular page has been blocked. Some information on how phishing sites are detected is also available on the Google Security blog, but for more detailed information about all parts of the Safe Browsing system, see the following papers:

http://feeding.cloud.geek.nz/posts/how-safe-browsing-works-in-firefox/


Mozilla Addons Blog: April 2016 Featured Add-ons

Friday, April 1, 2016, 07:36

Pick of the Month: WhatsApp™ Desktop

by Elen Norphen
Enjoy easy access to WhatsApp right from your browser, including incoming message notifications.

"Ahhhh sweet extension!"

Featured: Video Downloader Prime

by Mark
Simple download process for most popular video formats.

"Very easy to work with. It captures video very fast!"

Featured: Google Translate Anywhere

by Jeremy Schomery
Provides a floating multilingual panel for any word or phrase you highlight.

"Everything I want in a dictionary extension: fast pop-up translations, a button for listening to pronunciations, and also definitions. Great job!"

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there's always an opportunity to participate. Stay tuned to this blog for the next call for applications. Here's further information on AMO's featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

https://blog.mozilla.org/addons/2016/03/31/april-2016-featured-add-ons/


Cameron Kaiser: Progress to TenFourFox 45: milestone 1

Friday, April 1, 2016, 03:44
TenFourFox 40 is a thing, but changesets only, available from SourceForge. It starts up, passes the JavaScript test suite, does browsery things and doesn't crash. Unfortunately 40 took longer than I had planned to get off the ground and I think we'll have to do a "38.9" to buy us one more cycle; in the meantime, I've decided to skip a step by jumping directly to 43 and cross my fingers that it works. If I find a serious regression, I'll have to decide whether I want to back up to 41 or 42. If not, the third milestone will be the first 45 beta.

Builders take note that MacPorts gcc 4.8 is now required; gcc 4.6 will no longer build the browser and even adding back the compatibility shim from 38 will probably not be sufficient (not only will it probably not compile, it won't link either due to required C++11 atomics support). However, I'd still like to get other compiler choices available since MacPorts is kind of a rickety platform base and always subject to some degree of breakage. If people try Sevan's pkgsrc gcc or Misty's Tigerbrew gcc, I'd like to hear your comments about how well they functioned and/or what was needed to get them working.

http://tenfourfox.blogspot.com/2016/03/progress-to-tenfourfox-45-milestone-1.html


James Long: RIP Over-Engineered Blog

Friday, April 1, 2016, 03:00

Many of you know that I like building my own dynamic blog engine. If these 3 posts are any indication, I majorly refactor it about every 8 months. At least for the last few years I used the same database layer: a simple redis instance that stores posts and metadata. Before my own engine, I used many other systems like Jekyll but I got fed up with them. (I’ve looked at many other static site generators, but I still get fed up with them.)

Static site generators are nice, but they don’t really reduce complexity. They just shift it to compile-time instead of run-time. And for certain use cases (mine in particular), implementing them at compile-time is doable but unnecessarily complex. If you think of your data as a graph, you need to generate every single view of this graph every single time, requiring several plugins and ad-hoc fixes. With a large number of posts, compiling becomes slow and live preview isn’t really live any more. Worst of all is when your generator breaks because of an Xcode update and now you can’t deploy your post.

I totally get why people like static site generators. For infrequent posting, it’s nice to throw it onto github pages. But running a cheap linode or digital ocean server is super easy, and if you’re willing to do that it’s actually much easier to run a server. I recommend getting some sysops experience anyway.

You know the number of times my site has legitimately crashed in 3-4 years? Once. And that's because nginx logs filled up the disk space, so it would have happened anyway if I were using a static site generator.

On the other side of the spectrum, you have services like Medium but I like owning my content.

All of this makes more sense in context. I had a vision for my blog. I wanted to build something where I could easily write rich, interactive programming tutorials. Something like Khan Academy but suited to specific concepts like React. I was never sure what that looked like, but I knew I needed to write my own code. I thought about writing a compiler that converted markdown into React components, and the ability to extend markdown syntax with custom React components for a specific tutorial. But that never happened.

Along the way I wrote several great interactive tutorials like Removing User Interface Complexity, or Why React is Awesome, Taming the Asynchronous Beast with CSP Channels in JavaScript, and Writing Your First Sweet.js Macro. These were huge for me, especially the React one because it had a noticeable impact on React adoption in the early days. It’s been great for my career and led to many opportunities for speaking and meeting people.

Those tutorials were hacked together though, and I always meant to build features into my blog to make writing them easier. But I never did that, and I’m not going to. In fact, each tutorial currently is its own isolated little app, which is actually nice.

That leads me to this past year. I haven't written any interactive tutorials, and my blog has become a complex beast that I only use to learn new libraries by integrating them. I've been completely frank that my blog is over-engineered, and if you look at one of my earlier refactorings, the technologies listed read like a list of bookmarks you'd find in my pinboard.in: react, js-csp (CSP channels), mori (persistent data structures), sweet.js (macros), etc. I can't help but laugh at how shamelessly I followed interesting tech.

It’s time to move on, though. These days I just want to push words, and the occasional interaction is just a

https://tracking.feedpress.it/link/9494/2973320


Jordan Lund: being productive when distributed teams get together

Friday, April 1, 2016, 00:32

I'm a Mozilla Release Engineer. Which means I am a strong advocate for remote teams. Our team is very distributed and I believe we are successful at it too.

Funnily enough, I think a big part of the distributed model includes getting remote people physically together once in a while. When you do, you create a special energy. Unfortunately that energy can sometimes be wasted.

Have you ever had a work week that goes something like the following:

You stumble your way to the office in a strange environment on day one. You arrive, find some familiar faces, hug, and then pull out your laptop. You think, 'okay, first things first, I'll clear my email, bugmail, irc backscroll, and maybe even that task I left hanging from the week before.'

At some point, someone speaks up and suggests you come up with a schedule for the week. A Google Doc is opened and shared, and after a few 'bikeshedding' moments, it's lunch time! A local to the area or an advanced Yelper in the group advertises a list of places to eat, and after the highest rated food truck wins your stomach's approval, you come back to the office and ... quickly check your email.

The above scenario plays out in a similar fashion for the remainder of the week. Granted, I exaggerate and some genuine ideas get discussed. Maybe even a successful side sprint happens. But I am willing to bet that you, too, have been to a team meet up like this.

So can it be done better? Well I was at one recently in Vancouver and this post will describe what I think made the difference.

forest firefighting

Prior to putting out burning trees at Mozilla, I put out burning trees as a Forest Firefighter in Canada. BC Forest Protection uses the Incident Command System (ICS). That framework enabled us to safely and effectively suppress wildfires. So how does it work and why am I bringing it up? Well, without this framework, firefighters would spend a lot of time on the fire line deciding where to camp, what to eat, what part of the fire to suppress first, and how to do it. But thanks to ICS, these decisions are already made and the firefighters can get back to doing work that feels productive!

You can imagine how team meet ups could benefit from such organization. With ICS, there are four high level branches: Logistics, Planning, Operations, and Finance & Administration. The last one doesn't really apply to our 'work week' scenario as we use Egencia prior to arriving and Expensify after leaving so it doesn't really affect productivity during the week. However, let's dive into the other three and discover how they correlate to team meet ups.

For each of these branches, someone should be nominated or assigned and complete the branch responsibilities.

Logistics

Ideally the Logistics lead should be someone who is local to the area or has been there before. This person is required to create an etherpad/Google Doc and:

  • propose a hotel near the office
    • describe the hotel
    • provide directions from the airport (map screenshots encouraged)
    • provide directions from the hotel to the office
  • propose restaurants for each day of the week
    • poll food restrictions within the team
    • reserve the restaurants in advance
  • work with the office Work Place Resource
    • book a room/space within the office
    • sign the team up for office lunches
    • get key cards/fobs assigned and ready to be handed out
  • send out an email with a link to the doc that contains all this information

Now you might be saying, "wait a second, I can do all those things myself and don't need to be hand-held." And while that is true, the benefit here is that you reduce the head space required of each individual and the time spent debating, and you get everyone doing the same thing at the same time. This might not sound very flexible or distributed but remember, that's the point; you're centralized and together for the week! You might also be thinking "I really enjoy choosing a hotel and restaurant." That's fine too, but I propose you coordinate with the Logistics assignee prior to the work week rather than spend quality work week time on these decisions.

Planning

Now that you have logistics sorted, it's time to do all the action planning. Traditionally we've had work weeks where we pre-plan high level goals we want to accomplish, but we don't actually fill out the schedule until Monday as a group. The downside here is that this can chew up a lot of time and you can easily get sidetracked before completing the schedule. So, like Logistics, assign someone to Planning.

This person is required to create a [insert issue tracker of choice] list and determine the bugs/issues that should be tackled during the week. How this is done of course depends on the issue tracker, the style of the group, and the type of team meet up, but here is an example we used for finishing a deliverable-related goal.

Write a list of issues for each of the following categories:

  • hard blockers
  • nice to haves
  • work in progress
  • done but needs to be verified
  • completed

For the above, we used Trello, which is nice as it's really just a board of sticky notes. I could write a whole separate blog post on how to be effective with it by incorporating Bugzilla links, card assignees, tags, sub-lists, and card voting, but for now, here is a visual example:

Trello Work Week Board

The beauty here is that all of the tasks (sticky notes) are done upfront and each team member simply plucks them off the 'hard blockers' and 'nice to have' lists one by one, assigns them to themselves, and moves them into the completed section.

No debating or brainstorming what to do, just sprint!
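
The pluck-and-complete flow described above can be sketched as a tiny data structure. This is only an illustration of the workflow, not real Trello or Bugzilla integration; all list names, task names, and assignees here are made up:

```python
# Minimal kanban-style board: cards start in a backlog list, get
# plucked and assigned, and end up in "completed". Illustrative only.
class Board:
    def __init__(self):
        self.lists = {
            "hard blockers": [],
            "nice to haves": [],
            "work in progress": [],
            "needs verification": [],
            "completed": [],
        }

    def add(self, list_name, task):
        self.lists[list_name].append({"task": task, "assignee": None})

    def pluck(self, list_name, assignee):
        """Take the next card off a list, assign it, move it to WIP."""
        card = self.lists[list_name].pop(0)
        card["assignee"] = assignee
        self.lists["work in progress"].append(card)
        return card

    def complete(self, card):
        self.lists["work in progress"].remove(card)
        self.lists["completed"].append(card)

board = Board()
board.add("hard blockers", "fix l10n repacks")
card = board.pluck("hard blockers", "jlund")
board.complete(card)
```

Because the lists are prepared upfront, "what do I do next?" is always answered by the head of the 'hard blockers' list.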

Operations

The Operations assignee here should:

  • be a proxy to the outside world
  • be a moderator internally

If you want to take advantage of a successful physical team meetup, forget about the communication tools that are designed for distributed environments.

During the work week I think it is best to ignore email, bug mail, and IRC. Treat the week like you are on PTO: change your Bugzilla name and create a vacation responder. Have the Operations assignee respond to urgent requests and act as a proxy to the outside world.

It is also nice to have the Operations assignee moderate internally by constantly iterating over the Trello board state, grasping what needs to be done, where you are falling behind, and what new things have come up.

Vancouver by accident

This model wasn't planned or agreed upon prior to the Vancouver team meetup. It actually just happened by accident. I (jlund) took on Logistics, rail handled Planning, and catlee acted in that sort of moderator/proxy role in Operations. Everyone at the meet up finished the week satisfied and, I think, hungry to try it again.

I'm looking forward to using a framework like this in the future. What are your thoughts?

http://jordan-lund.ghost.io/how-to-make-the-most-of-getting-remotees-together/


About:Community: Participation Lab Notes: The Sweet Spot Between Constraint and Freedom

Friday, 1 April 2016, 00:18

Over the past few months you might have heard about a growing community called the Community Design Group. What sets this group apart from many of the other communities at Mozilla is that it does not exist in a region, club, or wiki page. It has only the loosest “membership” and you can come, participate, and leave without ever sharing your name.

The Community Design Group is a sort of transactional marketplace that lives primarily on GitHub. Requests for design can be made by contributors and staff; likewise, anyone can tackle or contribute to any design problem that emerges in the repo.

As Elio Qoshi, the community-side driver of the Community Design Group writes in his blog post:

This allows for a quick contribution path for new contributors in a decentralized manner.  Different labels determine what kind of context issues have, neatly sorted in UX, IX, UI and general Graphic Design, to allow contributors from various backgrounds to jump in.

Since its inception in mid-January, 25 design problems have been closed, ranging from full websites to logos and style guides. In the past 30 days alone the repo has received over 2,000 views and 1,000 unique visitors, and it exists and flourishes with only minimal staff and community-leader involvement.

MozillaSecurityLogo

CC-BY-SA Elio Qoshi

Despite this early success there is still a great deal to be learned about this model. One of the early realizations about the Community Design Group was the tendency of the community to focus on the creation of logos. These fun and creative projects are excellent opportunities for designers to show their skills and learn from each other, but could potentially dilute the already somewhat crowded logo-sphere of Mozilla.

CC-BY-SA Jenn Sanford

Rather than put any stop-energy into the community, we are experimenting with sharing more high-impact opportunities for contribution, initially in the form of a challenge related to the re-branding of Mozilla. In this challenge we’re asking the design community to suggest a new iconic symbol for Mozilla that will shape the thinking of the Creative Team.

In contrast to the well-defined, quick victory of logos, this challenge is ill-defined and complex.
This challenge will close at the end of April and we will have the opportunity to see not only what emerges from the creative pool, but how this work is integrated into the work of the staff team.

CC-BY-SA Sashoto Seeam

The basis of the Community Design Group project was to see what would come of a simple place with minimal rules, where a decentralized community could come to share and create. It is a testament to the power of minimal restriction and functional tools. Over the next quarter I look forward to seeing how far it will grow, and what amazing contributions will emerge from this new community.

http://blog.mozilla.org/community/2016/03/31/participation-lab-notes-the-sweet-spot-between-constraint-and-freedom/


Air Mozilla: Kyle Drake on the Distributed Web

Thursday, 31 March 2016, 22:00

Kyle Drake on the Distributed Web The web has unified the entire world into a single medium of communication, standardizing both how we distribute and present information. This dropped the cost...

https://air.mozilla.org/kyle-drake-the-distributed-web/


Air Mozilla: Web QA Weekly Meeting, 31 Mar 2016

Thursday, 31 March 2016, 19:00

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

https://air.mozilla.org/web-qa-weekly-meeting-20160331/


Air Mozilla: Reps weekly, 31 Mar 2016

Thursday, 31 March 2016, 19:00

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

https://air.mozilla.org/reps-weekly-20160331/


Gervase Markham: Happy Birthday Mozilla!

Thursday, 31 March 2016, 18:49

Mozilla is 18 today. I’ll drink to that! :-)[0]


[0] This reference may not work in your jurisdiction.

http://feedproxy.google.com/~r/HackingForChrist/~3/RQR-gOxF2EU/


Henrik Skupin: Review of Firefox desktop automation work – Q1 2016

Thursday, 31 March 2016, 17:10

Today is the last day of Q1 2016, which means it's time to review what I have done during all those last weeks. When I checked my status reports it turned out to be quite a lot, so I will shorten it a bit and only talk about the really important changes.

Build System / Mozharness

After digging into mozharness last quarter to get support for Firefox UI Tests, I saw that more work had to be done to fully support tests which utilize Nightly or Release builds of Firefox.

The most challenging work for me (because I had never done a build system patch before) was indeed prefixing the test_packages.json file which gets uploaded next to any nightly build to archive.mozilla.org. This work was necessary because without the prefix the file was always overwritten by later build uploads, meaning that when trying to get the test archives for OS X and Linux, the Windows ones were always returned. Due to binary incompatibilities between those platforms this caused complete bustage. No one noticed until now because every other test suite runs on a check-in basis and doesn't have to rely on the nightly build folders on archive.mozilla.org. For Taskcluster this wasn't a problem.
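
To make the overwrite problem concrete, here is a sketch of how a build-specific prefix keeps the uploads distinct. The naming scheme and platform strings are invented for illustration; the actual file names used on archive.mozilla.org may differ:

```python
# Without a prefix every platform uploads the same "test_packages.json"
# name and the last upload wins; a build-specific prefix gives each
# platform its own file. Names below are illustrative, not the real scheme.
def test_packages_name(build_prefix=None):
    base = "test_packages.json"
    if build_prefix is None:
        return base  # old scheme: later uploads overwrite earlier ones
    return "%s.%s" % (build_prefix, base)

names = {test_packages_name("firefox-48.0a1.en-US.%s" % platform)
         for platform in ("linux-x86_64", "mac", "win32")}
# Three platforms now produce three distinct file names.
```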

Regarding firefox-ui-tests, I was finally able to get a test task added to Taskcluster which executes our firefox-ui-tests for each check-in, in both e10s and non-e10s mode. Due to current Taskcluster limitations this only runs for Linux64 debug, but that already helps a lot and I hope that we can increase platform coverage soon. If you are interested in the results you can have a look at Treeherder.

Other Mozharness specific changes are the following ones:

  • Fix to always copy the log files to the upload folder even in case of early aborts, e.g. failed downloads (bug 1230150)
  • Refactoring of download_unzip() method to allow support of ZipFile and TarFile instead of external commands (bug 1237706)
  • Removing the hard requirement for the --symbols-url parameter to let mozcrash analyze crashes. This was possible because the minidump_stackwalk binary can automatically detect the appropriate symbols for nightly and release builds (bug 1243684)
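
The download_unzip() refactoring mentioned above boils down to using Python's own archive modules instead of shelling out to external commands. The following is a simplified sketch of that idea, not the actual mozharness implementation; the name extract_archive is invented:

```python
import os
import tarfile
import zipfile

def extract_archive(path, dest):
    """Extract a .zip or .tar(.gz/.bz2) archive using stdlib modules
    instead of external unzip/tar commands."""
    os.makedirs(dest, exist_ok=True)
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as archive:
            archive.extractall(dest)
    elif tarfile.is_tarfile(path):
        with tarfile.open(path) as archive:
            archive.extractall(dest)
    else:
        raise ValueError("unsupported archive format: %s" % path)
```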

Firefox UI Tests

The biggest change for us this quarter was the move of the Firefox UI tests from our external GitHub repository to mozilla-central. It means that our test code, including the harness and Firefox Puppeteer, is in sync with changes to Firefox now, and regressions caused by UI changes should be rare. And with the Taskcluster task mentioned above it's even easier to spot those regressions on mozilla-inbound.

The move itself was easy, but keeping backward compatibility with mozmill-ci and other Firefox branches down to mozilla-esr38 was a lot of work. To achieve that I first had to convert all three different modules (harness, puppeteer, tests) to individual Python packages. Those got landed for Firefox 46.0 on mozilla-central and then backported to Firefox 45.0, which also became our new ESR release. Due to backport complexity for older branches I decided not to land packages for Firefox 44.0, 43.0, and 38ESR. Instead those branches got smaller updates for the harness so that they had full support for our latest mozharness script on mozilla-central. Yes, in case you're wondering: all branches used mozharness from mozilla-central at that time. It was easier to do, and I finally switched to branch-specific mozharness scripts later in mozmill-ci once Firefox 45.0 and its ESR release were out.

Adding mach support for Firefox UI Tests on mozilla-central was the next step to assist in running our tests. Required arguments from before are now magically selected by mach, and that allowed me to remove the firefox-ui-test dependency on firefox_harness, which was always a thorn in our side. As a final result I was even able to completely remove the firefox-ui-test package, so that we are now free to move our tests to any place in the tree!

In case you want to know more about our tests please check out our new documentation on MDN which can be found here:

https://developer.mozilla.org/en-US/docs/Mozilla/QA/firefox-ui-tests

Mozmill CI

Lots of changes have been made to this project to adapt the Jenkins jobs to all the Firefox UI Tests modifications. In particular, I needed a generic solution that works for all existing Firefox versions. The first real task was to no longer use the firefox-ui-tests GitHub repository to grab the tests from, but instead let mozharness download the appropriate test package as produced and uploaded with builds to archive.mozilla.org.

Everything worked immediately for en-US builds, given that the location of the test_packages.json file is distributed along with the Mozilla Pulse build notification. But that's not the case for l10n builds and funsize update notifications. For those we have to use mozdownload to fetch the correct URL based on the version, platform, and build id. A special situation came up for update tests, which actually use two different Firefox builds: if we get the tests for the pre-update build, how can we magically switch to the tests for the target version? Given that there is no easy way, I decided to always use the tests from the target version; in case of UI changes we have to keep backward-compatibility code in our tests and Firefox Puppeteer. This is probably the best solution for us.

Another issue I had to solve with test packages concerned release candidate builds. For those builds Release Engineering neither creates nor uploads any test archive, so a connection had to be made between candidate builds and CI (tinderbox) builds. As it turned out, the two properties which helped here are the revision and the branch. With them I at least know the changeset of the mozilla-beta, mozilla-release, and mozilla-esr* branches used to trigger the release build process. But sadly that's only a tag; no builds or tests get created for it, so something more was necessary. After some investigation I found that Treeherder and its REST API can help: starting from the known tag and walking back the parents until Treeherder reports a successful build for the given platform allowed me to retrieve the next usable revision, which mozdownload then uses to retrieve the test_packages.json URL. I know it's not perfect, but it satisfies us for now.
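
The parent-walking idea can be sketched as follows. In the real setup the two callbacks would query the Treeherder REST API; here they are injected as plain functions so the traversal logic stands on its own (the function name, the toy history, and the revision strings are all made up):

```python
def find_buildable_revision(start_rev, parent_of, has_successful_build,
                            max_depth=50):
    """Walk back from a tag's revision until one with a successful
    build for the wanted platform is found."""
    rev = start_rev
    for _ in range(max_depth):
        if has_successful_build(rev):
            return rev
        rev = parent_of(rev)
    raise LookupError("no buildable revision within %d parents" % max_depth)

# Toy history: c -> b -> a, where only "a" has a finished build.
parents = {"c": "b", "b": "a"}
rev = find_buildable_revision(
    "c",
    parent_of=lambda r: parents[r],
    has_successful_build=lambda r: r == "a",
)
```

The max_depth bound keeps the walk from running forever when a branch has no successful build at all.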

Then the release promotion project, worked on by the Release Engineering team, was close to being activated. I heard only a couple of days in advance that Firefox 46.0b1 would be the first candidate to test it on, which gave me basically no time for testing at all. Thanks to all the support from Rail Aliiev I was able to get the new Mozilla Pulse listener created to handle the appropriate release promotion build notifications. Given that with release promotion we create the candidates based on a signed-off CI build, we already have a valid revision to be used with mozdownload to retrieve the test_packages.json file – so no need for the above-mentioned Treeherder traversal code. \o/ Once everything was implemented, Firefox 46.0b3 was the first beta release for which we were able to process the release promotion notifications.

Around the same time as the release promotion news, Robert Kaiser informed me that the on-demand update jobs as performed with Mozmill no longer work. As it turned out, a change in the JS engine caused the bustage for Firefox 46.0b1. Given that Mozmill is dead, I was not going to update it again. Instead I converted the on-demand update jobs to make use of Firefox UI Tests. This went pretty well, also because we had already been running those tests for a while on mozilla-central and mozilla-aurora for nightly builds. As a result we were able to run update jobs a day later for Firefox 46.0b1 and noticed that nearly all locales on Windows were busted, so only en-US finally shipped. I'm not sure that would have been as visible with Mozmill.

Last but not least I also removed the workaround which let all test jobs use the mozharness script from mozilla-central. It’s simply not necessary anymore given that all required features in mozharness are part of ESR45 now.

What’s next

I already have plans what’s next. But given that I will be away from work for a full month now, I will have to revisit those once I’m back in May. I promise that I will also blog about them around that time.

http://www.hskupin.info/2016/03/31/review-of-firefox-desktop-automation-work-q1-2016/


Support.Mozilla.Org: What’s up with SUMO – 31st March

Thursday, 31 March 2016, 16:15

Hello, SUMO Nation!

March is gone, and we had a little break in posting last week, due to travelling and training. Fear not, we’re back, and there’s at least one more post coming your way this week. Here are all the most recent news and updates from the world of SUMO…

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • …is happening on WEDNESDAY the 6th of April – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Developers

Community

  • Nominations for the London Work Week are locked down and will be sent out soon – the Participation team will contact everyone in early April – in the meantime, take a look at this wiki page for more details.
  • Ongoing reminder: if you think you can benefit from getting a second-hand device to help you with contributing to SUMO, you know where to find us.

Social

Support Forum

  • No new news – we’re still waiting for the plugin check page updates… Bear with us (like a grizzly would)!

Knowledge Base & L10n

Firefox

Now, remember to play nice and don’t make too many crazy jokes on the 1st of April… And don’t believe (almost) anything you see online that day! The next 24 hours belong to imagination and pranksters. Good luck to us all & see you on the definitely more serious end of April 1st ;-)

https://blog.mozilla.org/sumo/2016/03/31/whats-up-with-sumo-31st-march/


Air Mozilla: Connected Devices Meetup: Raspberry Pi with Node.js presented by Bryan Hughes

Thursday, 31 March 2016, 04:30

Connected Devices Meetup: Raspberry Pi with Node.js, presented by Bryan Hughes in the San Francisco commons.

https://air.mozilla.org/connected-devices-meetup-raspberry-pi-with-node-js-presented-by-bryan-hughes/


