Created: 19.06.2007
Written: 7

Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/


You can add any RSS source (including a LiveJournal blog) to your friends feed on the syndication page.

Source: http://planet.mozilla.org/.
This journal is generated from the public RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that feed is updated. It may not match the content of the original page. The syndication was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact page.


Anthony Hughes: 9 Years

Tuesday, May 10, 2016, 01:24

Today I pause to reflect.

9 years ago last Saturday was my first day at Mozilla as an intern. It was also the first time I set foot in Silicon Valley, the first time I set foot in California, even the first time I set foot on the west coast of North America. It was a day of many emotions: anticipation, excitement, fear, and pride.

Taking big risks is not something that comes easy to me. Deciding to hop on a plane and travel to California not knowing what my future held. Deciding only a few months earlier to take a risk with my career and venture into open source software. Deciding just two years before to risk leaving a stable career in the Canadian military to earn a software development degree at Seneca College. These are things that took a lot of personal courage, and I suspect much courage for my parents as well.

Mozilla has changed a lot since that early Spring day in 2007 but so have I. Through all the things I’ve experienced. The ups and downs. The re-orgs and pivots. The flamewars. The burnout. The one thing that hasn’t changed is the people of Mozilla — they are my rock.

They provided guidance and mentorship when I needed it most. They helped me learn from my mistakes, celebrate my victories, and saved me from burning out on more than one occasion. In short, I would not be here today without them.

Some of my best personal relationships are those I’ve developed at Mozilla. I hold the highest of respect for these people and while many have moved on I still think of them frequently. I strive to be what they were to me: a mentor in work and a mentor in life.

If you’re reading this and we’ve interacted at all in the past, I write this for you. Every experience we’ve shared, every discussion we’ve had, it changes us. This I believe fundamentally.

When people ask me why I work for Mozilla (and why I’ll continue to for the foreseeable future), the answer is simple: the people. To me they are like family, nothing else matters.

 

https://ashughes.com/?p=335


Air Mozilla: Mozilla Weekly Project Meeting, 09 May 2016

Monday, May 9, 2016, 21:00

Mozilla Addons Blog: Results of the WebExtensions API Survey

Monday, May 9, 2016, 20:13

In March, we released a survey asking add-on developers which APIs they need to transition successfully to WebExtensions. So far, 235 people have responded, and we’ve summarized some of the findings in these slides.

Developers with the most add-ons responded at a disproportionate rate. Those with 7 or more add-ons represent 2% of the add-on developer community, and those with 4-6 add-ons represent 3%, but they comprised 36.2% of survey respondents. This didn’t come as a surprise, since the most active developers are also the most engaged and have the most to gain by migrating to WebExtensions.

How many add-ons have you worked on?

Nearly half of respondents have tried implementing their add-ons in Chrome, and the most cited limitation is that it’s restrictive. Developers could not do much with the UI other than add buttons or content tabs. We intend to offer APIs that give developers more freedom to customize their add-ons, and these results tell us we’re on the right track.

In the coming months, we’ll draw on these results to inform our decisions and priorities, to ensure WebExtensions lives up to its promise…and goes beyond.

https://blog.mozilla.org/addons/2016/05/09/results-of-the-webextensions-api-survey/


Chris Cooper: RelEng & RelOps Weekly highlights - May 9, 2016

Monday, May 9, 2016, 20:12

It has begun!!!
The interns are coming! The interns are coming! It’s too late, they’re already here!

Yes, intern season has begun. Releng welcomes Francis Kang and Connor Sheehan for the summer. They will be working on the long tail of release promotion tasks to start. Kim and Rail will be mentoring them. Our other (returning) intern, Anthony Miyaguchi, joins us next month.

In other news…

Modernize infrastructure:

A few weeks ago, Mozilla announced that in August it will be dropping support for older versions of Mac OS X, specifically versions 10.6 - 10.8. Alin landed code this week that disables 10.6 testing for Firefox 49. Turning off entire platforms is not something we do lightly (or frequently), but I will admit, it does make releng life easier when we do. Eventually.

Mozilla continues to discuss the future of XP support. Many more users would be affected than with OS X, but since the OS itself is no longer supported by Microsoft, there is only so much Mozilla can do to provide a secure browser on an inherently insecure platform. It’s also a huge burden on developers to make new features work (or provide an alternative) on an aging/ancient platform. Lots of factors to consider here to find balance.

Improve CI Pipeline:

Aki released version 0.1.0 of scriptworker. Scriptworker is an async python TaskCluster worker, designed for specific Release Engineering needs such as signing and interacting with our update servers (Balrog).

Operational:

Vlad added five new Mac test masters to spread the load for existing machines as well as providing capacity for the new Mac test machines that will soon be installed in our data centre. We’ve had very high pending counts for Mac tests recently, so having more machines on which to run those tests, as well as more masters to defray that load, should help alleviate the chronic backlog. (https://bugzil.la/1264417)

Release:

Shipped Thunderbird 45.1b1, Fennec 47.0b2, Firefox 47.0b2, Firefox 47.0b3, Firefox 46.0.1, Fennec 46.0.1, and Firefox 45.1.1esr. Check out the release notes for more details:

See you next week!

http://coopcoopbware.tumblr.com/post/144103600905


Giorgos Logiotatidis: Static site hosting on Dokku and Deis

Monday, May 9, 2016, 15:00

Deis and Dokku, the open-source Heroku-like PaaSes, can be used for hosting static sites too. Since they both support Dockerfile-based deployments, all we need is a Docker image with Nginx.

I created giorgos/dokku-static-site which uses the ONBUILD instruction. To use it, create a Dockerfile at the root directory of your static site with only one line:

FROM giorgos/dokku-static-site

and then place all your files under the html directory. If moving your website files to another directory isn't in your plans, you can alternatively create a symbolic link:

ln -s . html

Then push to your Deis / Dokku server as usual and it will download the image and do the rest.

git push dokku master
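Putting the steps together, here is a minimal sketch of the whole setup using the symlink variant (the remote name `dokku` and the commit message are assumptions about your setup):

```shell
# Run from the root of your static site checkout.

# The one-line Dockerfile described above:
printf 'FROM giorgos/dokku-static-site\n' > Dockerfile

# Keep the files where they are and expose them via the html/ symlink:
ln -s . html

# Commit and deploy as usual (remote name assumed to be "dokku"):
# git add Dockerfile html && git commit -m "Serve as static site"
# git push dokku master
```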

The image works nicely with Dokku's letsencrypt plugin too; no need to worry about ports or custom Nginx configurations.

giorgos/dokku-static-site is based on debian:stable, and a cron job triggers a rebuild every hour so the image always includes the latest security patches.

You can find more information about Deis Dockerfile Deployments and Dokku Dockerfile Deployments in each project's documentation.

https://giorgos.sealabs.net/static-site-hosting-on-dokku-and-deis.html


QMO: Firefox 47 Beta 3 Testday Results

Monday, May 9, 2016, 12:21

Hey everyone!

Last Friday, May 6th, we held Firefox 47 beta 3 Testday, and, of course, it was another outstanding event!

A big THANK YOU goes out to Comorasu Cristian-Iulian, Luna Jernberg, Vuyisile Ndlovu, Iryna Thompson, Moin Shaikh, Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Hossain Al Ikram, Azmina Akter Papeya, Saddam Hossain, Majedul islam, Tarikul Islam Oashi, Jobayer Ahmed Mickey, Kazi Nuzhat Tasnem, Syed Nayem Roman, Sayed Ibn Masud, Tanvir Rahman, Tazin Ahmed, Md Rakibul Islam, Mohammad Maruf Islam, Almas Hossain, Maruf Rahman, Sajedul Islam, Forhad Hossain, Md. Raihan Ali, Wahiduzzaman Hridoy, Mahfuza Humayra Mohona, Fahim, Asif Mahmud Shuvo, Mohammed Jawad Ibne Ishaque, Zayed News, Md. Rahimul islam and Akash.

Also, thanks to all our active moderators too!

Results:

  • some potential issues (currently under investigation) were noticed while testing the Synced Tabs Sidebar, and none for the Youtube Embedded Rewrite feature.
  • 2 bugs were verified: 1227477 and 1240729

I strongly advise every one of you to reach out to us, the moderators, via #qa during events whenever you encounter any kind of failure. Keep up the great work!

And keep an eye on QMO for upcoming events! \o/

https://quality.mozilla.org/2016/05/firefox-47-beta-3-testday-results/


Gervase Markham: Eurovision Bingo (again)

Monday, May 9, 2016, 11:22

Some people say that all Eurovision songs are the same. (And some say all blog posts on this topic are the same…) That’s probably not quite true, but there is perhaps a hint of truth in the suggestion that some themes tend to recur from year to year. Hence, I thought, Eurovision Bingo.

I wrote some code to analyse a directory full of lyrics, normally those from the previous year of the competition, and work out the frequency of occurrence of each word. It will then generate Bingo cards, with sets of words of different levels of commonness. You can then use them to play Bingo while watching this year’s competition (which is on Saturday).
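The author's actual analysis code is in the GitHub repo linked below; as a rough sketch of the idea, a short shell pipeline can compute word frequencies across a directory of lyrics (the `lyrics/` directory and sample file here are made up for illustration):

```shell
# Create a toy lyrics directory so the pipeline has input to work on:
mkdir -p lyrics
printf 'Love love love\nshine tonight love\n' > lyrics/sample.txt

# Lowercase everything, split on non-letters, then count occurrences
# and sort with the most frequent words first:
cat lyrics/*.txt \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs '[:alpha:]' '\n' \
  | sort | uniq -c | sort -rn | head
```

Bucketing the resulting list by frequency then gives the "levels of commonness" from which the Bingo cards can be drawn.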

There’s a Github repo, or if you want to go straight to pre-generated cards for this year, they are here.

Here’s a sample card from the 2014 lyrics:

fell cause rising gonna rain
world believe dancing hold once
every mean LOVE something chance
hey show or passed say
because light hard home heart

Have fun :-)

http://feedproxy.google.com/~r/HackingForChrist/~3/4WHT5D0IWig/


This Week In Rust: This Week in Rust 129

Monday, May 9, 2016, 07:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is semantic-rs, which lets us update our project from the command line, ensuring semver compliance along the way. Thanks to Florian Gilcher for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

92 pull requests were merged in the last two weeks.

New Contributors

  • Brandon Edens
  • Garrett Squire
  • jonathandturner
  • Nerijus Arlauskas
  • Philipp Matthias Schaefer
  • Stephen Mather
  • Taylor Cramer
  • Wang Xuerui

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

https://this-week-in-rust.org/blog/2016/05/09/this-week-in-rust-129/


John O'Duinn: HOWTO fix “scanner warmup” error on HP 3330 printer

Monday, May 9, 2016, 05:00

My trusty HP 3330 printer stopped working recently with a “scanner warmup” error shown on the display. Prior to that, it literally worked flawlessly for years, so I was reluctant to simply go buy a replacement printer. Once I figured out these steps, repairing the printer took me under 10 minutes, end-to-end, using only very simple low-tech tools: a pair of needlenose pliers, a Philips screwdriver, and a cotton bud.

Here’s the steps I followed:

Unplug the printer. Yes, you do have to follow basic safety procedures, even though there will be no exposed electric wires to deal with. You don’t want to accidentally have the motor start moving parts around while you are trying to work on them. Plus you need to make sure the power is completely off at the bulb, so the mirrors have cooled off completely before you get to them in later steps.

Open the lid. While the lid is as vertical as possible, grip and pull up the two black tabs (in red circles). They should each come up about 1/2 inch. Once these are both up, you should be able to grab the lid, pull it up, and remove it. Set it aside.

Use a Philips screwdriver to remove this one screw. Once the screw is removed, gently lift up at the side where the screw was. As you do, you’ll notice that the opposite side hinges up on the side nearest the main glass plate. After you lift up about an inch or so, you’ll notice you can gently lift/disconnect this small glass panel. Set aside.

Notice that you can now reach a narrow rubber belt that is used to move the mirror assembly back and forth under the main glass platter. Use the needlenose pliers to grip the nearside (circled) part of the belt and pull it towards the pulley wheel. Be very gentle here, as this rubber belt is fragile. You’ll only be able to pull it about an inch or so before you run out of space, need to release the pliers grip, reposition the pliers, grip and pull the rubber belt another inch. Each time you do this, you pull the mirror assembly a little bit closer to the opening. Do another pull. And another. And another. Keep repeating until the mirror assembly is fully accessible through the opening.

Use the cotton bud to gently clean the full length of each mirror surface. It doesn’t take much effort, and the mirrors are fragile, so be gentle. Simply place the cotton bud at one end of the mirror, and slide along the mirror to the other end. Think of it more like dusting fragile crystal glasses. There are three mirrors in all, and you should do all three while you are at this.

Now that you are done, you need to reassemble everything. There is no need to use the pliers on the belt to move the mirrors back to the original starting position; the mirror assembly will automatically go to the right “home” position once the printer is turned back on. Simply replace the glass plate, screw it back down, and re-attach the lid.

Power the printer back on, and hopefully now it works!

John.
ps: here’s a more detailed HP 3330 printer disassembly video for the brave: https://www.youtube.com/watch?v=lqG_nmi3vC4

http://oduinn.com/blog/2016/05/08/howto-fix-hp-3330-printer/


Karl Dubost: [worklog] Golden Week working at half speed.

Monday, May 9, 2016, 04:28

Golden Week is the highlight of Japanese holidays (1 week), which is always a bit silly for me (French origin). No matter how long I have lived in North America or Japan, I still think something is wrong with 1 week or even 10 days of holidays (compared to the current 5 weeks in France). Tune of the week: Izumi Yukimura: Ain't Cha-Cha Coming Out T-tonight.

Webcompat Life

Progress this week:

Today: 2016-05-09T10:15:31.053288
366 open issues
----------------------
needsinfo       4
needsdiagnosis  109
needscontact    32
contactready    87
sitewait        130
----------------------

You are welcome to participate

London agenda.

Webcompat issues

(a selection of some of the bugs worked on this week).

Webcompat development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: rounding numbers in CSS for width
  • ToWrite: Amazon prefetching resources for Firefox only.

Otsukare!

http://www.otsukare.info/2016/05/06/worklog-goldenweek


The Servo Blog: This Week In Servo 62

Monday, May 9, 2016, 03:30

In the last week, we landed 133 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions

  • ms2ger added support for using Gecko’s string atoms in rust-selectors, which addresses a major performance issue for reusing Servo’s style system in Firefox
  • autrilla changed our build driver script, mach, to display more information when there is a virtualenv or python failure
  • larsberg increased the number of parallel processes in use on our Mac builders which, in conjunction with some other system tuning, reduces the OSX builder times to around 30 minutes
  • mbrubeck fixed a case where layout could cause more nodes to be laid out than were necessary
  • glennw switched the Android build to officially use OpenGL ES3 now that the corresponding WebRender support landed
  • heycam added support for text-transform in geckolib
  • mmatyas upgraded Cargo, to pick up much better support for per-target build configurations
  • dati updated the WebBluetooth implementation significantly to pick up spec changes
  • jdm implemented basic infrastructure
  • aneeshusa made the Vagrant support for testing our CI builders more full-featured
  • connorgbrewster corrected some mistakes that prevented logging into sites like Twitter and Github
  • pcwalton rewrote significant parts of float-related layout to be more correct
  • nox upgraded SpiderMonkey to use version 48
  • jdm implemented support for controlling the visibility of content-exposed APIs via preferences

New Contributors

Get Involved

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

Screenshot of Firefox browsing wikipedia using Servo’s Stylo style system implementation: (screenshot)

Meetings and Mailing List

https://blog.servo.org/2016/05/09/twis-62/


The Rust Programming Language Blog: Launching the 2016 State of Rust Survey

Monday, May 9, 2016, 03:00

Rust’s first birthday is upon us (on May 15th, 2016), and we want to take this opportunity to reflect on where we’ve been, and where we’re going. The Rust core team plans a post next week giving some of their perspective on the year. But we are also eager to hear from you.

Thus, as part of the celebrations, the community team is pleased to announce the official 2016 State of Rust Survey! Whether or not you use Rust today, we want to know your opinions. Your responses will help the project understand its strengths and weaknesses, and to establish development priorities for the future.

Completing this survey should take about 5 to 10 minutes, and is anonymous unless you choose to give us your contact information. We will be accepting submissions until June 8th, 2016. If you have any questions, please feel free to email the Rust Community team at community-team@rust-lang.org.

Please help us spread the word by sharing the above link on your social network feeds, at meetups, around your office and in other communities.

Thanks to everyone who helped develop, polish, and test the survey! Once it closes, we will summarize and visualize the results here on http://blog.rust-lang.org/.

Happy birthday, Rust. Have another great year.

http://blog.rust-lang.org/2016/05/09/survey.html


Armen Zambrano: Installing Vidyo on Ubuntu 16.04 LTS (dependency libqt4-gui unmet)

Friday, May 6, 2016, 19:21
I've recently upgraded to Ubuntu 16.04 LTS from Ubuntu 14.04 LTS.
The only package I've noticed to be missing is vidyodesktop.

In order to install it, I tried going to v.mozilla.com and opening the .deb file.
Unfortunately, it would not install.
In order to determine why it was failing, I ran this:
sudo dpkg -i VidyoDesktopInstaller-ubuntu64-TAG_VD_3_3_0_027.deb 
Unfortunately, it said that libqt4-gui needed to be installed; however, the package is not available for this version of Ubuntu.
In order to fix it, rail told me that we had to install a dummy package to fool Vidyo. This can be accomplished with the equivs package.

  • Install equivs package
sudo apt-get install equivs
  • Generate a control file
equivs-control libqt4-gui
  • Edit libqt4-gui and tweak the following variables: Package, Version, Description. Example file:
### Commented entries have reasonable defaults.
### Uncomment to edit them.
# Source:
Section: misc
Priority: optional
# Homepage:
Standards-Version: 3.9.2
Package: libqt4-gui
Version: 4.8.1
# Maintainer: Your Name
# Pre-Depends:
# Depends:
# Recommends:
# Suggests:
# Provides:
# Replaces:
# Architecture: all
# Multi-Arch:
# Copyright:
# Changelog:
# Readme:
# Extra-Files:
# Files:
#
Description: fake package to please vidyo
long description and info. 
 second paragraph 
  • Build the deb
equivs-build libqt4-gui
  • Install the deb
sudo dpkg -i libqt4-gui_4.8.1_all.deb


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/O8yFIRjmPnw/installing-vidyo-on-ubuntu-1606-lts.html


Support.Mozilla.Org: What’s Up with SUMO – 5th May

Thursday, May 5, 2016, 21:20

Hello, SUMO Nation!

Did you have a good post-post release week? We sure did :-) Can you still remember Firefox 1.0? We’re getting to 50.0, soon! I wonder if there will be cake… Mmm, cake.

…anyway, here are the latest and greatest updates from the world of SUMO!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting

  • …is happening on WEDNESDAY the 11th of May – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Social

Support Forum

Firefox

  • for iOS
    • The latest version updates can be found here.
      • Firefox for iOS 4.0 should be going into review by Apple today and launching on May 10th (depends on a lot of factors)
    • Firefox for iOS 5.0 scheduled for approximately 6 weeks after 4.0 hits the interwebs!

…and that’s it for now. Keep rocking the helpful web, while spring rolls over our heads and hearts – and don’t forget to share your favourite music with us!

https://blog.mozilla.org/sumo/2016/05/05/whats-up-with-sumo-5th-may/


Air Mozilla: Web QA Team Meeting, 05 May 2016

Thursday, May 5, 2016, 19:00

Weekly Web QA team meeting - we'll share updates on what we're working on, need help with, are excited by, and perhaps a demo of...

https://air.mozilla.org/web-qa-team-meeting-20160505/


Air Mozilla: Reps weekly, 05 May 2016

Thursday, May 5, 2016, 19:00

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

https://air.mozilla.org/reps-weekly-20160505/


Michael Kohler: Alpha Review – Using Janitor to contribute to Firefox

Thursday, May 5, 2016, 16:02

At the Firefox Hackathon in Zurich we used The Janitor to contribute to Firefox. It’s important to note that it’s still in alpha and invite-only.


The Janitor was started by Jan Keromnes, a Mozilla employee. While still in an alpha state, Jan gave us access to it so we could test run it at our hackathon. Many thanks to him for spending his Saturday on IRC and helping us out with everything!

Once you’re signed up, you can click on “Open in Cloud9” and get directly to the Cloud9 editor; Cloud9 kindly sponsors the premium accounts for this project. Cloud9 is a pure-web IDE based on real Linux environments, with an insanely fast editor.


At the hackathon we ran into a Cloud9 “create workspace” limitation, but according to Jan this should be fixed now.

Setting up

After an initial “git pull origin master” in the Cloud9 editor terminal, you can start to build Firefox in there. Simply running “./mach build” is enough. For me this took about 12 minutes the first time, while my laptop still needs more than 50 minutes to compile Firefox. This is definitely an improvement. Furthermore, you won’t need anything other than a browser!

I had my environment ready in about 15 minutes if you count the time to compile Firefox. Compared to my previous setups, this solves a lot of dependency-hell problems and is also way faster.

Running the newly compiled Firefox

The Janitor includes a VNC viewer that opens in a new tab, where you can run your compiled Firefox. Start a shell, run “./mach run” in the Firefox directory, and you can start testing your changes.


Running ESLint

For some of the bugs we tackled at the hackathon, we needed to run ESLint (well, it would be good to run this anyway, no matter what part of the code base you’re changing). The command looks like this:

user@e49de5f6914e:~/firefox$ ./mach eslint --no-ignore devtools/client/webconsole/test/browser_webconsole_live_filtering_of_message_types.js
0:00.40 Running /usr/local/bin/eslint
0:00.40 /usr/local/bin/eslint --plugin html --ext [.js,.jsm,.jsx,.xml,.html] --no-ignore devtools/client/webconsole/test/browser_webconsole_live_filtering_of_message_types.js

/home/user/firefox/devtools/client/webconsole/test/browser_webconsole_live_filtering_of_message_types.js
8:1   warning  Could not load globals from file browser/base/content/browser-eme.js: Error: ENOENT: no such file or directory, open '/home/user/firefox/browser/base/content/browser-eme.js'  mozilla/import-browserjs-globals
8:1   warning  Definition for rule 'mozilla/import-globals' was not found                                                                                                                     mozilla/import-globals
8:1   error    Definition for rule 'keyword-spacing' was not found                                                                                                                            keyword-spacing
18:17  error    content is a possible Cross Process Object Wrapper (CPOW)                                                                                                                      mozilla/no-cpows-in-tests

https://michaelkohler.info/2016/alpha-review-using-janitor-to-contribute-to-firefox


Michael Kohler: Firefox Hackathon Zurich April 2016

Thursday, May 5, 2016, 15:31

Last Saturday we held a Firefox Hackathon in Zurich, Switzerland. We had 12 people join us.

Introduction

At first I gave an introduction to Firefox and introduced the agenda of the hackathon.

Dev Tools Talk

After my talk we heard an amazing talk from Daniele who came from Italy to attend this hackathon. He talked about the Dev Tools and gave a nice introduction to new features!

Hackathon

Before the hackathon we created a list of “good first bugs” that we could work on. This was a great thing to do, since we could give the list to the attendees and they could pick a bug to work on. Setting up the environment to hack on was pretty easy. We used “The Janitor” to hack on Firefox; I’ll write a second blog post introducing you to this amazing tool! We ran into a few problems with it, but in the end we all could hack on Firefox!

We worked on about 13 different bugs, and we finished 10 patches! This is a great achievement; we probably couldn’t have done that if we had needed more time to set up a traditional Firefox environment. Here’s the full list:

Thanks to everybody who contributed, great work! Also a big thanks to Julian Descolette, a Dev Tools employee from Switzerland who supported us as a really good mentor. Without him we probably couldn’t have fixed some of the bugs in that time!

Feedback

At the end of the hackathon we did a round of feedback. In general the feedback was pretty positive, though we might have some things to improve for the next time.

40% of the attendees had their first interaction with our community at this hackathon! And guess what: 100% of the attendees who filled out the survey would join another hackathon in 6 months.

For the next hackathon, we might want to have a talk about the Firefox architecture in general to give some context on the different modules. Also, by the next hackathon we will probably have a fully working Janitor (no longer in alpha status), which will help even more.

Lessons learned

  • Janitor will be great for hackathons (though still Alpha, so keep an eye on it)
  • The mix of talk + then directly start hacking works out
  • The participants are happy if they can create a patch in a few minutes to learn the process (creating a patch, Bugzilla, review, etc.), and I think they are more motivated for future patches

All in all I think this was a great success. Janitor will make every contributor’s life way easier, keep it going! You can find the full album on Flickr (thanks to Daniele for the great pictures!).

https://michaelkohler.info/2016/firefox-hackathon-zurich-april-2016


The Rust Programming Language Blog: Cargo: predictable dependency management

Thursday, May 5, 2016, 03:00

Cargo’s goal is to make modern application package management a core value of the Rust programming language.

In practice, this goal translates to being able to build a new browser engine like Servo out of 247 community-driven libraries—and counting. Servo’s build system is a thin wrapper around Cargo, and after a fresh checkout, you’re only one command away from seeing the whole dependency graph built:

   Compiling num-complex v0.1.32
   Compiling bitflags v0.6.0
   Compiling angle v0.1.0 (https://github.com/emilio/angle?branch=servo#eefe3506)
   Compiling backtrace v0.2.1
   Compiling smallvec v0.1.5
   Compiling browserhtml v0.1.4 (https://github.com/browserhtml/browserhtml?branch=gh-pages#0ca50842)
   Compiling unicase v1.4.0
   Compiling fnv v1.0.2
   Compiling heapsize_plugin v0.1.4
   ...

Why do these granular dependencies matter?

Concretely, they mean that Servo’s URL library (and many components like it) is not a deeply nested part of Servo’s main tree, but rather an external library that anyone in the ecosystem can use. This makes it possible for other Rust libraries, like web frameworks, to easily use a browser-grade URL library, sharing the costs and benefits of maintenance. And it flows both ways: recently, a new Rust-based text editor was announced, and happened to provide a fast line-breaking library. Within days, that library replaced Servo’s old custom linebreaker, decreasing Servo’s maintenance burden and increasing sharing in the Rust ecosystem.

The core concerns of dependency management

To make this all work at the scale of an app like Servo, you need a dependency management approach with good answers to a number of thorny questions:

  1. How easy is it to add an external library, like a new linebreaker, to Servo?

  2. If I build Servo on a different machine, for a different architecture, in CI or for release, am I building from the same source code?

  3. If I build Servo for testing, will its indirect dependencies be compiled with debug symbols? If I build Servo for release, will its indirect dependencies be compiled with maximum optimizations? How can I be sure?

  4. If someone published a new version of one of Servo’s dependencies after I commit to Servo, will my CI environment use the same source code as my machine? My production environment?

  5. If I add a new dependency (or upgrade one), can that break the build? Can it affect unrelated dependencies? Under what conditions?

All of these questions (and many more like them) have one thing in common: predictability. One solution to this problem, common in the systems space, is vendoring dependencies—forking them directly into an application’s repository—and then managing them manually. But this comes at a substantial per-project cost, since there’s more to manage and configure. It also comes at an ecosystem-wide cost, since the work involved cannot easily be shared between libraries; it has to be redone instead for each application that brings a set of libraries together. And making sure you can answer all of the questions above, all of the time, is hard work indeed.

Package managers for higher-level languages have shown that by turning dependency management over to a shared tool, you can have predictability, easy workflows that operate over the entire dependency graph, and increased sharing and robustness across the ecosystem. When we started planning Rust 1.0, we knew we wanted to bring these ideas to a systems setting, and making Cargo a central part of the way people use Rust was a big part of that.

Pillars of Cargo

Cargo is built on three major pillars:

  1. Building, testing, and running projects should be predictable across environments and over time.

  2. To the extent possible, indirect dependencies should be invisible to application authors.

  3. Cargo should provide a shared workflow for the Rust ecosystem that aids the first two goals.

We’ll look at each of these pillars in turn.

Predictability

Cargo’s predictability goals start with a simple guarantee: once a project successfully compiles on one machine, subsequent compiles across machines and environments will use exactly the same source code.

This guarantee is accomplished without incorporating the source code for dependencies directly into a project repository. Instead, Cargo uses several strategies:

  1. The first time a build succeeds, Cargo emits a Cargo.lock file, which contains a manifest of precisely which source code was used in the build. (more on “precise” in a bit)

  2. Cargo manages the entire workflow, from running tests and benchmarks, to building release artifacts, to running executables for debugging. This allows Cargo to ensure that all dependencies (direct and indirect) are downloaded and properly configured for these use-cases without the user having to do anything extra.

  3. Cargo standardizes important environment configuration, like optimization level, static and dynamic linking, and architecture. Combined with the Cargo.lock, this makes the results of building, testing and executing Cargo projects highly predictable.

Predictability By Example

To illustrate these strategies, let’s build an example crate using Cargo. To keep things simple, we’ll create a small datetime crate that exposes date and time functionality.

First, we’ll use cargo new to start us out:

$ cargo new datetime
$ cd datetime
$ ls
Cargo.toml src
$ cat Cargo.toml
[package]
name = "datetime"
version = "0.1.0"
authors = ["Yehuda Katz <wycats@gmail.com>"]

[dependencies]

We don’t want to build the date or time functionality from scratch, so let’s edit the Cargo.toml and add the time crate from crates.io:

  [package]
  name = "datetime"
  version = "0.1.0"
  authors = ["Yehuda Katz <wycats@gmail.com>"]

  [dependencies]
+ time = "0.1.35"

Now that we’ve added the time crate, let’s see what happens if we ask Cargo to build our package:

$ cargo build
   Compiling winapi v0.2.6
   Compiling libc v0.2.10
   Compiling winapi-build v0.1.1
   Compiling kernel32-sys v0.2.2
   Compiling time v0.1.35
   Compiling datetime v0.1.0 (file:///Users/ykatz/Code/datetime)

Whoa! That’s a lot of crates. The biggest part of Cargo’s job is to provide enough predictability to allow functionality like the time crate to be broken up into smaller crates that do one thing and do it well.

Now that we successfully built our crate, what happens if we try to build it again?

$ cargo build

Nothing happened at all. Why’s that? We can always ask Cargo to give us more information through the --verbose flag, so let’s do that:

$ cargo build --verbose
       Fresh libc v0.2.10
       Fresh winapi v0.2.6
       Fresh winapi-build v0.1.1
       Fresh kernel32-sys v0.2.2
       Fresh time v0.1.35
       Fresh datetime v0.1.0 (file:///Users/ykatz/Code/datetime)

Cargo doesn’t bother recompiling packages it knows are “fresh”, much like make, but without you having to write a Makefile.

But how does Cargo know that everything is fresh? When Cargo builds a crate, it emits a file called Cargo.lock that contains the precise versions of all of its resolved dependencies:

[root]
name = "datetime"
version = "0.1.0"
dependencies = [
 "libc 0.2.10 (registry+https://github.com/rust-lang/crates.io-index)",
 "time 0.1.35 (registry+https://github.com/rust-lang/crates.io-index)",
]

[[package]]
name = "kernel32-sys"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
 "winapi 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
 "winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
]

...

The Cargo.lock contains a serialized version of the entire resolved dependency graph, including precise versions of all of the source code included in the build. In the case of a package from crates.io, Cargo stores the name and version of the dependency. This is enough information to uniquely identify source code from crates.io, because the registry is append only (no changes to already-published packages are allowed).
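That append-only property is what makes name + version a unique identifier. As a toy model (this is an illustration only, not crates.io’s actual implementation), a registry can be thought of as a map keyed by (name, version) that refuses any second publish of the same key:

```rust
use std::collections::BTreeMap;

// A toy model of an append-only registry: a (name, version) pair can be
// published exactly once and never replaced, which is why Cargo.lock can
// identify registry source code by name and version alone.
struct Registry {
    packages: BTreeMap<(String, String), Vec<u8>>, // (name, version) -> tarball bytes
}

impl Registry {
    fn new() -> Self {
        Registry { packages: BTreeMap::new() }
    }

    /// Publishing is rejected if that exact version already exists.
    fn publish(&mut self, name: &str, version: &str, tarball: Vec<u8>) -> Result<(), String> {
        let key = (name.to_string(), version.to_string());
        if self.packages.contains_key(&key) {
            return Err(format!("{} {} is already published", name, version));
        }
        self.packages.insert(key, tarball);
        Ok(())
    }
}

fn main() {
    let mut reg = Registry::new();
    // First publish of time 0.1.35 succeeds.
    assert!(reg.publish("time", "0.1.35", vec![1, 2, 3]).is_ok());
    // A second publish of the same version is refused: the registry is append-only.
    assert!(reg.publish("time", "0.1.35", vec![9, 9, 9]).is_err());
    println!("ok");
}
```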

In addition, the metadata for the registry is stored in a separate git repository and includes a checksum for each package. Before Cargo unpacks a crate it has downloaded, it first validates the checksum.

Collaborating

Now for the real test. Let’s push our code up to GitHub and develop it on a different machine. Ideally, we would like to be able to pick up right where we left off, with the exact same source code for all of our dependencies.

To do this, we check in our Cargo.lock and clone the repository on our new machine. Then, we run cargo build again.

$ cargo build
   Compiling libc v0.2.10
   Compiling winapi v0.2.6
   Compiling winapi-build v0.1.1
   Compiling kernel32-sys v0.2.2
   Compiling time v0.1.35
   Compiling datetime v0.1.0 (file:///Users/ykatz/Code/datetime)

As expected, because we checked in our Cargo.lock we get exactly the same versions of all dependencies as before. And if we wanted to start collaborating with other developers on GitHub (or with other members of our team at work), we would continue to get the same level of predictability.

Common conventions: examples, tests, and docs

Now that we’ve written our snazzy new datetime crate, we’d love to write an example to show other developers how it should be used. We create a new file called examples/date.rs that looks like this:

extern crate datetime;

fn main() {
    println!("{}", datetime::DateTime::now());
}

To run the example, we ask Cargo to build and run it:

$ cargo run --example date
   Compiling datetime v0.1.0 (file:///Users/ykatz/Code/datetime)
     Running `target/debug/examples/date`
26 Apr 2016 :: 05:03:38

Because we put our code in the conventional location for examples, Cargo knew how to do the right thing, no sweat.

In addition, once you start writing a few tests, cargo test will automatically build your examples as well, which prevents them from getting out of sync with your code, and ensures they continue to compile as long as your tests are passing.

Similarly, the cargo doc command will automatically compile not just your code, but that of your dependencies as well. The upshot is that the API docs it automatically produces include the crates you depend on, so if your APIs mention types from those crates, your clients can follow those links.

These are just a few examples of a general point: Cargo defines a common set of conventions and workflows that operate precisely the same way across the entire Rust ecosystem.

Updating

All of this means that your application won’t change if you don’t make any changes to your dependencies, but what happens when you need to change them?

Cargo adds another layer of protection with conservative updates: if you modify your Cargo.toml, Cargo attempts to minimize the changes made to the Cargo.lock. The intuition behind conservative updates is that if the change you made is unrelated to a given dependency, that dependency shouldn’t change.

Let’s say that after developing the library for a little while, we decide that we want to add support for time zones. First, let’s add in the tz dependency to our package:

  [package]
  name = "datetime"
  version = "0.1.0"
  authors = ["Yehuda Katz <wycats@gmail.com>"]

  [dependencies]
  time = "0.1.35"
+ tz = "0.2.1"

After using the crate in our library, let’s run cargo build again:

$ cargo build
    Updating registry `https://github.com/rust-lang/crates.io-index`
 Downloading tz v0.2.1
 Downloading byteorder v0.5.1
   Compiling byteorder v0.5.1
   Compiling tz v0.2.1
   Compiling datetime v0.1.0 (file:///Users/ykatz/Code/datetime)

Cargo downloaded tz (and its dependency byteorder) and compiled them, but it didn’t touch the packages we were already using (kernel32-sys, libc, time, winapi and winapi-build). Even if one of those package authors published an update in the meantime, you can be sure that adding new crates won’t mess with unrelated ones.

Conservative updates significantly reduce unexpected changes to your source code. They stand in stark contrast to the ‘rebuild the world’ approach, where a small change to the dependencies can rebuild the entire graph, wreaking havoc in its wake.

As a rule, Cargo attempts to minimize the effects of intentional changes to direct dependencies.
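The rule above can be sketched in a few lines. This is a deliberately simplified illustration, not Cargo’s actual resolver: every package that already has a locked version keeps it, and only packages that are new to the manifest are resolved against the latest published versions.

```rust
use std::collections::BTreeMap;

// A toy sketch of a "conservative update": prefer the version pinned in the
// lockfile, and consult the registry's latest versions only for dependencies
// that have no pin yet (i.e., ones just added to Cargo.toml).
fn conservative_update(
    locked: &BTreeMap<String, String>,   // name -> version pinned in Cargo.lock
    required: &[&str],                   // names required by Cargo.toml
    latest: &BTreeMap<String, String>,   // name -> newest published version
) -> BTreeMap<String, String> {
    let mut out = BTreeMap::new();
    for &name in required {
        let version = locked
            .get(name)                        // keep the existing pin if there is one
            .or_else(|| latest.get(name))     // fall back only for new dependencies
            .expect("unknown package")
            .clone();
        out.insert(name.to_string(), version);
    }
    out
}

fn main() {
    let mut locked = BTreeMap::new();
    locked.insert("time".to_string(), "0.1.35".to_string());

    let mut latest = BTreeMap::new();
    latest.insert("time".to_string(), "0.1.40".to_string()); // newer upstream release
    latest.insert("tz".to_string(), "0.2.1".to_string());

    // Adding `tz` pulls in tz 0.2.1 but leaves time pinned at 0.1.35,
    // even though 0.1.40 has been published in the meantime.
    let out = conservative_update(&locked, &["time", "tz"], &latest);
    assert_eq!(out["time"], "0.1.35");
    assert_eq!(out["tz"], "0.2.1");
    println!("ok");
}
```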

Indirect Dependencies “Just Work”

One of the most basic goals of an application package manager is separating direct dependencies, which the application requires, from indirect dependencies, which those dependencies need in order to work.

As we’ve seen in the datetime crate we built, we only needed to specify dependencies on time and tz, and Cargo automatically created the entire graph of dependencies needed to make that work. It also serialized that graph for future predictability.

Since Cargo manages your dependencies for you, it can also make sure that it compiles all of your dependencies (whether you knew about them directly or not) appropriately for the task at hand.

Testing, Benchmarking, Releasing, Oh My

Historically, people have shied away from the kinds of granular dependencies we’ve seen here because of the configuration needed for each new dependency.

For example, when running tests or type-checking your code, you would like to compile the code as fast as possible to keep the feedback loop fast. On the other hand, when benchmarking or releasing your code, you are willing to spend plenty of time waiting for the compiler to optimize your code if it produces a fast binary.

It’s important to compile not only your own code or your direct dependencies, but all indirect dependencies with the same configuration.

Cargo manages that process for you automatically. Let’s add a benchmark to our code:

#[bench]
fn bench_date(b: &mut Bencher) {
    b.iter(|| DateTime::now());
}

If we then run cargo bench:

$ cargo bench
   Compiling winapi v0.2.6
   Compiling libc v0.2.10
   Compiling byteorder v0.5.1
   Compiling winapi-build v0.1.1
   Compiling kernel32-sys v0.2.2
   Compiling tz v0.2.1
   Compiling time v0.1.35
   Compiling datetime v0.1.0 (file:///Users/ykatz/Code/datetime)
     Running target/release/datetime-2602656fcee02e68

running 1 test
test bench_date ... bench:         486 ns/iter (+/- 56)

Notice that we’re re-compiling all of our dependencies. This is because cargo bench defaults to release mode, which uses maximum optimizations. cargo build --release similarly builds in optimized mode by default.

As an aside, the default behavior of each command is configurable through profiles in the Cargo.toml. This allows you to configure things like the optimization level, whether to include debug symbols and more. Rather than forcing you to use a custom workflow if something doesn’t precisely meet your needs, the profiles feature allows you to customize the existing workflows and stay within Cargo’s flows.
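For instance, a Cargo.toml in the style of our hypothetical datetime crate could tune the built-in profiles like this (`opt-level` and `debug` are real profile keys; the values shown are just one possible choice):

```toml
# Profile overrides in Cargo.toml: customize the existing workflows
# instead of inventing a new one.

[profile.dev]        # used by `cargo build` and `cargo test`
opt-level = 0        # fastest compiles for the edit-test loop
debug = true         # include debug symbols

[profile.release]    # used by `cargo build --release` and `cargo bench`
opt-level = 3        # maximum optimizations
debug = false
```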

Platforms and Architectures

Similarly, applications are often built for different architectures, operating systems, or even operating system versions. They can be compiled for maximum portability or to make maximum use of available platform features.

Libraries can be compiled as static libraries or dynamic libraries. And even static libraries might want to do some dynamic linking (for example, against the system version of openssl).

By standardizing what it means to build and configure a package, Cargo can apply all of these configuration choices to your direct dependencies and indirect dependencies.

Shared Dependencies

So far, we’ve looked at packages and their dependencies. But what if two packages that your application depends on share a third dependency?

For example, let’s say that I decide to add the nix crate to my datetime library for Unix-specific functionality.

  [package]
  name = "datetime"
  version = "0.1.0"
  authors = ["Yehuda Katz <wycats@gmail.com>"]

  [dependencies]
  time = "0.1.35"
  tz = "0.2.1"
+ nix = "0.5.0"

As before, when I run cargo build, Cargo conservatively adds nix and its dependencies:

$ cargo build
    Updating registry `https://github.com/rust-lang/crates.io-index`
 Downloading nix v0.5.0
 Downloading bitflags v0.4.0
   Compiling bitflags v0.4.0
   Compiling nix v0.5.0
   Compiling datetime v0.1.0 (file:///Users/ykatz/Code/datetime)

But if we look a little closer, we’ll notice that nix also depends on bitflags and libc. It now shares the dependency on libc with the time package.

If my datetime crate gets libc types from time and hands them off to nix, they had better be the same libc, or my program won’t compile (and we wouldn’t want it to!).

Today, Cargo will automatically share dependencies between crates if they depend on the same major version (or minor version before 1.0), since Rust uses semantic versioning. This means that if nix and datetime both depend on some version of libc 0.2.x, they will get the same version. In this case, they do, and the program compiles.
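That compatibility rule can be written down directly. The sketch below is an illustration of the rule as described, not Cargo’s actual resolver code:

```rust
// Two semver versions (major, minor, patch) can be unified when they share a
// major version; before 1.0, the minor version plays the role of "major".
fn compatible(a: (u64, u64, u64), b: (u64, u64, u64)) -> bool {
    if a.0 == 0 && b.0 == 0 {
        a.1 == b.1 // 0.x.y: same minor version required
    } else {
        a.0 == b.0 // 1.0 and later: same major version suffices
    }
}

fn main() {
    // libc 0.2.10 and libc 0.2.4 unify; 0.2.x and 0.3.x do not.
    assert!(compatible((0, 2, 10), (0, 2, 4)));
    assert!(!compatible((0, 2, 10), (0, 3, 0)));
    // After 1.0, any two versions with the same major version unify.
    assert!(compatible((1, 4, 0), (1, 9, 2)));
    println!("ok");
}
```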

While this policy works well (and in fact is the same policy that system package managers use), it doesn’t always do exactly what people expect, especially when it comes to coordinating a major version bump across the ecosystem. (In many cases, it would be preferable for Cargo to hard-error rather than assume that a dependency on 0.2.x is simply unrelated to another dependency on 0.3.x.)

This problem is especially pernicious when multiple major versions of the same package expose global symbols (using #[no_mangle] for example, or by including other statically linked C libraries).

We have some thoughts on ways to improve Cargo to handle these cases better, including the ability for a package to more explicitly express when a dependency is used purely internally and is not shared through its public interface. Those packages could be more readily duplicated if needed, while dependencies that are used in a package’s public interface must not be.

You should expect to see more on this topic in the months ahead.

Workflow

As we’ve seen, Cargo is not just a dependency manager, but Rust’s primary workflow tool.

This allows Rust to have a rich ecosystem of interconnected dependencies, eliminating the need for applications to manually manage large dependency graphs. Applications can benefit from a vibrant ecosystem of small packages that do one thing and do it well, and let Cargo handle the heavy lifting of keeping everything up to date and compiling correctly.

Even a small crate like the datetime example we built has a few dependencies on small, targeted crates, and each of those crates has some dependencies of its own.

By defining shared, well-known workflows, like “build”, “test”, “bench”, “run”, and “doc”, Cargo provides Rust programmers with a way to think about what they’re trying to accomplish at a high level, and not have to worry about what each of those workflows mean for indirect dependencies.

This allows us to get closer to the holy grail of making those indirect dependency graphs “invisible”, empowering individuals to do more on their hobby projects, small teams to do more on their products, and large teams to have a high degree of confidence in the output of their work.

With a workflow tool that provides predictability, even in the face of many indirect dependencies, we can all build higher together.

http://blog.rust-lang.org/2016/05/05/cargo-pillars.html


Mozilla Addons Blog: How an Add-on Played Hero During an Industrial Dilemma

Thursday, May 5, 2016, 01:46

noitA few months ago Noit Saab’s boss at a nanotech firm came to him with a desperate situation. They had just discovered nearly 900 industrial containers held corrupted semiconductor wafers.

This was problematic for a number of reasons. These containers were scattered across various stages of production, and Noit had to figure out precisely where each container was in the process. Otherwise, certain departments would be wrongly penalized for this very expensive mishap.

It was as much an accounting mess as it was a product catastrophe. To top it off, Noit had three days to sort it all out. In 72 hours the fiscal quarter would end, and well, you know how finance departments and quarterly books go.

Fortunately for Noit—and probably a lot of very nervous production managers—he used a nifty little add-on called iMacros to help with all his web-based automation and sorting tasks. “Without iMacros this would have been impossible,” says Noit. “With the automation, I ran it overnight and the next morning it was all done.”

Nice work, Noit and iMacros! The day—and perhaps a few jobs—were saved.

“I use add-ons daily for everything I do,” says Noit. “I couldn’t live without them.” In addition to authoring a few add-ons himself, like NativeShot (screenshot add-on with an intriguing UI twist), MouseControl (really nice suite of mouse gestures), MailtoWebmails (tool for customizing the default actions of a “mailto:” link), and Profilist (a way to manage multiple profiles that use the same computer, though still in beta), here are some of his favorites…

“I use Telegram for all my chatting,” says Noit. “I’m not a big mobile fan so it’s great to see a desktop service for this.”

Media Keys, because “I always have music playing from a YouTube list, and sometimes I need to pause it, so rather than searching for the right tab, I use a global hotkey.”

“And of course, AdBlock Plus,” concludes Noit.

If you, dear friends, use add-ons in interesting ways and want to share your experience, please email us at editor@mozilla.com with “my story” in the subject line.

https://blog.mozilla.org/addons/2016/05/04/how-an-add-on-played-hero-during-an-industrial-dilemma/


