
Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/


You can add any RSS source (including a LiveJournal journal) to your friends feed on the syndication page.

Original source: http://planet.mozilla.org/.
This diary is generated from the public RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that feed is updated. It may not match the content of the original page. The feed mirror was created automatically at the request of readers of this RSS feed.
For any questions about the operation of this service, please use the contact information page.


Air Mozilla: Mozilla Weekly Project Meeting, 04 Jan 2016

Monday, January 4, 2016, 22:00

Mark Surman: How I want to show up this year

Monday, January 4, 2016, 21:25

As we begin 2016, I have tremendous hope. I feel a clarity and sense of purpose — both in my work and my life — that I haven’t felt for years.

Partly, this feeling flows from the plans we’ve made at Mozilla and the dreams I’ve been dreaming with those I love the most. I invested a great deal in these plans and dreams in 2015. That investment is starting to bear fruit.

This feeling also flows from a challenge I’ve given myself: to fully live my values every day. I’ve gotten rusty at this in the last few years. There are three specific things I want to do more of in 2016:

1. Be present. Listen more, say less. Be vulnerable. Create more space for others.

2. Focus on gratitude and abundance. Build ambitiously and joyfully from what I / we have.

3. Love generously. In all corners of my life. Remember to love myself.

I started to push myself in these areas late last year. As I did, things simply went better. I was happier. People around me were happier. Getting things done started to feel easier and more graceful, even as I/we worked hard and dealt with some painful topics.

Which all adds up to something obvious: how the plans and the dreams work out has a lot to do with how I (and all of us) show up every day.

So, that’s something I want to work on in 2016. I’m hoping the list above will help me stay accountable and on course. And, if you’re someone close to me, that you will, too.

It’s going to be a good year.

The post How I want to show up this year appeared first on Mark Surman.

http://marksurman.commons.ca/2016/01/04/how-i-want-to-show-up-this-year/


Frédéric Harper: Looking for a new challenge

Monday, January 4, 2016, 20:43
World War II poster with get excited and make things text

Creative Commons: https://flic.kr/p/6bLxDe

It’s the beginning of a new year, which means a blank slate to move forward, improve yourself, and enhance your life. Personally, I’m starting the year looking for a new job!

Over the last six months, I had the pleasure of working with the talented folks at IMMUNIO. However, the company is prioritizing marketing activities other than evangelism (the CEO can give you more information). It became apparent that this new direction won’t give me the opportunity to use my passion and expertise to produce the impact I would like and the results they need.

What’s next

I’ve been a Technical/Developer Evangelist/Advisor/Relations (whatever you call it) for five years now. I’ve built my experience at companies like Microsoft and Mozilla, but I’m open to discussing any other type of role, technical or not, where my experience can help the business achieve its goals. My only criterion? A role that will get me excited and where I’ll make things happen: without creativity, passion, and ways to challenge myself, it won’t be a good fit for either of us. On the compensation side, let’s be honest, I also have a lifestyle I would like to keep.

I’m fine with extensive travel and remote work, as that’s what I’ve done for the last couple of years, but because of health issues in my family, I cannot move from Montreal (Canada). Note that I don’t want to go back to being a full-time developer.

Some of my experience includes:

Feel free to read other articles on this blog and take a closer look at my primary social media accounts (Twitter, Facebook and LinkedIn). You’ll find a passionate, honest, and bold person.

Contact me

I have the firm intention to find a company where I’ll be able to grow in the next couple of years. If you think we can work together, please send me an email with some information about the role.

http://feedproxy.google.com/~r/outofcomfortzonenet/~3/xzzR7EqUdrQ/


Brian King: Firefox OS Participation in 2015

Monday, January 4, 2016, 13:36

TL;DR

Though a great deal was accomplished through the Participation Team’s collaboration with Firefox OS, a major realization was that participation can only be as successful as a team’s readiness to incorporate it into their workflow.

Problem Statement

Firefox OS was not as open a project as many others at Mozilla, and in engineering terms was not structured well enough to scale participation. Firefox OS was identified as an area where Participation could make a major impact, and a small group from the Participation Team was directed to work closely with the Firefox OS team to achieve their goals. After the 2015 shift in Firefox OS focus from partner-driven development to user- and developer-driven development, participation was identified as key to success.

The Approach

The Participation Team did an audit of the current state of participation. We interviewed cross-functional team members and did research into current contributions. Without any reliable data, our best guess was that there were fewer than 200 contributors, with technical contributors numbering far fewer than that. Launch teams around the world had contributed in non-technical capacities since 2013, and a new community was growing in Africa around the launch of the Klif device.

Based on a number of contribution areas (coding, ports, foxfooding, …), we created a matrix where we applied different weighted criteria. Criteria such as impact on organizational goals, for example, were given a greater weighting. Our intent was to create a “participation sweet spot” where we could narrow in on the areas of participation that could provide the most value to volunteers and the organization.

Participation Criteria

The result was to focus on five areas for technical contributions to Firefox OS:

  • Foxfooding
  • Porting
  • Gaia Development
  • Add-ons
  • B2GDroid

In parallel, we built out the Firefox OS Participation team, a cross-functional team drawn from engineering, project management, marketing, research, and other areas. We had a cadence of weekly meetings, and also communicated on IRC and an email list. A crucial one was a planning meeting in Paris in September 2015, where we finalized the details of the plan we would start executing on.

Team Meeting

The Firefox OS Participation Hub

As planning progressed, it became clear that a focal point for all activities would be the Firefox OS Participation Hub, a website for technical contributors to gather. The purpose was two-fold:

  1. To showcase and track technical contributions to Firefox OS. We would list ways to get involved, in multiple areas, but focusing on the “sweet spot” areas we had identified. The idea was not to duplicate existing content, but to gather it all in one place.
  2. To facilitate foxfooding, a place where users could go to get builds for their device and register their device. The more people using and testing Firefox OS, the better the product could become.
Firefox OS Participation Hub

The Hub came to life in early November.

Challenges

There were a number of challenges that we faced along the way.

  • Participation requires a large cultural shift, and cannot happen overnight. Mozilla is an open and open source organization, but because of market pressures and other factors Firefox OS was not a participatory project.
  • One lesson was that we should have embedded the program deeper in the Firefox OS organization earlier and done more awareness building. This would have mitigated an issue where, beyond the core Firefox OS Participation team, it was hard to get the attention of other Firefox OS team members.
  • Timing is always hard to get right, because there will always be individual and team priorities and deliverables, not directly related, that take precedence. During the program rollout we were made acutely aware of this in the run-up to the release of Firefox OS 2.5.

For participation to really take hold and succeed, broader organizational and structural changes are needed. Making participation part of deliverables will go some way to achieving this.

Achievements

Despite the challenges mentioned above, the Firefox OS Participation team managed to accomplish a great deal in less than six months:

  • we brought all key participation stakeholders on the Firefox OS team together and drafted a unified vision and strategy for participation
  • we launched the Firefox OS Participation Hub, the first of its kind, a place to learn about contribution opportunities and download the latest Firefox OS builds
  • we formed a “Porting Working Group” focused on porting Firefox OS to key identified devices and enabling volunteers to port to more devices
  • we distributed more than 500 foxfooding devices in more than 40 countries
  • the “Mika Project” explored a new way to engage with, recognize and retain foxfooders through gamification – the “Save Mika!” challenge will be rolling out in Q1 2016

Moving Forward

Indeed, participation will be a major focus for Connected Devices in 2016. Participation can and will have a big impact on helping teams solve problems, do work in areas where they need help, and achieve new levels of success. To be sure, before a major participation initiative can be successful, the team needs to be set up for success, with four major components:

  1. Managerial buy-in. “It’s important to have permission to have participation in your work”. Going one step further, it needs to be part of day-to-day deliverables.
  2. Open tooling. Are the tools open? Do you have a process for bug triaging and mentorship? Is there good and easily discoverable documentation?
  3. Onboarding. Is there an onboarding system in place for new team members (staff and volunteers)?
  4. Measurement. How can we systematically and reliably measure contributions, over time, to detect trends to act upon to maintain a healthy project? Some measurements are in place, but we need more.

We put Participation on the map for Firefox OS and as announced at Mozlando, the Connected Devices team will be doubling down its efforts to design for participation and work in the open.

We are excited about the challenges and opportunities ahead of us, and we can’t wait to share regular updates on our progress with everyone. So as we enter the new year, please make sure to follow us on Discourse, Twitter, and Facebook for the latest news and updates.

2016, here we come!

http://brian.kingsonline.net/talk/2016/01/firefox-os-participation-in-2015/


Robert O'Callahan: innerText: Cleaning A Dark Corner Of the Web

Monday, January 4, 2016, 13:15

One of the things I did last year was implement innerText in Gecko and write a specification for it. This is more or less a textbook case of cleaning a dark corner of the Web.

innerText was implemented in IE 5.5 (or maybe earlier?), apparently with the goal of approximating the getting and setting of the "rendered text contents" of a DOM node. Microsoft being Microsoft at the time, there was no attempt to add it to any Web specification. Some time later, Webkit implemented their own version of it --- a very different and quite incompatible implementation. Naturally, Blink inherited the Webkit implementation. Over the years implementations evolved independently.

In Gecko we didn't bother to implement it. It was seldom used, and you can quite easily polyfill a decent implementation, which has the advantage of working the same across all browsers. (Many users can just use textContent instead.) It's a feature the Web just doesn't need, even if it worked interoperably and had a spec.

Sadly, this year it became clear we have to implement it. People have been using it in mobile sites (Webkit monoculture) and it even started showing up on the odd desktop site where people didn't test with Firefox. I ran into this on xkcd.com and then it was clear I had to fix it!

I could have done what Webkit/Blink (and I suspect IE) did and hooked it up to our existing plaintext serializer (which implements clipboard plaintext copy), adding a third totally incompatible implementation ... but that's not Mozilla. So I wrote a bunch of testcases (partly inspired by kangax's) and studied some other resources, and created a spec that felt to me like a good tradeoff between simplicity and Web compatibility --- and implemented it in Gecko. As kangax notes, it's highly Blink-compatible but avoids some obvious Blink bugs.

My key insight while writing the getter's spec was that we should reuse the CSS spec by reference as much as possible instead of specifying rules that would duplicate CSS logic. For example, my spec (and implementation) don't mention CSS white-space property values explicitly, but instead incorporates CSS white-space processing by reference, which means all the values (and any new ones) just work.

So far so good --- our implementation is riding the trains in Firefox 45 --- but this is not the end of the story. The next step is for other browsers to change their behavior to converge on this spec --- or if they can't, to explain why so we can fix the spec so they can. To be honest, I'm not very optimistic about that since I've received minimal feedback from Apple/Google/Microsoft so far (despite begging). But we've done what we can at Mozilla to fix this wart on the Web, and we've probably done enough to fix innerText-specific Web compat problems in Firefox for the foreseeable future.

http://robert.ocallahan.org/2016/01/innertext.html


Karl Dubost: Webkit! RESOLVED FIXED

Monday, January 4, 2016, 11:00

I mentioned it in my worklog last week. On 2016-01-02 09:39:38 Japan Standard Time, Mozilla closed a very important issue enabling a new feature in Firefox: Bug 1213126 - Enable layout.css.prefixes.webkit by default. Thanks to many people for this useful work. First, because an image is sometimes worth 1000 words, let's look at an example. The following image is the rendering in Gecko (Firefox Nightly 46.0a1 (2016-01-03) on Android) of the mobile site mobage by DeNA.

  • On the left side: layout.css.prefixes.webkit = true (now the default in Firefox Nightly)
  • On the right side: layout.css.prefixes.webkit = false (still the case as of now in Firefox Developer Edition)

Mobage screenshot with webkit activated

Below, I will explain the origin, the how and the why.

We have been dealing with Web Compatibility issues on mobile devices for quite a long time. The current Mozilla Web Compatibility team is made up of people who came from Opera Software, where we were working on very similar issues. Microsoft, Mozilla, and Opera have all had a hard time existing on the mobile Web because of Web sites developed with WebKit prefixes only.

Old Folk Tales from East Asia

In March 2014, Hallvord and I went to the Mozilla office in Beijing to work with the team on improving Web Compatibility in China. Many bug reports had been opened about mobile sites failing in China in Firefox OS and Firefox for Android. Sometimes we had to lie about the User-Agent string on the client side; most of the time, it was not enough. Firefox on smartphones (Android, Firefox OS) was still receiving broken sites made for WebKit only (Chrome, Safari). The Mozilla Beijing team was spending a lot of time retrofitting Firefox for Android into a product compatible with the Chinese Web. It was unfortunate: each release required a long and strenuous effort, work that could also have benefited other markets with similar issues.

In December 2014, during the Mozilla work week in Portland (Oregon, USA), we (Mozilla core platform engineers, the Web Compatibility team, and some members of the Beijing team) had a meeting to discuss the types of issues users had to deal with in their browser. By the end of the meeting, we decided to start identifying the pain points (the most common types of issues) and how we could try to fix them. Hallvord started to work on a service for rewriting CSS WebKit prefixes on the fly in the browser. Later on, this led to the creation of CSS Fix me (just released in December 2015).

We also started to survey the Japanese Mobile Web. It gave us another data point about the broken Web. In the top 100 Japanese sites (then top 1000 sites in Japan), we identified that 20% of them had rendering issues due to non-standard coding such as WebKit prefixes and other delicacies related to DOM APIs.

Fixing the Mobile Web

In February 2015, we had a Web Compatibility summit in Mountain View (California, USA). We shared our pain points with Microsoft, Google, and others. Apple didn't show up.

Through surveys and analysis, the Web Compatibility team assessed a couple of priorities, the ones hurting usability the most. Quickly, we noticed that

  • flexbox,
  • gradients,
  • transitions,
  • transforms,
  • background-position

were the major issues for CSS.

  • window.orientation
  • WebKitCSSMatrix

for DOM APIs.

Microsoft shared with us what they had to implement to make Edge compatible with the Web.

In June 2015, during the Mozilla work week in Whistler (British Columbia, Canada), we decided to move forward and go further than the unprefixing service alone. Some core platform engineers spent time implementing natively what was needed to get a performant browser compatible with the Web. This includes the excellent work of Daniel Holbert (mainly flexbox) and Robert O'Callahan (innerText). I should probably list exactly who did what in detail (another day).

Leading the effort for a while, Mike Taylor opened a couple of very important issues on Mozilla's Bugzilla, referenced at Bug 1170774 - (meta) Implement some non-standard APIs for web compatibility and the associated wiki. He also started the Compatibility transitory specification. The role of this specification is not to stand on its own, but to provide support for other specifications to cherry-pick what's necessary to be compatible with the current Web.

There is still a lot of work to be done, but the closing of Bug 1213126 - Enable layout.css.prefixes.webkit by default is a very important step. Thanks Daniel Holbert for this Christmas gift. This is NOT a full implementation of all WebKit prefixes, just the ones which are currently breaking the Web.

Why Do We Do That?

The usual and valid questions emerge:

  • Don't we reward lazy developers?
  • Does this destroy the standard process?
  • Do we further entrench the (current) dominant position of WebKit?
  • Do we risk following the same path as Opera, abandoning Presto for Blink?
  • Do we bloat the rendering engine code with redundancies?

All these questions have very rational answers, righteous ones which I deeply understand at a personal level. Web developers have bosses and their own constraints. I have seen, many times, people very much oriented toward Web standards who still had to make choices because they were using a framework or a library that was not fully compliant or had its own bugs.

Then there is the daily reality of the users. Browsers are extremely confusing for people. They don't know how they work, and they don't know (nor is it clear there is any good reason they should all know) why a rendering engine fails to render a site that is not properly coded.

In the end, the users deserve to have the right experience. The same way we recover from Web sites with broken HTML, we need to make the effort to help users have a usable Web experience.

The battle is not over. Web sites still need to be fixed.

PS: Typo fixes and grammar tweaks are welcome.

Otsukare!

http://www.otsukare.info/2016/01/04/webkit-resolved-fixed


This Week In Rust: This Week in Rust 112

Monday, January 4, 2016, 08:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

  • rusty-machine. Machine Learning library for Rust.
  • Shiplift. Rust interface for maneuvering docker containers.
  • Hyperlocal. Hyper bindings for local unix domain sockets.

Updates from Rust Core

63 pull requests were merged in the last week.

Notable changes

New Contributors

  • Aaron Keen
  • Chris Buchholz
  • Christoffer Buchholz
  • Daniel Collin
  • defyrlt
  • Denis Kolodin
  • est31
  • James Mantooth
  • Luke Jones
  • Natha

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week!

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is rustfmt, because it's nice to Rust with style.

Submit your suggestions for next week!

Quote of the Week

I think the "standard library" is really (forgive me) a brand.

jimb on rust-internals

Thanks to llogiq for the belated suggestion.

Submit your quotes for next week!

http://this-week-in-rust.org/blog/2016/01/04/this-week-in-rust-112/


Nikki Bee: Okay, But What Does Your Work Actually Mean, Nikki? Part 1: Starting On Servo

Monday, January 4, 2016, 04:24

In my first post, I tried talking about what I’m working on, but ended up talking about everything that led to my internship, starting about a year ago! That’s pretty cool, especially since it was the only way I could think of talking about it, but it was kind of misleading. I had a couple short paragraphs about Servo and Fetch, respectively, but I didn’t feel up to expanding on either since my post was already so long. So, I’m going to be trying that now! Today I’ll start writing about what I’ve done for Servo.

Servo At A Broad Level

There’s so much to Servo, I don’t assume I could adequately talk about it at a broad level beyond quoting what I said two weeks ago: “Servo is a web rendering engine, which means it handles loading web pages and other specifications for using the web. To contrast Servo to a full-fledged browser: a browser could use Servo for its web rendering, and then put a user interface on top of it, things like bookmarks, tabs, a url bar.”

What I do feel more confident talking about is the parts of Servo I’ve worked with. That’s in three areas: my first contribution to Servo as part of my Outreachy application (and its follow-up), working on the Fetch implementation in Servo, and changes I’ve made to parts of Servo that Fetch relies on. Today I’m going to talk about the first part, and the other two together in the next post.

My First Contribution

Like I briefly mentioned in my first post, I first got acquainted with Servo by adding more functionality for existing code dealing with sending data over a WebSocket, which is a well defined web communication protocol. My task was to understand a small part of the specification (“If the argument is a Blob object”), and the existing implementation for “If the argument is a string”. A Blob can represent things like images or audio, but the definition is pretty open-ended, although most of what I had to do with it was pass it on to other functions that would handle it for me.

As with everything I’ve done in Servo, what first seemed like a simple goal - make a new function similar to the existing code that just handles a different kind of data - ended up becoming pretty involved, and needing changes in other areas that I didn’t have the time to fully comprehend. A big part of this is that working on Servo is my first time using the programming language Rust, so I was learning both Rust and the WebSocket specification at the same time! Thankfully, I had a lot of help from other members of the Servo team on anything I didn’t understand, which was a lot.

Outside of slowly learning how to work with Rust, I spent most of my time on this talking about exactly how I should be implementing anything. I like to make sure I do the right thing, and the best way for me to do that is to understand the situation as well as I can, then present that and any questions I have to other people.

The Buffer Amount Dilemma

One instance of this was that the specification for WebSocket didn’t say what to do if the size of a Blob being sent is larger than the maximum size a WebSocket buffer (where data is temporarily stored while being passed on) can hold. A Blob can hold up to 2^64 bits of data, whereas the WebSocket buffer could only hold 2^32 bits of data. The “obvious” solution would be to handle it like I would if the buffer overflowed by being fed many Blobs smaller than 2^32 bits — but that seemed to me a different situation, and so I asked around for advice on what to do.
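To make those two limits concrete (my own back-of-the-envelope arithmetic, not something from the original post), here is a quick Python calculation of what they amount to in bytes:

# Quick sanity check of the two limits mentioned above (my own arithmetic,
# not part of the original post): 2^32 bits vs. 2^64 bits.

def human_readable(num_bits):
    """Convert a size in bits to a rough human-readable byte count."""
    num_bytes = num_bits // 8
    for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"):
        if num_bytes < 1024:
            return "{} {}".format(num_bytes, unit)
        num_bytes //= 1024
    return "{} ZiB".format(num_bytes)

print(human_readable(2 ** 32))  # old WebSocket buffer cap: 512 MiB
print(human_readable(2 ** 64))  # Blob maximum: 2 EiB

In other words, the old buffer cap would truncate anything over half a gigabyte, while a 2^64-bit limit is effectively unreachable for real files.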

The consensus was that for now I should 1) truncate the data to 2^32 bits and raise an error in response to the buffer overflow, and 2) open an issue at WHATWG (the authority behind the current HTML Standard) about the seeming gap in the specification, to find out what part of the specification, if any, needs updating.

And so, in approaching everything I was unsure about in such a way, I slowly made progress on ironing out a decent function that covers previous and new needs, follows the specification as much as possible, and has reasonable solutions for areas that are either not covered by the specification or are simply unique challenges of programming in Rust.

Return Of The Buffers

Since all of this was meant to be part of my application to Outreachy, after it was accepted for Servo, I stopped thinking about it while waiting to hear back on my acceptance status. Between then and my being chosen as the intern for Servo, that issue I had opened at WHATWG had been discussed and resolved, with the decision being to let the WebSocket buffer hold up to 2^64 bits of data, meaning that there would be no need to intentionally lose any amount of data sent at one time, and preventing (incredibly) large files from instantly raising errors.

This change also meant that the code I had written earlier would need to be updated, since it was now incorrect! It made the most sense for me to do that, since it was my code, and my question that had changed the specification. I was grateful to be able to rectify a situation that had before seemed to have no good answer, especially since it also made my code simpler.

So, starting and ending over a period of a couple months, that’s the story of the first work I ever did on Servo!

End of second post.

http://nikkisquared.github.io/2016/01/04/what-does-your-work-mean-part-1.html


David Ascher: Appreciation for the builders of the web

Monday, January 4, 2016, 01:14
Happy Willy Wednesday! by Valerie

I just spent a couple of days playing around with ideas, words to describe those ideas, and code to make those ideas come to life. It had been quite a few months since I’d spent the time to sharpen my tools, and while the challenges to the web are greater than ever, it’s also a more impressive toolkit than ever.

I’d like to call out a few heroes whose generous work as open source and easily accessible services have made the last couple of days particularly fun:

  • The folks at Heroku
  • The amazing perseverance of @automattic and the WordPress team, whose recent moves to both redo the front-end and open-source the .com site are one of a long string of bold and ambitious moves.
  • The energy and diligence of the Ghost team
  • The JS module hackers who make github and npm useful as opposed to just necessary
  • Brock and the rest of the Surge.sh team for a product that “just works”
  • The Firebase team, for not shutting down one of the tools I think has managed to bridge the ergonomics of a Heroku with the developer power that AWS-style services provide
  • AWS, for paving the way (and because even S3 is still remarkable)
  • The React team at Facebook & beyond, who are boldly moving the client side forward
  • The Material Design team at Google, for taking design seriously and then implementing it in an accessible way.
  • The LetsEncrypt team, one of the projects I’m proud to say Mozilla had a big hand in.

Thank you all.

http://ascher.ca/2016/01/03/appreciation-for-the-builders-of-the-web/


Emma Irwin: My 2016 #FOSS Project — Secret Shopper

Sunday, January 3, 2016, 08:52

In the first 6 months of 2016, I have a personal goal of contributing code to 10 Open Source Projects. I’m looking for projects with potential to teach us how to design for the success of contributors, like myself, who bring intermediate to advanced skill-sets and experience.  

I’ll be contributing code under another name, ala Secret Shopper, and sharing what I learn in a July post.

I am looking for nominations for Open Source projects that:

  • Are available on the web using web technologies.
  • Have a released version of their software
  • Have an existing community, or set of core contributors.
  • Want to improve their contributor design for intermediate and experienced developers.
  • Feel they have something they can teach others about community design.
  • Are invested in increasing the diversity of their contributor base.
  • Bonus: ‘Open’ Education focus.

What projects will get:

  • One or more code contributions from me –  I can’t promise they’ll be valuable, but I’ll do my best;)
  • Review of the experience (which is the actual contribution)
  • Inclusion in a blog post with my findings (in July)

What I hope for myself:

  • Emerge with a set of best practices for projects who want to help intermediate and experienced developers successfully join their project as contributors.
  • A User story of an experienced developer attempting to join a new community, through a first contribution.
  • New and fresh understanding for the experience of stepping into a project as a new code contributor.

If you would like to nominate a project – please do so in the comments here or on Medium.

Excited to play in FOSS communities this year!


We Can Code It — CC BY-NC-ND 2.0 by Charis Tsevis


http://tiptoes.ca/my-2016-foss-project%e2%80%8a-%e2%80%8asecret-shopper/


Daniel Pocock: The great life of Ian Murdock and police brutality in context

Saturday, January 2, 2016, 23:45

Tributes:

Over the last week, people have been saying a lot about the wonderful life of Ian Murdock and his contributions to Debian and the world of free software. According to one news site, a San Francisco police officer, Grace Gatpandan, has been doing the opposite, starting a PR spin operation, leaking snippets of information about what may have happened during Ian's final 24 hours. Sadly, these things are now starting to be regurgitated without proper scrutiny by the mainstream press (note the erroneous reference to SFGate with link to SFBay.ca, this is mainstream media at its best).

The report talks about somebody "trying to break into a residence". Let's translate that from the spin-doctor-speak back to English: it is the silly season, when many people have a couple of extra drinks and do silly things like losing their keys. "a residence", or just his own home perhaps? Doesn't the choice of words make the motive sound so much more sinister? Nobody knows the full story, so snippets of information like this are not helpful.

Did they really mean to leave people with the impression that one of the greatest visionaries of Silicon Valley was also a cat burglar? That somebody who spent his life giving selflessly and generously for the benefit of the whole world (his legacy is far greater than Steve Jobs, as Debian comes with no strings attached) spends the Christmas weekend taking things from other people's houses in the dark of the night?

If having a few drinks and losing your keys in December is such a sorry state to be in, many of us could potentially be framed in the same terms at some point in our lives. That is one of the reasons I feel so compelled to write this: it is not just Ian who has suffered an injustice here, somebody else could be going through exactly the same experience at the moment you are reading this. Any of us could end up facing an assault as brutal as the tweets imply at some point in the future. At least I can console myself that as a privileged white male, the risk to myself is much lower than for those with mental illness, the homeless, transgender, Muslim or black people but as Ian appears to have discovered, that risk is still very real.

The story reports that officers made a decision to detain Ian on the grounds that he "matched the description of the person trying to break in". This also seems odd. If he had weapons or drugs or he was known to police that would have almost certainly been emphasized. Is it right to rush in and deprive somebody of their liberties without first giving them an opportunity to identify themselves and possibly confirm if they had a reason to be there?

The report goes on, "he was belligerent", "he became violent", "banging his head" all by himself. How often do you see intelligent and successful people like Ian Murdock spontaneously harming themselves in that way? How often do you see reports that somebody "banged their head", all by themselves of course, during some encounter with law enforcement? Does Ms Gatpandan really expect us to believe it is merely coincidence? Do the police categorically deny they ever gave a suspect a shove in the back, or tripped a suspect's legs such that he fell over or just made a mistake?

If any person was genuinely trying to spontaneously inflict a head injury on himself, as the police have suggested, why wouldn't the police leave them in the hospital or other suitable care? Do they really think that when people are displaying signs of such distress, rounding them up and taking them to jail will be in their best interests?

Now, I'm not suggesting that there was a pre-meditated conspiracy to harm Ian personally. Police may have been at the end of a long shift (and it is a disgrace that many US police are not paid for their overtime) or just had a rough experience with somebody far more sinister. On the other hand, there may have been a mistake, gaps in police training or an inappropriate use of a procedure that is not always justified, like a strip search, that causes profound suffering for many victims.

A select number of US police forces have been shamed around the world for a series of incidents of extreme violence in recent times, including the death of Michael Brown in Ferguson, shooting Walter Scott in the back, death of Freddie Gray in Baltimore and the attempts of Chicago's police to run an on-shore version of Guantanamo Bay. Beyond those highly violent incidents, the world has also seen the abuse of Ahmed Mohamed, the Muslim schoolboy arrested for his interest in electronics and in 2013, the suicide of Aaron Swartz which appears to be a direct consequence of the "Justice" department's obsession with him.

What have the police learned from all this bad publicity? Are they changing their methods, or just hiring more spin doctors? If that is their response, then doesn't it leave them with a big advantage over somebody like Ian who is now deceased?

Isn't it standard practice for some police to simply round up anybody who is a bit lost and write up a charge sheet for resisting arrest or assaulting an officer as insurance against questions about their own excessive use of force?

When British police executed Jean Charles de Menezes on a crowded tube train and realized they had just done something incredibly outrageous, their PR office went to great lengths to try and protect their image, even photoshopping images of Menezes to make him look more like some other suspect in a wanted poster. To this day, they continue to refer to Menezes as a victim of the terrorists, could they be any more arrogant? While nobody believes the police woke up that morning thinking "let's kill some random guy on the tube", it is clear they made a mistake and like many people (not just police), they immediately prioritized protecting their reputation over protecting the truth.

Nobody else knows exactly what Ian was doing and exactly what the police did to him. We may never know. However, any disparaging comments from the police should be viewed with some caution.

The horrors of incarceration

It would be hard for any of us to understand everything that somebody goes through when detained by the police. The recently released movie about The Stanford Prison Experiment may be an interesting place to start, a German version produced in 2001, Das Experiment, may be even better.

The United States has the largest prison population in the world and the second-highest per-capita incarceration rate.

Worldwide, there is an increasing trend to make incarceration as degrading as possible. People may be innocent until proven guilty, but this hasn't stopped police in the UK from locking up and strip-searching over 4,500 children in a five-year period. Would these children go away feeling any different than if they had had an encounter with Jimmy Saville or Rolf Harris? One can only wonder what they do to adults.

What all this boils down to is that people shouldn't really be incarcerated unless it is clear the danger they pose to society is greater than the danger they may face in a prison.

What can people do for Ian and for justice?

Now that the spin doctors have started trying to do a job on him, it would be great to try and fill the Internet with stories of the great things Ian has done for the world. Write whatever you feel about Ian's work and your own experience of Debian.

While the circumstances of the final tweets from his Twitter account are confusing, the tweets appear to be consistent with many other complaints about US law enforcement. Are there positive things that people can do in their community to help reduce the harm?

Sending books to prisoners (the UK tried to ban this) can make a difference. Treat them like humans, even if the system doesn't.

Recording incidents of police activities can also make a huge difference, such as the video of the shooting of Walter Scott or the UK police making a brutal unprovoked attack on a newspaper vendor. Don't just walk past a violent situation and assume the police are the good guys. People making recordings may find themselves in danger; it is recommended to use software that automatically duplicates each recording, preferably to the cloud, so that if the police ask you to delete a recording (why would they?), you can let them watch you delete it and still have a copy.

Can anybody think of awards that Ian Murdock should be nominated for, either in free software, computing or engineering in general? Some, like the prestigious Queen Elizabeth Prize for Engineering can't be awarded posthumously but others may be within reach. Come and share your ideas on the debian-project mailing list, there are already some here.

Best of all, Ian didn't just build software, he built an organization, Debian. Debian's principles have helped to unite many people from otherwise different backgrounds and carry on those principles even when Ian is no longer among us. Find out more, install it on your computer or even look for ways to participate in the project.

http://danielpocock.com/ian-murdock-police-brutality


Karl Dubost: [worklog] webkit fixed! and short week.

Saturday, January 2, 2016, 04:05

New year resolutions do not seem very agile to me. I usually prefer setting short goals and trying to tackle them. Rinse and repeat. On the other hand, this year 2016 will have a life-changing event, so let's see how it goes.

Webcompat bugs

  • A form on the WestJet Web site is not working properly. My tests on desktop didn't reproduce the bug, so I used remote debugging over Wi-Fi.
  • General activity on bugs
  • This bug, involving issues with requestAnimationFrame, is quite interesting. I haven't figured out what is really happening yet. It gave me the opportunity to use the performance tool a bit more. I don't think I understand everything, but it will come one step at a time.
  • Bug 1213126 - Enable layout.css.prefixes.webkit by default. This will deserve an article by itself. We will see later, but the important news is: RESOLVED. FIXED. Respect to everyone who participated including Daniel Holbert, Mike Taylor, etc. This is very important. I'll explain why.

Webcompat dev

CSS Webkit Issues

Added this agenda item to the meeting on Tuesday: once layout.css.prefixes.webkit is switched from false to true, what do we do with all the WebKit Firefox layout bugs? We might want to get them switched to worksforme, the same way that invalid HTML works in the browser because we cope with it. Maybe a discussion to start on the webcompat mailing list.

Misc.

  • I'm using the Profiles manager to start my browser with different contexts. This is very useful for Web Compatibility, aka testing with a completely clean profile. The new about:profiles in Nightly is a good idea, but the UX is not yet on par with what we currently have. So I filed a bug for it: Bug 1235284 - Profiles manager with disabled automatic startup setup
  • Wondering which warning message is more understandable: Opera's or Firefox's.
  • SHA-1 certificates being blocked in one year.

    "In line with Microsoft Edge and Mozilla Firefox, the target date for this step is January 1, 2017, but we are considering moving it earlier to July 1, 2016 in light of ongoing research. We therefore urge sites to replace any remaining SHA-1 certificates as soon as possible."

  • Thought of the day (Tuesday): I'm learning a lot more about code and good practices when I review other people's code than when I write code directly myself.
  • Firefox Developer Tools Revisited
  • This week, I also looked at a way to record expenses on the command line. Low tech, simple and resilient. I have seen two candidates: GNU pem and a Ledger clone. Both are missing the location or an id that I can tie to a location. Maybe the format should be: YYYY-MM-DD HH:MM amount id_shop category [currency]
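To illustrate how easy such a line format would be to consume, here is a minimal Python sketch that parses one record in that hypothetical layout; the field names and the default currency are my own assumptions for the example, not something GNU pem or Ledger provides:

# Sketch: parse one expense line in the hypothetical format
#   YYYY-MM-DD HH:MM amount id_shop category [currency]
# Field names and the JPY default are illustration-only assumptions.
from datetime import datetime

def parse_expense(line, default_currency="JPY"):
    parts = line.split()
    when = datetime.strptime(parts[0] + " " + parts[1], "%Y-%m-%d %H:%M")
    amount = float(parts[2])
    shop_id, category = parts[3], parts[4]
    currency = parts[5] if len(parts) > 5 else default_currency
    return {"when": when, "amount": amount, "shop": shop_id,
            "category": category, "currency": currency}

print(parse_expense("2016-01-02 18:30 1250 shop42 groceries"))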

Otsukare!

http://www.otsukare.info/2016/01/02/worklog-new-year


Frédéric Harper: My 3 words for 2016

Friday, January 1, 2016, 23:29
A woman saying fuck everything

Creative Commons: https://flic.kr/p/drXpxr

I was impatiently waiting for 2016: I know it’s psychological, but the first of January is like a new chapter, a blank slate to start over, to do better. I don’t believe in New Year resolutions, as they usually focus on one particular thing only, put unnecessary pressure on ourselves, and are just a downer when you look back on the last year and realize you didn’t make it happen. That’s why, since 2013, I have been carefully choosing three words to guide me throughout the year: they are guidelines, not the ultimate answer.

Myself

My first word is a follow-up to the last couple of months of my life, as I’m clearly not done: thinking about myself. It may sound selfish, but my top priority this year will be me, myself, and I. Every aspect of who I am: I need to rediscover myself. I need to take care of myself, be happy, and do the things I like, alone, but also with the people I love.

Why

This year my second word will make me act like a three-year-old with myself: why will be a word I use all the time. I realized last year that life is too valuable and time is a scarce resource. I don’t want to dedicate a fraction of my life, even a minimal one, to something that isn’t important to me or doesn’t align with my priorities. Before anything, I’ll ask myself: why should I do this?

Truth

I want this year to bring a no-bullshit policy to my life: the truth must prevail, even if it’s not pleasant. It’s not that I was lying or that people weren’t honest with me. I think that within me, a stronger voice was telling me another truth, or that I didn’t want to see reality. I guess this word goes hand in hand with the first one.

 

I’m looking forward to this new year, as I know it can’t be worse than the last one, at least for me, and I have this overwhelming impression that it will be one of the most incredible of my entire life. Who knows? I can’t finish this article without wishing you a Happy New Year: I hope 2016 will be a reflection of your dreams. By putting myself first, asking myself why I’m doing something, and always being honest with myself, I know mine will be awesome…

http://feedproxy.google.com/~r/outofcomfortzonenet/~3/d9wHf0eFpCA/


Air Mozilla: Webdev Extravaganza: January 2016

Friday, January 1, 2016, 21:00

Webdev Extravaganza: January 2016. Once a month, web developers from across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

https://air.mozilla.org/webdev-extravaganza-january-2016-20160101/


Mozilla Addons Blog: January 2016 Featured Add-ons

Friday, January 1, 2016, 14:55

Pick of the Month: uMatrix

by Raymond Hill
uMatrix is a point-and-click matrix-based firewall, putting you in full control of where your browser is allowed to connect, what type of data it is allowed to download, and what it is allowed to execute.

“It may be the perfect advanced users control extension.”

Featured: HTTPS Everywhere

by EFF Technologists
HTTPS Everywhere protects your communications by enabling HTTPS encryption automatically on sites that are known to support it, even when you type URLs or follow links that omit the “https:” prefix.

Featured: Add to Search Bar

by Dr. Evil
Make any page’s search functionality available in the Search Bar (or “search box”).

Featured: Duplicate Tabs Closer

by Peuj
Detects and automatically closes duplicated tabs.

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

https://blog.mozilla.org/addons/2016/01/01/january-2016-featured-add-ons/


Nick Alexander: Firefox "artifact builds" for Mac OS X

Friday, January 1, 2016, 03:00

I’m thrilled to announce support for Mac OS X artifact builds. Artifact builds trade expensive compile times for (variable) download times and some restrictions on what parts of the Firefox codebase can be modified. For Mac OS X, the downloaded binaries are about 100Mb, which might take just a minute to fetch. The hard restriction is that only the non-compiled parts of the browser can be developed, which means that artifact builds are really only useful for front-end developers. The Firefox for Android front-end team has been using artifact builds with great success for almost a year (see Build Fennec frontend fast with mach artifact! and my other posts on this blog).

I intend to update the MDN documentation and the build bootstrapper (see Bug 1221200) as soon as I can, but in the meantime, here’s a quick start guide.

Quick start

You’ll need to have run mach mercurial-setup and installed the mozext extension (see Bug 1234912). In your mozconfig file, add the lines

ac_add_options --enable-artifact-builds
mk_add_options MOZ_OBJDIR=./objdir-artifact

You’ll want to run mach configure again to make sure the change is recognized. This sets --disable-compile-environment and opts you in to running mach artifact install automatically.

After this, you should find that mach build downloads and installs the required artifact binaries automatically, based off your current Mercurial commit. To test, just try

./mach build && ./mach run

After the initial build, incremental mach build DIR should also maintain the state of the artifact binaries — even across hg commit and hg pull && hg update.

You should find that mach build faster works as expected, and that the occasional mach build browser/app/repackage is required.

Restrictions

Oh, so many. Here are some of the major ones:

  • Right now, artifact builds are only available to developers working on Mac OS X Desktop builds (Bug 1207890) and Firefox for Android builds. I expect Linux support to follow shortly (tracked in Bug 1236110). Windows support is urgently needed but I don’t yet know how much work it will be (tracked in Bug 1236111).
  • Right now, artifact builds are only available to Mercurial users. There’s no hard technical reason they can’t be made available to git users, and I expect it to happen eventually, but it’s non-trivial and really needs a dedicated git-using engineer to scratch her own itch. This is tracked by Bug 1234913.
  • Artifact builds don’t allow developing the C++ source code. As soon as you need to change a compiled component, you’ll need a regular build. Unfortunately, things like Telemetry are compiled (but see tickets like Bug 1206117).
  • Artifact builds are somewhat heuristic, in that the downloaded binary artifacts may not correspond to your actual source tree perfectly. That is, we’re not hashing the inputs and mapping to a known binary: we’re choosing binaries from likely candidates based on your version control status and pushes to Mozilla automation. Binary mismatches for Fennec builds are rare (but do exist, see, for example, Bug 1222636), but I expect them to be much more common for Desktop builds. Determining if an error is due to an artifact build is a black art. We’ll all have to learn what the symptoms look like (often, binary component UUID mismatches) and how to minimize them.
  • Support for running tests is limited. I don’t work on Desktop builds myself, so I haven’t really explored this. I expect a little work will be needed to get xpcshell tests running, since we’ll need to arrange for a downloaded xpcshell binary to get to the right place at the right time. Please file a bug if some test suite doesn’t work so that we can investigate.

Troubleshooting

The command that installs binaries is mach artifact install. Start by understanding what happens when you run

./mach artifact install --verbose

See the troubleshooting section of my older blog post for more. As a last resort, the Firefox for Android MDN documentation may be helpful.

Conclusion

Thanks to Gregory Szorc (@indygreg) and Mike Hommey for reviewing this work. Many thanks to Mark Finkle (@mfinkle) for providing paid time for me to pursue this line of work and to the entire Firefox for Android team for being willing guinea pigs.

There’s a huge amount of work to be done here, and I’ve tried to include Bugzilla ticket links so that interested folks can contribute or just follow along. Dan Minor will be picking up some of this artifact build work in the first quarter of 2016.

Mozilla is always making things better for the front-end teams and our valuable contributors! Get involved with code contribution at Mozilla!

Discussion is best conducted on the dev-builds mailing list and I’m nalexander on irc.mozilla.org/#developers and @ncalexander on Twitter.

Changes

  • Thu 31 December 2015: Initial version.

Notes

http://www.ncalexander.net/blog/2015/12/31/firefox-artifact-builds-for-mac/


Support.Mozilla.Org: What’s up with SUMO – End of Year Special!

Thursday, December 31, 2015, 22:11

Hello, SUMO Nation!

This is it, the final bell, the curtain going down… for 2015! Let’s take a look at what took place and peek beyond the curtain hiding 2016 ;-). Grab a glass of your favourite drink and read along.

It has been quite a ride, and we have seen a fair share of great and not-so-great things happen on and around the Web. Thank you for sticking with us through thick & thin! Your continuous presence and activity is what keeps everyone in the community going and people around the world happy with their Mozilla-flavoured open source experience. The Web needs you!

Contributors of the year

  • Every single one of you. Just like last year :-) We salute you!

SUMO Community Meetings

  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Developers / Kitsune

Community / Events




  • We had the great honour and privilege to meet some of you in person during several events, big and small, across the globe. We hope to see more of you in 2016!
  • We saw several communities around the world organizing great events that highlighted the great work done by you under SUMO’s banner. Whether in India, Taiwan, Ivory Coast, Bangladesh or Bulgaria, you made sure that the inspirational tale of SUMO was heard and received. We are looking forward to more events organized by you!
  • Ongoing reminder: if you think you can benefit from getting a second-hand device to help you with contributing to SUMO, you know where to find us.

Support Forum / Social Networks

  • Thousands of users received your patient, understanding, and kind help. You did not falter despite many launches, last-minute changes and unexpected news. You are the helpful voices at the heart of SUMO – thank you!
  • We could write only about Army of Awesome on Twitter… But that wouldn’t be the full picture :-) Many of you have gone above and beyond, helping confused or angry users on Facebook, IRC, and in all the other places that the denizens of the Web gather and mingle. You bring help where nobody expects it – thank you!

Knowledge Base / Localization

What’s your story from 2015?

The world of Mozilla and SUMO spreads far and wide, way beyond the pages of this blog – chances are you have SUMO stories that you’d like to share with us – please do so, in the comments below this post – we’re looking forward to hearing from you!

Here’s to a great 2016! May it bring you many happy moments – and we hope some of them will be connected with Mozilla and SUMO :-) Take care, friends!

https://blog.mozilla.org/sumo/2015/12/31/whats-up-with-sumo-end-of-year-special/


Rob Wood: Posting to Treeherder from Jenkins

Thursday, December 31, 2015, 22:01

Introduction

The Firefox OS post-commit Raptor performance automated tests currently run on real FxOS remote devices, driven by Jenkins jobs. The performance results are published on the raptor dashboard. In order to add more visibility for these tests, and gaia performance testing in general, we decided to also post raptor performance test data on the Mozilla treeherder dashboard.

If the raptor automation were running inside of taskcluster, posting to treeherder would be taken care of easily via the actual taskcluster task graph. However, when running automation on jenkins, posting data to treeherder takes a bit of extra work. This blog post will tell you how to do it!

For the purpose of this blog I will summarize the steps I took, but I strongly encourage you to refer to the treeherder docs if you run into problems or just want more detailed information. Thanks to the treeherder team for the help they gave me along the way.

Prerequisites

It is assumed that you already have a linux development environment set up (I am running Ubuntu 14.04 x64). Posting to treeherder can be done via a treeherder python client or a node.js client; this blog post only covers the python client. It is also assumed that your development environment has python 2.7.9+ installed (2.7.9+ is required for authentication).

Virtual Box and Vagrant are also required for the development environment. If you don’t have them installed, please refer to the Virtual Box downloads site and Vagrant Docs on how to install them.

Setup the Development Environment

In order to develop and test your submission code to post data to treeherder, you need to setup the development environment.

Clone the Repository

git clone https://github.com/mozilla/treeherder.git

Install Treeherder Client

pip install treeherder-client

You will also need these supporting packages:

pip install mozinfo boto

Start Vagrant

Instead of posting to the live treeherder site during your development, it’s best to run your own local treeherder instance. To do that, you need to start vagrant. First add the following IP entry to your /etc/hosts file:

192.168.33.10 local.treeherder.mozilla.org

Then from  your local treeherder github repository directory, start it up:

~/treeherder$ vagrant up

Wait for vagrant to boot up, it can take several minutes. Watch for errors; I had a couple of errors on first time startup and had to rebuild my local virtualbox (sudo /etc/init.d/vboxdrv setup) and install this package:

sudo apt-get install nfs-kernel-server

Start Local Treeherder Instance

Now that vagrant is running we can startup our local treeherder instance! Open a terminal and from your treeherder repository folder, SSH into your vagrant:

~/treeherder$ vagrant ssh

Inside the vagrant vm, startup treeherder:

vagrant ~/treeherder $ ./bin/run_gunicorn

Wait for that puppy to start, and then in Firefox go to your local instance URL:

http://local.treeherder.mozilla.org

You’ll see your local treeherder instance dashboard, however you’ll notice there’s no data! In order to receive real live treeherder data on your local test instance, you need to ‘ingest’ the data.

Ingest Live Data

In another terminal session ssh into vagrant and start the worker, as follows:

~/treeherder$ vagrant ssh

vagrant ~/treeherder $ celery -A treeherder worker -B --concurrency 5

Now if you wait a minute and then refresh your local treeherder instance in Firefox, you’ll see some live data magically appear!

Local Treeherder Credentials

In order to test posting data to treeherder you need credentials, even if just posting to your local treeherder instance. To generate credentials for your local treeherder instance, there’s a command to run inside a vagrant ssh session, as follows (replace ‘test-client-id’ with your desired test client id):

~/treeherder$ vagrant ssh

vagrant ~/treeherder $ ./manage.py create_credentials test-client-id treeherder@mozilla.com "Local instance test credentials"

The generated credentials will be displayed on the console. Be sure to record your client-id and resulting secret somewhere.

If you had any issues with setting up your development environment, I encourage you to review the setup info in the treeherder docs for further details.

Develop and Test Submission Code

Now that you have your development environment set up and your local treeherder instance up and running, you’re ready to start developing your submission code. The submission code will ultimately run as part of your jenkins build.

Basically you need to write code that builds treeherder job data and submits it, using the treeherder client.
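
To give you an idea of the overall shape before we get into the raptor-specific details, here is a minimal sketch of such a submission script using the python treeherder client as it stood at the time of writing (TreeherderClient and TreeherderJobCollection from thclient). The environment variable names, the placeholder revision hash and the hard-coded values are my own illustrations, not the actual raptor code, so adapt them to your setup.

# Minimal sketch of a treeherder submission using the thclient package.
# Placeholder values are illustrative only; the real raptor code sets more fields.
import os
import time
import uuid

from thclient import TreeherderClient, TreeherderJobCollection


def build_job(collection, revision_hash):
    # Create an empty job and fill in the fields described in the
    # "Treeherder Job Data" section below.
    job = collection.get_job()
    job.add_revision_hash(revision_hash)       # gecko revision the tests ran against
    job.add_project('b2g-inbound')             # repository / tree to post to
    job.add_job_guid(str(uuid.uuid4()))        # unique id for this submission
    job.add_product_name('Raptor')
    job.add_state('completed')                 # or 'running' (see the sections below)
    job.add_result('success')
    job.add_submit_timestamp(int(time.time()))
    return job


def submit(revision_hash):
    collection = TreeherderJobCollection()
    collection.add(build_job(collection, revision_hash))

    # Credentials generated with manage.py create_credentials; kept out of the
    # source by reading them from environment variables (my choice of names).
    client = TreeherderClient(protocol='http',
                              host='local.treeherder.mozilla.org',
                              client_id=os.environ['TREEHERDER_CLIENT_ID'],
                              secret=os.environ['TREEHERDER_SECRET'])
    client.post_collection('b2g-inbound', collection)


if __name__ == '__main__':
    submit('d0a53e4a26ce...')  # placeholder: the real code looks the revision hash up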

Submission Code Example

Henrik on the Engineering Productivity team pointed me towards the treeherder submission code from the Mozmill-ci automation framework. I grabbed the submission code from there as a start, created my own python project, and modified the code as required to meet the needs of raptor. If you like, clone or fork my Raptor-Post Github repo to use as a starting point, and modify it accordingly for your project.

The python script that I run to submit to treeherder is here. I’ll explain the code a little bit further now.

Treeherder Job Data

Each treeherder data set contains job data, which includes:

Product name: As it sounds, we use ‘Raptor’

Repository: The Mozilla repo that you ran your tests against, and the tree that you want your treeherder results to be published to (i.e. ‘b2g-inbound’)

Platform: This is listed next to your treeherder results group (i.e. ‘B2G Raptor opt’)

Group name: Summary name for your test data (i.e. ‘Raptor Performance’). This appears when you mouse over your group symbol on treeherder.

Group symbol: The name of the group to hold all of your job results. For example for the Raptor coldlaunch test we use a group symbol ‘Coldlaunch’. The actual jobs on treeherder will appear inside brackets following the group symbol.

Job name: Name for the individual treeherder job. For Raptor, we use the name of the gaia app that the performance test was run against (i.e. ‘clock’). This name appears when you mouse over the job symbol on treeherder.

Job symbol: The job code on treeherder for the actual job result. This is the symbol that will turn colour based on the job results, and when moused over the job state and result will appear. For Raptor we use a three-letter code for the gaia app that was tested (i.e. ‘clk’).

In the Raptor treeherder submission code, the job details are stored in a configuration file here.

Revision hash: This is the gecko version hash, for the version of gecko that your tests were run against. This is how treeherder finds your dataset to post to – your submission code will add a group and job to the already existing treeherder dataset for the specified gecko revision. In my submission code, this is where the revision hash is retrieved.

In the Raptor treeherder submission code, this is where it creates the initial treeherder job dataset.
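
To make those descriptions concrete, here is a sketch of how the fields above translate into the add_* helpers on a treeherder job object. The values mirror the raptor examples given above, but the (os, platform, architecture) tuple and the option collection are assumptions for illustration, not copied from the real raptor configuration.

def add_raptor_style_job_details(job, revision_hash):
    # Sketch only: values mirror the descriptions above, not the real raptor config.
    job.add_product_name('Raptor')             # product name
    job.add_project('b2g-inbound')             # repository / tree

    # Platform (shows up next to the results group, e.g. 'B2G Raptor opt');
    # the (os, platform, architecture) tuple below is an assumption.
    job.add_build_info('b2g', 'b2g-device-image', 'x86')
    job.add_machine_info('b2g', 'b2g-device-image', 'x86')
    job.add_option_collection({'opt': True})

    # Group: one per test type, e.g. the coldlaunch test
    job.add_group_name('Raptor Performance')
    job.add_group_symbol('Coldlaunch')

    # Job: one per gaia app measured in this run
    job.add_job_name('clock')                  # shown on mouse-over
    job.add_job_symbol('clk')                  # the clickable symbol inside the group

    # Revision hash: ties the job to the gecko push the tests ran against
    job.add_revision_hash(revision_hash)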

Treeherder Job State

In order to have your jenkins job appear on treeherder like other jobs do, with a status of ‘running’ followed by a status of ‘completed’, you need to post the treeherder data set once when you are starting your build on jenkins, and then post it again at the end of your jenkins job.

For Raptor, just before starting the performance test I submit the treeherder data set, with:

job.add_state('running')

Then after the performance test has finished then I post the treeherder data set again, this time with:

job.add_state('completed')

Treeherder Job Result

For Raptor, at the start the treeherder job has a status of ‘running’. After the performance test is finished, I submit the dataset to treeherder again, this time with one of three results: busted, passed, or failed. The result is specified by adding the following to your treeherder job dataset (example for a passed job):

job.add_result('SUCCESS')

If the test didn’t finish for some reason (e.g. the test itself was incomplete, timed out, etc.) then I post the ‘completed’ state with a result of ‘BUSTED’.

If the Raptor performance test completed successfully, I have code that checks the results. If the result is deemed a pass, then I post the ‘completed’ state with a result of ‘SUCCESS’.

If the Raptor performance test completed successfully, but the results code determines that the result itself indicates a performance regression, then I post the ‘completed’ state with a result of ‘TESTFAILED’.

This is the part of the code that checks the Raptor results and determines the treeherder job result.

In my submission code, this is where the actual treeherder job status is set. For more detailed information about building the treeherder job dataset, see the Job Collections section on the treeherder docs.
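
Putting the state and result pieces together, the decision logic can be as small as the sketch below. The test_finished and regression flags stand in for whatever your own results-checking code produces; also note that treeherder’s result strings are conventionally lower-case ('success', 'testfailed', 'busted'), even though I write them in upper case in the prose above.

def set_job_status(job, build_state, test_finished=True, regression=False):
    # Sketch: map the jenkins build outcome onto treeherder state/result.
    # 'test_finished' and 'regression' stand in for the real results checks.
    if build_state == 'running':
        # First submission, sent just before the performance test starts.
        job.add_state('running')
        return

    # Second submission, sent once the jenkins build has finished.
    job.add_state('completed')
    if not test_finished:
        job.add_result('busted')        # the test never completed (timeout, harness error, ...)
    elif regression:
        job.add_result('testfailed')    # the test ran, but the results show a regression
    else:
        job.add_result('success')       # the test ran and the results look good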

Test on Local Instance

Once your submission code is ready, test it out first by submitting to your local treeherder instance. To do that, just specify the treeherder URL to be:

http://local.treeherder.mozilla.org/

Use your local treeherder credentials that you previously generated above. For example, this is how the command line looks for my submission code, to submit to my local treeherder instance:

(raptor-th)rwood@ubuntu:~/raptor-post$ ./submit-to-treeherder.py --repository=b2g-inbound --treeherder-url=http://local.treeherder.mozilla.org/ --treeherder-client-id=xxx --treeherder-secret=xxx --test-type=cold-launch --app-name='clock' --app_symbol='clock' --build-state=completed

Debugging

When you run into submission errors while testing your code, there are a couple of log files (inside the vagrant VM) that might give you more info to help debug:

/var/log/gunicorn/treeherder_error.log

vagrant /var/log/treeherder $ ls
treeherder.log  treeherder.log.1  treeherder.log.2

Or, in the vagrant SSH session, start up the treeherder instance with errors filtered:

vagrant ~/treeherder $ ./bin/run_gunicorn | grep error

Request Credentials for Staging and Production

In order to submit data to live treeherder staging and production, you need credentials. To find out how to request credentials, see Managing API Credentials in the treeherder docs.

Test Posting to Treeherder Staging

Now at this point your submission code should be working great and posting successfully to your local treeherder instance. The next step, now that you have your treeherder credentials, is to test posting to live treeherder staging. Simply run your submission code locally as you did before, but change your submission URL to point to treeherder staging:

https://treeherder.allizom.org/

Use the credentials for live treeherder staging, as you requested above.

Add Submission Code to Jenkins

This is a given; in Jenkins just add an execute shell step to clone your github repo where your submission code lives. Then add a managed shell script (or execute shell step) that uses your submission code the same way you have tested it locally. Ensure you are posting to the treeherder staging URL first (https://treeherder.allizom.org/), NOT production.

You may not want to fail your entire jenkins build if submitting to treeherder fails for some reason. For Raptor, if the performance tests completed successfully but submitting to treeherder fails, I don’t want the jenkins build to be marked as failed – after all, the performance tests themselves passed. Therefore, in my managed script (or in your execute shell step), finish with this line:

exit 0

Then your jenkins submission code step will always return success and won’t fail the jenkins build.
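
If you would rather handle this inside the python script than in the shell step, an equivalent approach is to catch submission errors, log them for the jenkins console, and still exit with zero. This is only a sketch (submit_to_treeherder is a placeholder for your actual submission routine), not how my script currently does it:

import logging
import sys


def main():
    try:
        submit_to_treeherder()   # placeholder for your actual submission routine
    except Exception:
        # Log the failure so it shows up in the jenkins console, but don't fail the build.
        logging.exception('Submitting to treeherder failed')
    sys.exit(0)


if __name__ == '__main__':
    main()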

Example Jenkins Submission Code

The following is the submission code that I run in a jenkins managed script. It receives two parameters: the build state (i.e. ‘running’ or ‘completed’) and the name of the gaia app that the performance test was run against. Note that the secret is stored as a jenkins masked password.

#!/bin/bash -x

RAPTOR_BUILD_STATE=$1
RAPTOR_TREEHERDER_APP_NAME=$2

cd raptor-post
./submit-to-treeherder.py \
--repository b2g-inbound \
--build-state ${RAPTOR_BUILD_STATE} \
--treeherder-url https://treeherder.allizom.org/ \
--treeherder-client-id xxx \
--treeherder-secret ${TREEHERDER_SECRET} \
--test-type cold-launch \
--app-name ${RAPTOR_TREEHERDER_APP_NAME}

exit 0

Test from Jenkins to Treeherder Staging

Let your jenkins builds run (or retrigger builds) and verify the results are being posted to live treeherder staging. They will look the same as they did when you tested posting to treeherder staging from your local machine. If there are errors you may need to fix something in your jenkins execute shell or managed script.

Server Clock Time

One issue that I ran into: submitting to my local treeherder instance was working great, however submitting from jenkins to live treeherder staging was failing with the following error:

13:50:05 requests.exceptions.HTTPError: 403 Client Error: FORBIDDEN for url

It turns out that the server time on the jenkins node was off by 13 minutes. If the jenkins node time differs from the treeherder server time by more than 60 seconds, authentication will fail, so be sure that your jenkins server time is correct.
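
If you suspect clock skew, one quick way to check it without shell access to the treeherder machines is to compare your local clock against the Date header of an HTTP response from treeherder. This is just a convenience sketch using the requests library, not part of the raptor submission code:

import time
import email.utils
import requests


def treeherder_clock_skew(url='https://treeherder.allizom.org/'):
    # Compare local time with the HTTP Date header returned by treeherder.
    response = requests.head(url)
    server_time = email.utils.mktime_tz(
        email.utils.parsedate_tz(response.headers['Date']))
    return time.time() - server_time


if __name__ == '__main__':
    skew = treeherder_clock_skew()
    print('Clock skew vs treeherder: %.1f seconds' % skew)
    if abs(skew) > 60:
        print('WARNING: skew is over 60 seconds; authentication will likely fail.')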

Switch to Treeherder Production

Be sure to test posting from jenkins to treeherder staging for a good while before going live on production. You may want to ask someone in your team for a final review of your submission code, and to approve how the dataset looks on treeherder. Once you are happy with it, and want to switch to production, all you do is update your jenkins code to use the treeherder production URL instead of staging:

https://treeherder.mozilla.org/

I hope that you found this blog post useful. If you have any questions about my submission code feel free to contact me. Happy submitting!

http://robwood.zone/posting-to-treeherder-from-jenkins/


Benjamin Kerensa: Glucosio in 2016

Thursday, 31 December 2015, 19:00

Happy New Year, friends!

Our core team and contributors have much to be proud of as we reflect on the work we did in the past few months. While there are many things to be proud of, I think one of the biggest accomplishments is that we built an open source project and released a product to Google Play in under four months. We then went on to do four more releases and are growing our user base internationally on a daily basis.

Glucosio for Android

We have had an astounding amount of coverage from the media about the vision we have for Glucosio and how we can use open source software not only to help people with diabetes improve their outcomes but also to further research through anonymous crowdsourcing.

I’m proud of the work our core team has put in over the past few months and excited about what the new year has in store for us as a project. One big change next year is that we will formally be under the leadership of a non-profit foundation (the Glucosio Foundation), which should help us be more organized and also give us the financial and legal structure we need to grow as a project and deliver on our vision.

I’ve been able to meet and talk with third parties like Dexcom, the Nightscout Foundation and many others, including individual developers, researchers and other foundations, who are very interested in the work we are pioneering and in partnering, supporting or collaborating with Glucosio.

One exciting thing we hope to kick off in the New Year is Diabetes Hack Days, where organizers around the world can host hack days in their communities to get people to come together to hack on software and hardware projects that will spur new innovation and creativity around diabetes technology. Most importantly, though, we are very excited to launch our API to researchers next year so they can begin extracting anonymized data from our platform to help further their diabetes research.

We also look forward to releasing Glucosio for iOS in the first quarter of 2016, which has had a lot of interest and has been under development for a couple of months now.

In closing, we would like to invite developers, translators, and anyone else to get in touch, get connected with our project, and start contributing to our vision of amazing open source software to help people with diabetes. We’d also ask you to consider a donation to the project, which will help us launch our iOS app in Q1 of 2016, produce features more rapidly by offering bounties via BountySource, and grow into a more mature open source project.

http://feedproxy.google.com/~r/BenjaminKerensaDotComMozilla/~3/twRgYXllIzU/glucosio-in-2016


QMO: Firefox 44.0 Beta 6 Testday, January 8th

Thursday, 31 December 2015, 12:18

Hello Mozillians,

We are happy to announce that on Friday, January 8th, we are organizing the Firefox 44.0 Beta 6 Testday. We will be focusing our testing on the new Push Notifications feature. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

https://quality.mozilla.org/2015/12/firefox-44-0-beta-6-testday-january-8th/


