
Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/


You can add any RSS source (including a LiveJournal journal) to your friends feed on the syndication page.

Original source: http://planet.mozilla.org/.
This diary is generated from the public RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, use the contact information page.


Asa Dotzler: Foxtrot Update Jan 2015

Wednesday, January 7, 2015, 00:46

Hey folks. Welcome to 2015!

About half of the people participating in the Foxtrot program have still not received Flame phones. I’m sorry about that. I’m hard at work on this but it depends on getting the right builds on the phones. We aren’t going to send our Foxtrot testers new phones with builds that brick the device randomly and we don’t yet have the process set up to deliver only functional updates.

We have delivered a couple thousand Flames to our very brave “unstable nightly” testers and developers, and thousands of others have purchased Flames to develop against, but we’re still behind on getting our “stable nightly” Foxtrot builds and update channel ready. As soon as those are ready, we’ll flash the remaining 100 or so Foxtrot phones and send them out to contributors.

I don’t have an ETA, but it’s a high priority.

(If you have not received an email update on Foxtrot, check your spam filter. All Foxtrot participants were automatically signed up for a mailing list for just this kind of update and information. Unfortunately, it appears that some people have removed themselves from the mailing list and others are not seeing the mails because they are ending up in a spam filter. If you removed yourself from the list, I dropped your entry from the Foxtrot program with the assumption that you were no longer interested. If you remember removing yourself from the mailing list but didn’t want to be removed from the program, you can email me and I’ll get you added back. The list is not optional though. It’s how we’ll contact you for program feedback.)

Also, it is very likely that we’ll be expanding the program to several hundred more people over the coming weeks. Stay tuned here for updates on how you can join.

http://asadotzler.com/2015/01/06/foxtrot-update-jan-2015/


Code Simplicity: How to Handle Code Complexity in a Software Company

Tuesday, January 6, 2015, 22:00

Here’s an obvious statement that has some subtle consequences:

Only an individual programmer can resolve code complexity.

That is, resolving code complexity requires the attention of an individual person on that code. They can certainly use appropriate tools to make the task easier, but ultimately it’s the application of human intelligence, attention, and work that simplifies code.

So what? Why does this matter? Well, to be clearer:

Resolving code complexity usually requires detailed work at the level of the individual contributor.

If a manager just says “simplify the code!” and leaves it at that, usually nothing happens, because (a) they’re not being specific enough, (b) they don’t necessarily have the knowledge required about each individual piece of code in order to be that specific, and (c) part of understanding the problem is actually going through the process of solving it, and the manager isn’t the person writing the solution.

The higher a manager’s level in the company, the more true this is. When a CTO, Vice President, or Engineering Director gives an instruction like “improve code quality” but doesn’t get much more specific than that, what tends to happen is that a lot of motion occurs in the company but the codebase doesn’t significantly improve.

It’s very tempting, if you’re a software engineering manager, to propose broad, sweeping solutions to problems that affect large areas. The problem with that approach to code complexity is that the problem is usually composed of many different small projects that require detailed work from individual programmers. So, if you try to handle everything with the same broad solution, that solution won’t fit most of the situations that need to be handled. Your attempt at a broad solution will actually backfire, with software engineers feeling like they did a lot of work but didn’t actually produce a maintainable, simple codebase. (This is a common pattern in software management, and it contributes to the mistaken belief that code complexity is inevitable and nothing can be done about it.)

So what can you do as a manager, if you have a complex codebase and want to resolve it? Well, the trick is to get the data from the individual contributors and then work with them to help them resolve the issues. The sequence goes roughly like this:

  1. Ask each member of your team to write down a list of what frustrates them about the code. The symptoms of code complexity are things like emotional reactions to code, confusions about code, feeling like a piece will break if you touch it, difficulties optimizing, etc. So you want the answers to questions like, “Is there a part of the system that makes you nervous when you modify it?” or “Is there some part of the codebase that frustrates you to work with?”

    Each individual software engineer should write their own list. I wouldn’t recommend implementing some system for collecting the lists—just have people write down the issues for themselves in whatever way is easiest for them. Give them a few days to write this list; they might think of other things over time.

    The list doesn’t just have to be about your own codebase, but can be about any code that the developer has to work with or use.

    You’re looking for symptoms at this point, not causes. Developers can be as general or as specific as they want, for this list.

  2. Call a meeting with your team and have each person bring their list and a computer that they can use to access the codebase. The ideal size for a team meeting like this is about six or seven people, so you might want to break things down into sub-teams.

    In this meeting you want to go over the lists and get the name of a specific directory, file, class, method, or block of code to associate with each symptom. Even if somebody says something like, “The whole codebase has no unit tests,” then you might say, “Tell me about a specific time that that affected you,” and use the response to that to narrow down what files it’s most important to write unit tests for right away. You also want to be sure that you’re really getting a description of the problem, which might be something more like “It’s difficult to refactor the codebase because I don’t know if I’m breaking other people’s modules.” Then unit tests might be the solution, but you first want to narrow down specifically where the problem lies, as much as possible. (It’s true that almost all code should be unit tested, but if you don’t have any unit tests, you’ll need to start off with some doable task on the subject.)

    In general, the idea here is that only code can actually be fixed, so you have to know what piece of code is the problem. It might be true that there’s a broad problem, but that problem can be broken down into specific problems with specific pieces of code that are affected, one by one.

  3. Using the information from the meeting, file a bug describing the problem (not the solution, just the problem!) for each directory, file, class, etc. that was named. A bug could be as simple as “FrobberFactory is hard to understand.”

    If a solution was suggested during the meeting, you can note that in the bug, but the bug itself should primarily be about the problem.

  4. Now it’s time to prioritize. The first thing to do is to look at which issues affect the largest number of developers the most severely. Those are high priority issues. Usually this part of prioritization is done by somebody who has a broad view over developers in the team or company. Often, this is a manager.

    That said, sometimes issues have an order that they should be resolved in that is not directly related to their severity. For example, Issue X has to be resolved before Issue Y can be resolved, or resolving Issue A would make resolving Issue B easier. This means that Issue A and Issue X should be fixed first even if they’re not as severe as the issues that they block. Often, there’s a chain of issues like this and the trick is to find the issue at the bottom of the stack. Handling this part of prioritization incorrectly is one of the most common and major mistakes in software design. It may seem like a minor detail, but in fact it is critical to the success of efforts to resolve complexity. The essence of good software design in all situations is taking the right actions in the right sequence. Forcing developers to tackle issues out of sequence (without regard for which problems underlie which other problems) will cause code complexity.

    This part of prioritization is a technical task that is usually best done by the technical lead of the team. Sometimes this is a manager, but other times it’s a senior software engineer.

    Sometimes you don’t really know which issue to tackle first until you’re doing development on one piece of code and you discover that it would be easier to fix a different piece of code first. With that said, if you can determine the ordering up front, it’s good to do so. But if you find that you’d have to get into actually figuring out solutions in order to determine the ordering, just skip it for now.

    Whether you do it up front or during development, it’s important that individual programmers do realize when there is an underlying task to tackle before the one they have been assigned. They must be empowered to switch from their current task to the one that actually blocks them. There is a limit to this (for example, rewriting the whole system into another language just to fix one file is not a good use of time) but generally, “finding the issue at the bottom of the stack” is one of the most important tasks a developer has when doing these sorts of cleanups.

  5. Now you assign each bug to an individual contributor. This is a pretty standard managerial process, and while it definitely involves some detailed work and communication, I would imagine that most software engineering managers are already familiar with how to do it.

    One tricky piece here is that some of the bugs might be about code that isn’t maintained by your team. In that case you’ll have to work appropriately through the organization to get the appropriate team to take responsibility for the issue. It helps to have buy-in from a manager that you have in common with the other team, higher up the chain, here.

    In some organizations, if the other team’s problem is not too complex or detailed, it might also be possible for your team to just make the changes themselves. This is a judgment call that you can make based on what you think is best for overall productivity.

  6. Now that you have all of these bugs filed, you have to figure out when to address them. Generally, the right thing to do is to make sure that developers regularly fix some of the code quality issues that you filed along with their feature work.

    If your team makes plans for a period of time like a quarter or six weeks, you should include some of the code cleanups in every plan. The best way to do this is to have developers first do cleanups that would make their specific feature work easier, and then have them do that feature work. Usually this doesn’t even slow down their feature work overall. (That is, if this is done correctly, developers can usually accomplish the same amount of feature work in a quarter that they could even if they weren’t also doing code cleanups, providing evidence that the code cleanups are already improving productivity.)

    Don’t stop normal feature development entirely to just work on code quality. Instead, make sure that enough code quality work is being done continuously that the codebase’s quality is always improving overall rather than getting worse over time.

If you do those things, that should get you well on the road to an actually-improving codebase. There’s actually quite a bit to know about this process in general—perhaps enough for another entire book. However, the above plus some common sense and experience should be enough to make major improvements in the quality of your codebase, and perhaps even improve your life as a software engineer or manager, too.

-Max

P.S. If you do find yourself wanting more help on it, I’d be happy to come speak at your company. Just let me know.

http://www.codesimplicity.com/post/how-to-handle-code-complexity/?utm_source=rss&utm_medium=rss&utm_campaign=how-to-handle-code-complexity


Doug Belshaw: Radical participation: a smörgåsbord

Tuesday, January 6, 2015, 20:03

Today and tomorrow I’m at Durham University’s eLearning conference. I’m talking on Radical Participation – inspired, in part, by Mark Surman’s presentation at the Mozilla coincidental workweek last month.

My slides should appear below. If not, click here!

I was very impressed by Abbi Flint’s keynote going into the detail of her co-authored Higher Education Academy report entitled Engagement Through Partnership: students as partners in learning and teaching in higher education. In fact, I had to alter what I was going to say as she covered my critique! Marvellous.

After Abbi’s keynote I was involved in a panel session. I didn’t stick too closely to my notes, instead giving more of a preview to what I’m talking about in my keynote tomorrow. As ever, I’m genuinely looking forward to some hard questions!

http://dougbelshaw.com/blog/2015/01/06/radical-participation/


Armen Zambrano: Tooltool fetching can now use LDAP credentials from a file

Tuesday, January 6, 2015, 19:45
You can now fetch tooltool files by using an authentication file.
All you have to do is append "--authentication-file file" to your tooltool fetching command.

This is important if you want to use automation to fetch files from tooltool on your behalf.
This was needed to allow Android test jobs to run locally, since we need to download tooltool files for them.
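For instance, a complete fetch might look something like this (a sketch only; the manifest name and the token file's path are illustrative assumptions, not from the original post):

python tooltool.py fetch -m releng.manifest --authentication-file ~/.tooltool-token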


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/PKmxq0hcE-0/tooltool-fetching-can-now-use-ldap.html


Jorge Villalobos: Interview with Extension.Zone

Tuesday, January 6, 2015, 19:39

I was recently approached by Extension.Zone for an interview. I was pleasantly surprised to see a new website dedicated to browser-agnostic reporting of add-ons. Then I was just plain surprised that .zone is now a thing.

Anyway, the interview is up here. There are some interesting questions about what makes the Firefox add-on ecosystem different than others, and what I think is an under-explored area in add-on development.

http://xulforge.com/blog/2015/01/interview-with-extension-zone/


Marco Zehe: Apple are losing their edge also in accessibility quality

Tuesday, January 6, 2015, 16:30

Over the past couple of days, a number of well-known members in the Apple community raised their voices in concern about Apple’s general decline in software quality. Marco Arment (former “Mr. Instapaper” and now “Mr. Overcast”) started out by saying that Apple has lost the functional high ground. John Gruber of Daring Fireball slightly disagrees, but says that Apple have created a perception that “Other people’s stuff doesn’t work, but Apple’s stuff doesn’t work, either”. And finally, Dr. Drang looks at the power of leverage in this context. And now, well, here is my take on the subject.

Some long-standing readers of this blog may recall this post I wrote in June of 2009 about my first experience using an iPhone. It was the first time I interacted with a touch screen device that was accessible to me as a blind user.

For several years to come, Apple would lead in terms of including accessibility features in both its mobile and desktop operating systems. Zoom had already been there when VoiceOver was introduced in iOS 3.0, and what followed were features for people with varying disabilities and special needs. Assistive Touch, which allows gestures to be performed differently, mono audio and integration with hearing aids, subtitling, audio description and other media accessibility enhancements, Guided Access for people with attention deficiencies, Siri, and most recently, Braille input directly on the touch screen in various languages and grades. Especially on iOS, VoiceOver and the other accessibility features received updates every year with every major release, and new features were added.

In the beginning, especially in Snow Leopard and Lion, Apple also did the same for OS X. It gradually added many of the features it had added to iOS to OS X as well, to keep them in sync. But ever since Mountain Lion, VoiceOver has not seen much improvement. In fact, the lack of newly introduced features could lead one to the perception that Apple thinks VoiceOver is done, and no new features need to be added.

But, and this is not the first time I have said this on this blog, the quality of existing features is steadily declining, too. In fact, with the release of both OS X 10.10 “Yosemite” and iOS 8, the quality of many accessibility features has reached a new all-time low. AppleVis has a great summary of current problems in iOS 8. But let me give you two examples.

The first problem is so obvious and easily reproducible that it is hard to imagine Apple’s quality assurance engineers didn’t catch it: on the iPhone in Safari, going back from one page to the previous one with the Back button. When VoiceOver is running, I haven’t found a single page where this simple action did not trigger a freeze in Safari and VoiceOver. This was the case in early betas of iOS 8, and it is still not fixed in the 8.1.2 release several months later.

The second example concerns using Safari (again) with VoiceOver, but this time on the iPad. Using Safari itself, or any application that uses one of the two WebView components, I am reliably able to trigger a full restart of the iPad at least twice a day, most days even more often. That causes all apps to quit, sometimes without being able to save their stuff; it interrupts work, and it leaves the iPad in such a semi-unstable state that it is better to fully shut it down and restart it fresh.

“Wait”, you might say, “this sounds like a problem from iOS 7 days, and wasn’t it fixed?” Yes, I would reply, it was, but it returned in full force in iOS 8. But mostly on the iPad. I think I’ve only seen one or two restarts on my iPhone since iOS 8 came out.

The first of these two examples is such low-hanging fruit that I, if I was working at Apple, would be deeply ashamed that this is still around. The second one is harder, but not so hard that an engineer sitting down for a day and using the device with VoiceOver enabled wouldn’t run into it.

And now back to Yosemite. I again concentrate on Safari + VoiceOver, since this is where I spend a lot of my time. Support has regressed so badly, especially on dynamic pages, that it is barely possible to use Facebook on Yosemite with VoiceOver. VoiceOver skips over whole stories, loses focus, and does all sorts of other funky stuff. And no, not even the newest public beta of 10.10.2, which is supposed to contain VoiceOver fixes, addresses these problems. Moreover, editing in any form field on the web is so slow, and so prone to double-speaking, that it is not really possible to do productive work there. And if you have a braille display connected, expect it to drop out every few seconds when moving the cursor. The sounds VoiceOver makes are the equivalent of plugging and unplugging a USB braille display every 3 to 4 seconds.

All of these problems have been reported to Apple, some by multiple users. They were tweeted about publicly, and now I am reiterating them here to show my support for Marco, John, and others who rightly assert that Apple has a real quality problem on their hands, one which higher management seems to be quite thick-skinned about. Blinded by their own brilliant marketing or something? ;)

Apple does have a fantastic accessibility story. No other operating system I know has so many features for such a big variety of people built in (speaking mostly for iOS now). But they’re on the verge of badly betraying the trust that many people with disabilities put in them, by delivering such poor-quality updates that it is virtually impossible to take advantage of these features in full force. Especially when such basic functionality as I describe in Safari, and as AppleVis summarize on their blog, gets in the way of use every minute of every day now. And Apple really need to be careful that others may catch up sooner rather than later. On the web, the most robust accessibility is already being delivered by a different desktop browser/screen reader combination on a different operating system. As for mobile: Android is the lesser of the competition, even in its latest update, in my opinion. But Microsoft’s foundation is really solid in Windows Phone 8.1. They just need to execute on it much better, and they could really kick ass and become a viable alternative to Apple on mobile.

So here is my appeal to Tim Cook, CEO of Apple: Put action behind these words again! Go to these extraordinary lengths you speak of by not just cranking out new features that are half-baked, but make sure your engineers work on the over-all quality in a way that does not make your most faithful users feel like they’re being let down by your company! Because that is, exactly, how it feels. This trend started more strongly in iOS 7, and even worsened in iOS 8. And it has been with OS X even longer, starting in Mountain Lion and worsened ever since. Please, show us, our friends and family who started using your products because of our recommendations, and those of us who took a leap of faith early on and put our trust, our productivity, our daily lives in your products, that these are not just empty words and that that award you received actually means something beyond the minutes you gave that speech!

Sincerely,

a caring, but deeply concerned, user

http://www.marcozehe.de/2015/01/06/apple-are-losing-their-edge-also-in-accessibility-quality/


Bogomil Shopov: Coworking spaces in Brussels for Fosdem

Tuesday, January 6, 2015, 11:22

If you are heading to BXL for Fosdem and arrive early, as I do, you may need a coworking space. Thanks to Twitter and some followers, I got two nice hints I want to share with you. Both offer a free day for open-source enthusiasts.

See you there: Beta cowork  and Transforma BXL

P.S. If you need a hotel or information about Brussels, check out my post here.

http://talkweb.eu/coworking-spaces-in-brussels-for-fosdem/


Byron Jones: happy bmo push day!

Tuesday, January 6, 2015, 09:23

the following changes have been pushed to bugzilla.mozilla.org:

  • [1027903] Please create a WebOps Request Form in Bugzilla
  • [1113630] Set window.opener to null for the URL field to prevent interaction between a remote script and the bug report
  • [696726] automatically create “watch users” for components
  • [1117246] X-Bugzilla- headers included in bugmail message-body aren’t grayed out in review/needinfo request emails, in Thunderbird, due to lack of a space after “–” separator
  • [1050232] Improve layout of guided bug entry product selection
  • [1117599] CVE ID format change: CVE-\d{4}-\d{4} becomes CVE-\d{4}-\d{4,} this year

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

https://globau.wordpress.com/2015/01/06/happy-bmo-push-day-123/


Nigel Babu: Non-unified Builds on Try

Tuesday, January 6, 2015, 07:40

Last week, I was trying to fix a non-unified build bustage with glandium’s help, and I kept failing to fix it in mozilla-inbound. I don’t usually build locally myself, so I had to push to Try. And there was no documentation on how to do it. It turns out that it needs a custom mozconfig override (thanks dbaron).

You need to add ac_add_options --disable-unified-compilation to the file build/mozconfig.common.override in one of your pushes; it’s probably easiest to do this on the tip. We updated the wiki with this small piece of documentation.
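Concretely, the override file in your push would contain just that one extra line (the comment is illustrative):

# build/mozconfig.common.override: force a non-unified build on Try
ac_add_options --disable-unified-compilation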

This is mostly a note to myself. And perhaps for contributors who might search for it.

http://nigelb.me/2015-01-06-non-unified-builds-on-try.html


Nick Alexander: We should build Firefox for Android with an external build system

Tuesday, January 6, 2015, 03:20

For some time, I have been advocating for building Fennec with a full-featured Android build system. At one time, the state of the art was Ant; now, the solution blessed by Google is Gradle [1]; both Facebook (buck) and Twitter (pants) have built and published open-source build systems; and Google (again!) has built a separate proprietary internal build system (blaze) of their own. The major lesson I take from the proliferation of these tools is:

The best organizations invest in their tooling to make their people more productive.

However, this needs to be countered with the inescapable reality that build systems, and especially Android build systems, are big investments.

As the lone Firefox for Android build peer, I do not support Mozilla making a big investment into building our own Android build system.

For the sake of argument, however, such a Mozilla-developed build system would look like a blend of custom moz.build and a recursive Make backend. The amount of new and un-tested build system code would be very high; we’d incur serious risk, because all that new code needs to support a delicate combination of features that are constantly moving; and in all likelihood we wouldn’t achieve even 10% of what the existing solutions achieve in a calendar year [2].

Rather than diving deep into a particular illustration of a feature that would be difficult to express in such a Mozilla-developed build system, let me give instead a whirlwind tour of features we could take advantage of — today — if we used an existing solution.

Caveat: I have evaluated Gradle and buck extensively. I have not evaluated pants. (Blaze is not available.) Not everything claimed will be true for all external build systems.

Wins for building Fennec

Real Android libraries, with resources and manifests.

These libraries could, at first, just be third-party libraries that we want to build against. (So far we’ve either held off using third-party libraries that reference Android resources, or modified them to be "part of" Fennec.)

In future, we could build Fennec as a set of loosely-coupled libraries. (In some ways we do: GeckoView, Stumbler, Search Activity, Background Services… we’re doing this in a limited way.) Such small libraries let us test faster and parallelize the development process.

There’s a possibility to reduce risk here, too: I claim that we can improve our existing practice of landing new features behind a build flag by developing mostly self-contained Android libraries and then including those libraries behind a build flag. This lets developers land the code in the tree, and possibly even compile and unit test on TBPL, but not actually ship in Fennec until the build flag is flipped.

Merging resources and manifests is a solved problem; let’s use an existing solution to do it.

Faster builds.

Our existing build system supports compiling our existing libraries in parallel, but it doesn’t support DEXing in parallel [3]. Moving to an existing build system that compiles and DEXes in parallel will buy us a faster build even without separating our libraries. We get big wins (which I will try to quantify in the near future) if we do a little more work and split our libraries further.

Simpler Proguard support.

We’re about to land Bug 1106593, which addresses the fact that we weren’t Proguarding everything we should have been. It adds two bespoke stages to our build. This type of change would be simpler if we were using an existing implementation (which would have certainly Done the Right Thing) and only had to modify a Proguard configuration.

Trivial MultiDex support.

Fennec is pushing up against the classes.dex size limit. Google recently added MultiDex support, which allows further run-time loaded classesN.dex files. Gradle exposes the additional build configuration with a handful of lines of code. Someday, we’ll probably need this support.
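For reference, enabling it takes roughly the following handful of lines in the Android-Gradle plugin (a sketch based on Google's multidex documentation of the time, not on our build; the support-library version is illustrative):

android {
    defaultConfig {
        // Emit classesN.dex files once the 64K method reference limit is hit.
        multiDexEnabled true
    }
}

dependencies {
    // Run-time loader for the additional DEX files on older Android versions.
    compile 'com.android.support:multidex:1.0.0'
}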

Support for shrinking Android resources

Gradle exposes a non-trivial build step for resource shrinking. We want it.

Support for building multiple App configurations.

By configurations, I mean building a standard release version of Fennec, and a debug (non-Proguarded) version of Fennec for JUnit 3 testing, and possibly even multiple additional versions (resource constrained; API limited) all with one "TBPL build job". Such build configurations reduce automation machine load and reduce the risk of build flags being orphaned.

There’s a similar, but limited, feature known as split APKs that we might want too.

Wins for contributors

Simpler build bootstrapping.

Gradle in particular has first-class support for bootstrapping itself via the Gradle wrapper. This wrapper prepares the Gradle part of the build environment automatically and reduces the time needed to get a build started.
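In practice, a contributor with a fresh checkout would only need something like the following (assuming a wrapper script is committed to the tree):

./gradlew build    # fetches the pinned Gradle version, then builds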

First class dependency management.

External build systems, in particular Gradle, integrate with Maven/Ivy repositories for consuming dependencies. That means we can specify, in the tree, what versions we currently support and the build system will do the work of fetching, configuring, and installing them. It’s a faster first build and fewer frustrating dependency chases trying to figure out what changed and how your local machine is different from the official TBPL configuration.

Better IDE integration.

Gradle integrates, mostly tightly, with IDEA IntelliJ and Google’s Android Studio. This is a big win for new contributors (just open mobile/android in your IDE to build) and a huge boon for seasoned developers who get the power of IntelliJ for their daily work.

Built in support for building and publishing Javadocs for our own use.

Some parts of the code are well-commented; others are not. Perhaps we want to document some of our trickier code flows in a public place?

Built in support for producing Android lint output.

Catches bugs! For free!

Wins for GeckoView

Existing build systems have better systems for updating only specific targets.

Right now, the moz.build based build system produces essentially all artifacts at build time. That includes test APKs, packaging the GeckoView library, an example application, etc. It’s slow! Most of the time we don’t need most of those artifacts; we really just want a fresh Fennec APK.

Built in code to build a GeckoView AAR file

This includes first class support for publishing the artifact to a Maven repository. This would eliminate the need for https://ci.mozilla.org/job/mozilla-central-geckoview/configure.

Support for producing Java source JARs and Javadoc JARs for GeckoView.

As the GeckoView library gains consumers, providing source JARs and Javadoc JARs will make consumers lives significantly easier.

Conclusion

Help me make our builds better! Discussion is best conducted on the mobile-firefox-dev mailing list and I’m nalexander on irc.mozilla.org and @ncalexander on Twitter.

Notes

[1] Truthfully, Gradle with Google’s own rapidly developing Android-Gradle plugin. But this was the case with Ant, too: Google shipped an Ant build.xml that consumers extended and customized.
[2] That is, if Mozilla even was willing to pay a single developer to work on such a build system for a whole year. As it stands, we don’t pay a single full-time equivalent to work exclusively on any part of the build system. We get a lot of glandium’s time, a little of gps’s time, some of mshal’s time, some of my time, etc, etc.
[3] To be clear, we could support DEXing in parallel and merging the results, but it’s yet another layer of complexity in our Make-based build system. Add in supporting MultiDex and producing debug and release artifacts… let’s not.

Changes

  • Mon 05 January 2015: Thanks to @rnewman for surfacing spelling and grammatical errors.

http://www.ncalexander.net/blog/2015/01/05/we-should-build-firefox-for-android-with-an-external-build-system/


Jennifer Boriss: 8% Increase in reddit Account Registrations

Tuesday, January 6, 2015, 00:23
Happy New Year! reddit has a lot of big plans coming up for 2015. But, let’s start with some good news. Remember that redesign we did of the login and […]

http://www.donotlick.com/2015/01/05/8-increase-in-reddit-account-registrations/


Armen Zambrano: Run Android test jobs locally

Monday, January 5, 2015, 23:47
You can now run Android test jobs on your local machine with Mozharness.

As with any other developer-capable Mozharness script, all you have to do is:

  • Append --cfg developer_config.py
  • Append --installer-url and --test-url with appropriate URIs
An example for this is:
python scripts/android_emulator_unittest.py --cfg android/androidarm.py \
  --test-suite mochitest-gl-1 --blob-upload-branch try \
  --download-symbols ondemand --cfg developer_config.py \
  --installer-url http://ftp.mozilla.org/pub/mozilla.org/mobile/nightly/latest-mozilla-central-android-api-9/en-US/fennec-37.0a1.en-US.android-arm.apk \
  --test-url http://ftp.mozilla.org/pub/mozilla.org/mobile/nightly/latest-mozilla-central-android-api-9/en-US/fennec-37.0a1.en-US.android-arm.tests.zip


Here's the bug where the work happened.
Here's the documentation on how to run Mozharness as a developer.

Please file a bug under Mozharness if you find any issues.

Here are some other related blog posts:


Disclaimers

Bug 1117954 - I think a different SDK or emulator version is needed to run Android API 10 jobs.

I wish we ran all of our jobs in proper isolation!


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/liLSogEVuwc/run-android-test-jobs-locally.html


Roberto A. Vitillo: A/B test for Telemetry histograms

Monday, January 5, 2015, 19:28

A/B tests are a simple way to determine the effect caused by a change in a software product against a baseline, i.e. version A against version B. An A/B test is essentially an experiment that indiscriminately assigns a control or experiment condition to each user. It’s an extremely effective method to ascertain causality which is hard, at best, to infer with statistical methods alone. Telemetry comes with its own A/B test implementation, Telemetry Experiments.

Depending on the type of data collected and the question asked, different statistical techniques are used to verify if there is a difference between the experiment and control version:

  1. Does the rate of success of X differ between the two versions?
  2. Does the average value of Y differ between the two versions?
  3. Does the average time to event Z differ between the two versions?

Those are just the most commonly used methods.

The frequentist statistical hypothesis testing framework is based on a conceptually simple idea: assuming that we live in a world where a certain baseline hypothesis (null hypothesis) is valid, what’s the probability of obtaining the results we observed? If the probability is very low, i.e. under a certain threshold, we gain confidence that the effect we are seeing is genuine.

To give you a concrete example, say I have reason to believe that the average battery duration of my new phone is 5 hours but the manufacturer claims it’s 5.5 hours. If we assume the average battery has indeed a duration of 5.5 hours (null hypothesis), what’s the probability of measuring an average duration that is 30 minutes lower? If the probability is small enough, say under 5%, we “reject” the null hypothesis. Note that there are many things that can go wrong with this framework and one has to be careful in interpreting the results.

Telemetry histograms are a different beast though. Each user submits their own histogram for a certain metric, and the histograms are then aggregated across all users for version A and version B. How do you determine if there is a real difference, or if what you are looking at is just due to noise? A chi-squared test would seem the most natural choice, but on second thought its assumptions are not met, as entries in the aggregated histograms are not independent from each other. Luckily we don’t have to sit down and come up with a new, mathematically sound statistical test. Meet the permutation test.

Say you have a sample of metric M for users of version A and a sample of metric M for users of version B. You measure a difference of d between the means of the samples. Now you assume there is no difference between A and B, randomly shuffle entries between the two samples, and compute again the difference of the means. You do this again, and again, and again… What you end up with is a distribution D of the differences of the means for all the reshuffled samples. Now you compute the probability of getting the original difference d, or a more extreme value, by chance, and welcome our newborn hypothesis test!
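To make that recipe concrete, here is a toy permutation test on the difference of means of two plain numeric samples (my own sketch, not from the original post; the battery-style numbers are made up):

import numpy as np

def mean_permutation_test(a, b, num=10000):
    # p-value for the observed absolute difference of means
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    k = 0
    for _ in range(num):
        np.random.shuffle(pooled)  # reshuffle entries between the two groups
        d = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        k += observed <= d         # count equally or more extreme differences
    return float(k) / num

# e.g. battery durations (hours) measured for two groups of phones
a = np.random.normal(5.0, 0.5, 100)
b = np.random.normal(5.5, 0.5, 100)
print(mean_permutation_test(a, b))  # tiny p-value: the means really differ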

Going back to our original problem of comparing aggregated histograms for the experiment and control group, instead of having means we have aggregated histograms and instead of computing the difference we are considering the distance; everything else remains the same as in the previous example:

import numpy as np
import pandas as pd

# histogram_distance is assumed to be defined elsewhere: it takes two
# aggregated histograms and returns a scalar distance between them.
def mc_permutation_test(xs, ys, num):
    # xs, ys: pandas Series of per-user histograms for groups A and B
    n, k = len(xs), 0
    h1 = xs.sum()  # aggregated histogram for group A
    h2 = ys.sum()  # aggregated histogram for group B

    # Distance between the two observed aggregated histograms.
    diff = histogram_distance(h1, h2)
    zs = pd.concat([xs, ys])
    zs.index = np.arange(0, len(zs))

    for j in range(num):
        # Reshuffle users between the two groups and re-aggregate.
        zs = zs.reindex(np.random.permutation(zs.index))
        h1 = zs[:n].sum()
        h2 = zs[n:].sum()
        # Count permutations at least as extreme as the observed one.
        k += diff < histogram_distance(h1, h2)

    # Fraction of more extreme permutations, i.e. the p-value; float()
    # guards against integer division under Python 2.
    return float(k) / num
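To make the snippet self-contained, here is one possible histogram_distance plus some synthetic data to exercise it (both are my assumptions; the post does not define the distance function):

import numpy as np
import pandas as pd

def histogram_distance(h1, h2):
    # One reasonable choice: total-variation distance between
    # the two normalized aggregated histograms.
    p = h1 / float(h1.sum())
    q = h2 / float(h2.sum())
    return 0.5 * np.abs(p - q).sum()

# 1000 users per branch, each submitting a 10-bucket histogram;
# the experiment branch has a slightly shifted bucket distribution.
xs = pd.Series([np.random.multinomial(100, [0.1] * 10) for _ in range(1000)])
ys = pd.Series([np.random.multinomial(100, [0.05] + [0.95 / 9] * 9) for _ in range(1000)])

print(mc_permutation_test(xs, ys, 1000))  # near 0: the branches differ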

Most statistical tests were created at a time when there were no [fast] computers around, but nowadays churning through a Monte-Carlo permutation test is not a big deal, and one can easily run one in a reasonable amount of time.


http://robertovitillo.com/2015/01/05/ab-test-for-telemetry-histograms/


Jess Klein: Shall we dance? On-boarding Webmakers

Monday, January 5, 2015, 19:02
The first time that someone comes to your website is like a high school dance at the gym. You want that hottie who you have been thinking about all year to be attracted to you and join you on the dance floor. You want to show them what you are all about: how you aren't just about the MC Hammer pants and bikini top you are wearing (dating myself much?) - and that you have the moves to prove it. This dance is just the beginning - you really want to go steady, but you have to start somewhere, right?

About 40-60% of users who sign up for a free trial of your software will use it once and then never come back.

When designing the on-boarding experience, we have a few goals: 
  • We should make a positive user experience where the visitor learns something within minutes of interacting
  • We should have the user take some action which results in signing up for a Webmaker account
  • We should give the user a clear and compelling reason to return.

Deeply inspired by the theory of Hanging Out, Messing Around and Geeking Out: Kids Living and Learning with New Media, I started to think about what a low-bar way might be to get people to dance with me. The idea is that there is a progression and/or just different ways that a site visitor might interact with the site. I wanted to create an experience that allows users to walk away having seen a little bit of code, had the "a-ha!" moment - the realization that there is so much to learn about the way the web is crafted - and, most importantly, the sense that remixing the web is an approachable challenge. According to the chart below, we could argue that most of our site visitors are at the beginning of the customer awareness journey.



Start from the beginning --- err where is that exactly?

I started by doing an exploratory sketch, asking where users might first see/interact with the Goggles on Webmaker. I see 5 main areas of contact:
  1. Webmaker Landing Page with a very specific call to action
  2. Via the not-yet-existing "Join Webmaker" button user flow
  3. On the Tools page within Webmaker
  4. On the Goggles page within Webmaker
  5. Within the Goggles interface upon activating the bookmarklet
For this heartbeat (and the build sprint after) we decided to focus on number 1 via 2 (Join Webmaker user flow via the landing page) as the goal for the first quarter is to improve our conversion of visitors to Webmaker.org into makers.

Think through the user flow

With a clear scope, I took a stab at thinking through potential user flows (ahem, dance moves). What interactions might I be able to design that could help the user gain an understanding of the awesome potential of Webmaker and come away having learned a little bit about making things on the web within the first few minutes of their site visit? On a traditional site, this is where I would do a product tour - to tell the visitor about all the bells and whistles. But, let's remember, we are at a high school dance. We don't want to just tell that hottie about how great we are, we want them to hold our hand and dance with us. So what exactly is our dance? It's an introduction to the site through an interactive tinkering activity.

I had some experience tackling this user experience challenge a few months back when I designed the Maker Party snippet for the Firefox about page. Here, we were trying to coax visitors to the About Page to sign up for Webmaker AND ... (the cooler part) expose them to a little bit of code through modeling a playful interaction that they in turn would emulate. We found this approach to be successful. I personally user-tested the page with a variety of site visitors in the Hive Learning Network and found that the animated modeling of the CSS value being typed acted as I would as a teacher in a classroom, or as a friend showing someone how to approach the problem and then asking them to try it out themselves. This approach could easily translate to an activity on the landing page where we show a visitor how to edit some playfully placed text using the X-Ray Goggles.

Approach 1: Modeling
Modeling tries to emulate the way you might teach this in a classroom environment - you show the actions that you want the learner to emulate.  See complete mockup here.



I also tackled this challenge of getting a user to dabble with new information and content in the weather activity experiment for the Hour of Code. Here, I thought about how I like to follow recipes and get feedback as I do each step in a staged progression. (This would be like... someone teaching you how to do the macarena step by step at the dance.)

Approach 2: Stage Progression
The staged progression allows the user to read, and then asks them to try it out, providing little tips along the way. See complete mockup here.




After getting some feedback from my colleagues and a few user testers I am leaning towards a hybrid approach - where you might model for them at each "step."

Next up: enticing your friend to get on the dance floor

All of the user flows and interaction designs are a good exercise, but if the icebreaker prompt isn't enticing, then it's no good.  So - I did a few iterations:


Name tag fill in the blank --- this could somehow tie in to the sign up flow.



Venn Diagrams - probably too designerdy but I couldn't help myself.


Fill in the blank - I <3 webmaking.




Fill in the blank - attempt 2. I like this one the most at the moment because it has a focal point, and it feels a bit disruptive, like Webmaker itself.

Next up: Finding those dancing shoes.
To get to an interactive prototype, we need to:
  • Design the hybrid interaction design (modeling + staged progression)
  • Choose a direction and then work on the UI elements
  • Wordsmith the copy.
  • User test with real humans!

Designing an on-boarding experience is like asking someone onto the dance floor - testing if your pits stink and all - so I would love to hear any thoughts on whether I've got any moves.

http://jessicaklein.blogspot.com/2015/01/shall-we-dance-on-boarding-webmakers.html


Will Kahn-Greene: ElasticUtils: I'm stepping down and deprecating the project

Monday, January 5, 2015, 18:40

What is^Wwas it?

ElasticUtils is^Wwas a Python library for building and executing Elasticsearch searches.

See the Quickstart for more details.

I'm taking my ball and going home

For the last few years, I've maintained ElasticUtils. It's a project I picked up after Jeff, Dave and Erik left.

Since I picked it up, we've had 13 or so releases. We switched the underlying library from pyes to pyelasticsearch and then to elasticsearch-py. We added support for a variety of Elasticsearch versions up to about 1.0. We overhauled the documentation. We overhauled the test suite. We generalized it from just a Django library into a library with a Django extension. We added MappingTypes. We worked really hard to make it backwards compatible from version to version. It's been a lot of work, but mostly in spurts.

Since the beginning of the project in May 2011 (a year before I picked up maintenance), we've had 33 people contribute patches. We've had many more point out issues, ask questions on the #elasticutils IRC channel, send me email about this and that.

That's a good run for an open source project. There are a bunch of things I wish I had done differently when I was at the helm, but it's a good run nonetheless.

The current state of things, however, is not great. ElasticUtils has a ton of issues. Lots of technical debt accrued over the years, several architectural decisions that turned out to suck and have obnoxious consequences, lack of support for new features in Elasticsearch > 1.0, etc. It'll take a lot of work to clean that up. Plus it's got a CamelCase name and that's so passé.

At PyCon 2014, Rob, Jannis and I worked with Honza on the API for the library that is now elasticsearch-dsl-py. This library has several of the things I like about ElasticUtils. It's a project that's being supported by the Elasticsearch folks. It's got momentum. It supports many of the Elasticsearch > 1.0 features. There are several libraries that sit on top of elasticsearch-dsl-py as Django shims now.

ElasticUtils is at a point where it's got a lot of problems and there are good alternatives that are better supported.

Thus, this is an excellent time for me to step down as maintainer. Going forward from today, I won't be doing any more development or maintenance.

Further, I'm deprecating the project because no one should be using an unmaintained broken project when there are better supported alternatives. That way lies madness!

So, if you're using ElasticUtils, what do you do now?

ElasticUtils 0.10.2 has the things you need to bridge from Elasticsearch 0.90 to 1.x. You could upgrade to that version and then switch to elasticsearch-dsl-py or one of the Django shims.

On that note, so long ElasticUtils and thank you to everyone who's helped on and used ElasticUtils over the years!

http://bluesock.org/~willkg/blog/dev/elasticutils/elasticutils.stepping_down


Ben Kero: Finding the perfect ancillary travel device

Monday, January 5, 2015, 15:23
Hackerbeach attendees at the upper dining table

As would be familiar to anybody who knows me, I’m always interested in new tech, especially when it’s running free software and portable enough to be in my every-day carry arsenal.

For the past month or so I’ve been looking at a few devices as a secondary to my laptop to carry with me. In a few weeks I’ll be joining those already there at the third installment of Hackerbeach, on the Caribbean island of Dominica.

I wanted an embedded Linux system that could do everything. The scope of this device just kept getting bigger the more I thought about it.

It should:

  • Fit in my hand
  • Be battery powered, and able to charge USB devices
  • Act as a wireless hotspot
  • Run a local NPM mirror and a transparent HTTP cache
  • Serve music/videos from a 2.5'' 2TB hard drive
  • Provide multiple SSIDs, with one acting as a stable VPN bridge to route traffic through a colocated server.

I originally was looking at 3d printing an enclosure around a board like a Raspberry Pi, then duct taping a USB battery and 2.5” hard drive to it.

I didn’t want to bring this through TSA security.

Over time the scope of this increased. One of the fond memories of last Hackerbeach was when we all huddled around a darkened wall in our fort to enjoy a movie. We had borrowed a small projector and speaker from one of the beach-front restaurants, which weren’t using them for showings during the low season. We cobbled together a crude mount out of a table and various smartphones to angle the projector. The power was provided by a power-strip doing double duty as an extension cable *JUST* long enough that it could only work if it was suspended from a dining chair.

A power-to-Ethernet/Wifi adapter at Hackerbeach last year

This year I wanted to make sure we have a good setup in case we want to do this again. A few years ago I had bought a ShowWX laser-based pico projector, but upon examining it again I found that the battery was dead and it only output 10 lumen of light. That would make viewing in anything but pure darkness less than pleasant.

Microvision ShowWX laser picoprojector. 10 whopping lumen. (http://upload.wikimedia.org/wikipedia/commons/f/f0/Pico_Proyector_Showwx_de_Microvision.jpg)

While looking for separate tiny projectors (AAXA seems to have the market cornered), I also considered the hardware that could power them.

After deciding to look at what else was out there besides my Raspberry Pi concoction, I found a Cubietruck combined with an Ewell case. This combination included space for a 2.5” SATA drive, a 5300mAh Lithium polymer battery and integrated charging circuitry, built-in Wifi/Bluetooth, and HDMI output to connect to a projector.

Cubietruck Ewell case. This package has the board + 2.5'' hard drive + Lithium polymer battery. (http://eleduino.com/admin/uploads////1387469764the%20Ewell%20case%20for%20CT-1.jpg)

Unfortunately, from the sites I saw, the case looks too big for my 19L backpack. Although I haven’t physically held a unit yet, there seems to be quite a bit of empty space inside the case. I’d like something exactly like this but in a smaller form factor. For my spartan carrying patterns this simply won’t work.

I hunted through more single-board ARM computers to find alternatives. Unfortunately none really come close. They’re all lacking some important feature.

After a while I decided to investigate having a computer with a projector built in. If I could find something that could integrate everything above AND a projector, I could save even more space. This would allow me to bring movies, and combined with a few compact Bluetooth controllers (NES30s) would allow us to play video games together.

NES30 Bluetooth Gamepad (http://gearhungry.com/wp-content/uploads/2014/07/8Bitdo-NES30-Controller.jpg)

This led me to a few solutions, each with compromises. The first I discovered was a Gigabyte PC/Projector. This little beauty is basically an Intel NUC with an 848x480 projector built into it. Unfortunately it doesn’t support a 2.5” hard drive, limiting it to an MSATA SSD. Likewise it doesn’t come with a battery, and it draws 40W at peak consumption. Holy power consumption, batman!

Gigabyte PC Projector. Might consume more energy-per-space than a gasoline engine. (http://hothardware.com/articleimages/Item2185/small_gigabyte-brix-1.jpg)

After searching around a little more I came across a Liliputing article about the ZTE LivePro Android-based projector. This device seems to be a compromise, but is still the closest thing to what I want. It has an integrated 75-lumen LED projector with some 2013-era smartphone internals running Android 4.2. It features a 5Ah battery that can be used to charge USB devices, supports a (supposedly superlaggy) Wireless Display, and HDMI input with an assuredly god-awful 1W speaker. Likewise, it charges with a 12V adapter, which seems like a giant pain in the ass, but something entirely solvable.

ZTE S-Pro. A microprojector with Android smartphone internals. (http://www.androidcentral.com/sites/androidcentral.com/files/styles/large/public/article_images/2015/01/zte-spro.jpg?itok=rd_-9FDS)

Not terribly impressive, but I have a feeling that I can make it work for most of the requirements listed above, and if I can’t I can always put some leg work in to root it and run a GNU/Linux userland on it.

This strange device was demoed at the 2013 CES and eventually picked up by Sprint to be sold at a contract-tied discount. Unfortunately none are presently up on eBay, presumably because nobody in their right mind bought one.

The good news for crazy people like me is that a cellular-lacking version called the ZTE S-Pro is under preorder and due to be released January 6th. I’ve preordered one on Amazon Prime, and hope to return home to a delivered version when I’m done with linux.conf.au responsibilities.

This solution should be fairly compact. With this solution I should only need to bring:

  • ZTE S-Pro Android Projector. Rooted with XBMC + Emulators
  • 2TB External 2.5'' drive
  • 2 or 4 NES30 Bluetooth Gamepads for emulated games
  • Jambox for decent Bluetooth audio

On a scale of walnuts to meatballs, how crazy am I for wanting this, and for figuring out how to do it?

http://bke.ro/?p=409


Kartikaya Gupta: Firewalling for fun and safety

Monday, January 5, 2015, 05:17

TL;DR: If you have a home wi-fi network, think about setting up multiple separate VLANs as a "defense in depth" technique to protect hosts from malware.

The long version: A few years ago when I last needed to get a router, I got one which came with DD-WRT out of the box (made by Buffalo). I got it because DD-WRT (and Tomato) were all the rage back then and I wanted to try it out. While I was setting it up I noticed I could set up multiple Wi-Fi SSIDs on my home network, each with different authentication parameters. So I decided to create two - one for my own use (WPA2 encrypted) and one for guests (with a hidden SSID and no encryption). That way when somebody came over and wanted to use my Wi-Fi I could just give them the (hidden) SSID name and they would be able to connect without a password.

This turned out to be a pretty good idea and served me well. Since then though I've acquired many more devices that also need Wi-Fi access, and in the interest of security I've made my setup a little more complex. Consider the webcam I bought a few months ago. It shipped from somewhere in China and comes with software that I totally don't trust. Not only is it not open-source, it's not upgradeable and regularly tries to talk to some Amazon EC2 server. It would be pretty bad if malware managed to infect the webcam and not only used it to spy on me, but also used it as a staging area to attack other devices on my network.

(Aside: most people with home Wi-Fi networks implicitly treat the router as a firewall, in that random devices outside the network can't directly connect to devices inside the network. For the most part this is true, but of course it's not hard for a persistent attacker to do periodic port scans to see if there are any hosts inside your network listening for connections via UPnP or whatever, and use that as an entrance vector if the service has vulnerabilities.)

Anyway, back to the webcam. I ended up only allowing it to connect to an isolated Wi-Fi network and used firewall rules on the router to prevent all access to or from it, except for a single server which could access a single port on it. That server basically extracted the webcam feed and exposed it to the rest of my network. Doing this isn't a perfect solution, but it adds a layer of security that makes it harder for malware to penetrate.
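On DD-WRT, that kind of isolation comes down to a few iptables rules on the bridge interfaces. A minimal sketch, where the interface name and addresses are illustrative assumptions (the isolated VLAN is br1, the webcam is 192.168.2.10, and the relay server is 192.168.1.5):

# Let only the relay server reach the webcam's single service port
iptables -I FORWARD -s 192.168.1.5 -d 192.168.2.10 -p tcp --dport 80 -j ACCEPT
# Allow reply packets on connections the relay server opened
iptables -I FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# Drop all other traffic into or out of the isolated VLAN
iptables -A FORWARD -i br1 -j DROP
iptables -A FORWARD -o br1 -j DROP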

There's a ton of other Wi-Fi devices on my network - a printer, various smartphones, a couple of Sonos devices, and so on. As the "Internet of Things" grows, this list is bound to grow as well. If you care about ensuring the security of machines on your network, and not letting them become part of some random hacker's botnet, knowing how to turn your router into a full-fledged firewall is a very useful skill indeed. Even if you choose not to lock things down to the extent that I do, simply monitoring connections between devices inside your network and hosts outside your network can be a huge help.
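
Even a single logging rule buys you a lot of that visibility. A hedged example using standard iptables syntax (adjust the chain and prefix to taste):

    # Log each new connection crossing the router, so you can review
    # which internal devices talk to which outside hosts.
    iptables -I FORWARD -m state --state NEW -j LOG --log-prefix "NEW-CONN: "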

https://staktrace.com/spout/entry.php?id=832


Toni Hermoso Pulido: Mozilla Janus proxy in a Raspberry Pi

Sunday, January 4, 2015, 12:33

These winter holidays I've been taking care of my home network, and I finally found some time to set up a Raspberry Pi for my everyday Internet needs.

In the process of checking different proxy services I stumbled upon Janus, a SPDY-based Node.js proxy. According to this blog post, the service targets mobile contexts, providing a service similar to Opera Turbo or the more recent Google Data Compression Proxy. Unlike those options, though, the Janus server is fully open-source. Moreover, as an interesting optional feature, it can also block ads based on this ad list provider.

You can already try the service yourself by using the latest Firefox version with this add-on (for Firefox and Firefox mobile) and the default proxy server.

However, since I wanted to manage my own proxy service (for privacy and performance reasons), I decided to install the proxy server on a Raspberry Pi of my own.

Since Janus is a Node.js application, you first need to set up a working environment, for instance with a tool like nvm. For required libraries such as mozjpeg, some extra packages may be needed; they are easily installable with apt-get (for example libtool or libpng-dev). If module building fails, you most likely need to install another development package.
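
As a rough sketch of those steps (the repository URL, Node version and start command are assumptions; check the Janus documentation for the exact ones):

    # Build dependencies for native modules such as mozjpeg
    sudo apt-get install build-essential libtool libpng-dev
    # Assuming nvm is already installed, pick a Node.js version
    nvm install 0.10
    nvm use 0.10
    # Fetch and build the proxy (repository URL is an assumption)
    git clone https://github.com/mozilla/node-janus.git
    cd node-janus
    npm install
    npm start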

Proxy configuration can be set up via the config/default.yml file. For instance, the proxy port can be changed there, along with several other options such as logging or enabling ad blocking by default. A recommendable feature is enabling a Redis database for caching purposes (take into account that Redis's default port is 6379, so you may want to change the value provided there).
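
If you enable the Redis cache, it's worth checking first that Redis is actually running and listening; for example:

    # Redis listens on port 6379 by default; this should print PONG.
    redis-cli -p 6379 ping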

Once set up, a PAC URL can be found at https://yourRaspberryIP:55055 (unless you changed the default port). By the way, notice the 'https' protocol when you type the address.
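
A quick way to confirm the proxy is serving that PAC file (a sketch; the -k flag tells curl to accept the self-signed certificate the proxy is likely using):

    curl -k https://yourRaspberryIP:55055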

And that's all. Since no authentication mechanism is included so far, you have to be careful if you plan to open the proxy port from your home network to the Internet (I didn't try whether HTTP authentication behind a reverse proxy might work).

As a side note, from my short experience: even though the add-on lets you tune whether you prefer saving bandwidth or lowering latency, I must admit that the Raspberry Pi might still not be the most suitable machine for this job, and a more powerful single-board computer (say, a Cubietruck) might do better.

https://www.cau.cat/blog/mozilla_janus_proxy_in_a_raspberry_pi


John O'Duinn: RelEngCon 2015: Call for papers

Sunday, January 4, 2015, 11:04

Preparations for RelEngConf 2015 are officially in full swing. This means two things:

1) RelEngConf 2015 is now accepting proposals for talks/sessions. If you have a good industry-related or academic-focused topic in the area of Release Engineering, please have a look at the Release Engineering conference guidelines and submit your proposal before the deadline of 23-jan-2015.

2) Both RelEngConf 2014 and RelEngConf 2013 were great. The mixture of attendees and speakers, from academia and battle-hardened industry, made for some riveting topics and side discussions. It's too early to tell exactly who will be speaking in 2015, but it's not too early to start planning your travel to Florence, Italy!! Also of note: RelEngConf 2015 will take place just a few weeks after the publication of the first IEEE special issue on Release Engineering. Looks like RelEngConf 2015 is going to be special too.

For further details about the conference, or submitting proposals, see http://releng.polymtl.ca/RELENG2015/html/index.html. If you build software delivery pipelines for your company, or if you work in a software company that has software delivery needs, I recommend you follow @relengcon, block off May 19th, 2015 on your calendar and book now. It will be well worth your time.

See you there!
John.

http://oduinn.com/blog/2015/01/04/relengcon-2015-call-for-papers/


Pascal Chevrel: My Q3/Q4-2014 report

Sunday, January 4, 2015, 05:05

I didn't write the Q3 report, so this report covers both Q3 and Q4.

Regular l10n-drivers work (patch reviews, pushes to production, meetings, and maintenance of past projects) is excluded.

Transvision

New release

We released version 3.5 of Transvision and I worked on these features and bug fixes:

  • Improved the setup process for devs: just launch start.sh after cloning the repository and it will set everything up and launch a Transvision instance locally (see the sketch after this list). Setting up Transvision locally for development now takes about 20 minutes with a fast DSL connection.
  • Fixed a bunch of regressions in our external API that our Galician localizer Enrique Estévez, aka Keko, reported.
  • Worked with Keko on adding a new translation memory export format, which allows importing an existing Mozilla localization effort supported by Transvision into OmegaT (an offline localization tool written in Java).
  • Worked with my intern Théo on his locale status view; our numbers are mostly consistent with the official l10n dashboard but use a different aggregation approach that will allow exposing strings by product and category (for example, exposing the status of devtools localization in Firefox, or grouping Firefox strings with Firefox content hosted on mozilla.org). There is still a lot to do on the perf side for this page (http://transvision.mozfr.org/showrepos/) since it relies heavily on regexes over tens of thousands of strings (page load is about 15s without caching, 4s with caching). Théo will do another internship with us next summer, so stay tuned.
  • Improved string extraction so that special characters and HTML are no longer systematically escaped. This allows searching for strings with HTML markup or reserved HTML characters (for example &BrandShortName), or searching for \n codes in strings.
  • All commits to master are now continuously integrated on the beta site via GitHub webhooks (the beta site previously had to be updated manually)
  • Many smaller improvements to existing views
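
A minimal sketch of that dev setup, as promised above (the repository URL is an assumption; check the Transvision documentation for the canonical one):

    git clone https://github.com/mozfr/transvision.git
    cd transvision
    ./start.sh   # sets everything up and launches a local instance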

Work in progress related to Transvision

  • Created a standalone library, to be used initially in Transvision, that extracts all changesets from hg/git/svn repositories in a common format, so as to be able to mine contribution data in a uniform way
  • Working on an internal directory built dynamically from our logs; coupled with the VCS library, it will allow us to provide stats on our core localizers to the CBT team via an external API. It also works as a CLI tool for my own use.
  • Working on a Product class for Transvision that allows querying strings per product rather than per repository (important for desktop because there are a lot of strings shared between Firefox and Thunderbird).

Overall, I worked a lot on Transvision during Q3 but very little during Q4. There was a good balance of improvements targeting our different users (localizers, Transvision devs, API consumers), and we made some solid improvements to both the code and our workflow to ship faster in 2015. We also had 8 people contributing to the codebase over the semester and several people popping into our IRC channel asking about the project, which is good. I have several big patches with good potential sitting in branches, but unfortunately I didn't have time to finish them in Q4, and it seems I won't be able to work on them in Q1 either; maybe in Q2.

Langchecker and other tracking tools we use for management

I found a way to generate data about web parts localization over time from Langchecker (Langchecker was not meant to do that, so I had to be creative), so now we generate graphs that show the history of web parts completion per locale: for example, the Bulgarian graph shows what happens when a new high-performing localizer joins Mozilla.

I updated the overview status showing the number of untranslated strings per locale, with a summary ranking the status of locales and also showing which locales are on Locamotion. It also now counts strings for all projects on the web dashboard, not just mozilla.org.

I added the number of strings, words and files to the summary view, since this is something other teams often asked us for.

Google Play copy localization back into a normal l10n process

Historically, the process of publishing updates to the Firefox page on Google Play has been owned by many different people, most if not all of whom no longer work for Mozilla, so there was no real formal process to update the copy and get it translated. Sylvestre Ledru (a Debian core dev who recently joined the Mozilla Release Management team) decided to establish a process with as much automation as possible via the Google Play API to update our English copy when we ship a new release, and I decided to help on this front to get that content localized and published simultaneously with the English.

So now the process of translating the copy is back with the l10n-drivers team and our own tools (meaning tracking, dashboards, and integration with the translation platforms we use…).

I created a small app to track and QA translations without pushing to Google Play.

And I also created an associated JSON API that release drivers can use.

I am working with Sylvestre on getting the release drivers' tools to automatically update our localized listings along with the en-US copy. We hope to get that working and functional in Q1, and from then on to always have up-to-date copy for all of our supported locales for Firefox on Google Play.

Thunderbird web localization migrating to Bedrock (mozilla.org)

This is localization management work that I am doing as a volunteer, as Mozilla does not put paid resources on Thunderbird. My colleague Flod is also helping, as well as Théo (my intern and a French localizer) and Kohei Yoshino (ex-Mozilla Japan webdev and former mozilla.org localizer, now living in Canada and still contributing to www.mozilla.org at the code level).

Thunderbird web content is still hosted on the old mozilla.org framework; this content is not actively maintained by anybody, and that framework will eventually disappear. The l10n process is also broken there. For Thunderbird's 10th anniversary, I volunteered to help manage the key content for Thunderbird on our current platform (Bedrock) so that the Thunderbird in-product pages can get updated and translated. If you use Thunderbird, you may have seen the new version of the start page (some people noticed).

A few notes though:
  • If you are a Mozilla localizer but are not interested in translating Thunderbird content, please don't translate it even if you see it on your locale dashboard; help us find a person to do these translations for Thunderbird in your language instead!
  • I am not volunteering to manage Thunderbird product localization; I just don't have enough time for that. If you want to help on that front, please contact the new Thunderbird team leaders, they will be happy to get help!
  • 23 locales have translated the page, thanks to you guys! If your locale is not done and you want to get involved in Mozilla web localization, please contact me! (pascal AT mozilla DOT com)

Community building

I try to spend as much time as I can actively finding new localizers and growing our localization community with core contributors who help us ship software and projects. It's great to build tools, and we are always happy when a new process or tool improves a localizer's productivity by 20%, but when we find a new core localizer for a language, we get a lot more than a 20% productivity boost ;). After all, we build tools for people to use.

Here is a list of countries where I found new core localizers:

Portugal:
  • We now have a webmaster for mozilla.pt (hosted on mozfr.org): Joao, a Portuguese volunteer involved in the Rust language
  • Luís Mourão joined the team as a technical localizer
  • Claudio Esperanca is now an SVN committer
Bulgaria:
  • New Bulgarian web localizer: Miroslav Yovchev. Miroslav did an awesome job with one of his friends getting mozilla.org, which was very outdated in Bulgarian, into decent shape. He also corrected many existing pages for consistency of style.
  • In Portland I met my colleague Pavel Ivanov, who is Bulgarian and works on Gaia, so that we could work out how to help grow the Bulgarian community next year, not just in l10n.
Latvia

New Latvian localizer for web parts: Janis Marks Gailis. Latvian does not have many speakers, and historically we had only one localizer (Raivis Deijus), focused on products, so now we can say that we have a team spanning both products and websites. A third person would of course be welcome :)

Croatia

New Croatian localizer: Antun Koncic. Antun used to be the Croatian dictionary packager for Thunderbird and had not been involved in Mozilla for a while; welcome back!

Poland:
  • We have a new Polish localizer, Piotr Nalepa, who is helping us with web content
  • Joanna Mazgaj, a long-term Polish contributor from aviary.pl, is also now a committer for mozilla.org

We could use the help of a couple more people for Polish, especially to translate products and technical content. If you are interested, please leave a comment :)

Events I attended

I went to the Encontro ibérico de tradutores de software libre a linguas minorizadas in Santiago de Compostela, Spain. There I met several of our Mozilla localizers for Catalan, Galician, Basque, Aragonese and Asturian. We talked about processes, tools and community building. It was very interesting and great fun.

Like a thousand other Mozilla employees and many volunteers, I spent a work week in Portland, USA, where I met several localizers we had invited. It was the first time I had met some of these localizers IRL, in particular people from India and Bangladesh. That was nice.

Privacy Coach Add-on

In parallel with the 10 years of Mozilla campaign, we created and launched a Firefox for Android extension called Privacy Coach that teaches users how to set up privacy settings in their browser.

I worked with Margaret Leibovic on that project, and I reused the code and process I had used for the add-ons we created during the Australis launch earlier this year to translate the add-on while it was being developed. The idea is that the process shouldn't be disruptive for either localizers or developers.

We worked with .lang files that were automatically converted to the right set of DTD/properties files for the extension, along with the packaging of the locale into the extension (updates to install.rdf, chrome.manifest and the locale folder), all integrated into the extension via pull requests on GitHub during its development. We got 12 locales done, which is more than the 8 locales initially planned. Two locales we wanted (Hungarian and Indonesian) we didn't get in the end, so they are among the locales I will work on in Q1 to find more people.

Various Web projects

  • Firefox Tiles: prepared the project as a .lang one with Mathjazz, prepopulated translations, and launched it with Mathjazz for all locales
  • Worked with flod on the new home page and new contribute pages for mozilla.org
  • 10 years of Mozilla: Flod was the lead on this project and blogged about it in his own Q4 report; basically, my main role was to help find localizers for the project. All the merit for the project from the l10n POV goes to Flod :)
  • Removed 9 locales no longer supported in our products from mozilla.org (ak, csb, lg, mn, nso, sah, sw, ta-LK, wo)

Other

  • I created a GitHub organization for our team and moved several of my projects there. I felt that we needed one common place for all the tools we use for shipping, where we could create projects whenever we need them instead of using our personal accounts.
  • I transferred the womoz.org domain (which I had been paying for since 2009) to Flore Allemandou, who now leads the Women in Mozilla community, and Flore is working on getting the domain owned and paid for by Mozilla
  • I helped the French community (with Théo and Natalia from the Marketing team) set up a promotional Firefox OS site focused on end users: I created the GitHub project, did a temporary version of the site until the French community built a design and content to replace it, and took care of setting up some continuous integration to allow people to publish code and text updates without me merging and pushing to production.

http://www.chevrel.org/carnet/?post/2015/01/03/My-Q3/Q4-2014-report


