Asa Dotzler: Foxtrot Update Jan 2015 |
Hey folks. Welcome to 2015!
About half of the people participating in the Foxtrot program have still not received Flame phones. I’m sorry about that. I’m hard at work on this but it depends on getting the right builds on the phones. We aren’t going to send our Foxtrot testers new phones with builds that brick the device randomly and we don’t yet have the process set up to deliver only functional updates.
We have delivered a couple thousand Flames to our very brave “unstable nightly” testers and developers, and thousands of others have purchased Flames to develop against, but we’re still behind on getting our “stable nightly” Foxtrot builds and update channel ready. As soon as those are ready, we’ll flash the remaining 100 or so Foxtrot phones and send them out to contributors.
I don’t have an ETA, but it’s a high priority.
(If you have not received an email update on Foxtrot, check your spam filter. All Foxtrot participants were automatically signed up for a mailing list for just this kind of update and information. Unfortunately, it appears that some people have removed themselves from the mailing list and others are not seeing the mails because they are ending up in a spam filter. If you removed yourself from the list, I dropped your entry from the Foxtrot program with the assumption that you were no longer interested. If you remember removing yourself from the mailing list but didn’t want to be removed from the program, you can email me and I’ll get you added back. The list is not optional though. It’s how we’ll contact you for program feedback.)
Also, it is very likely that we’ll be expanding the program to several hundred more people over the coming weeks. Stay tuned here for updates on how you can join.
|
Code Simplicity: How to Handle Code Complexity in a Software Company |
Here’s an obvious statement that has some subtle consequences:
Only an individual programmer can resolve code complexity.
That is, resolving code complexity requires the attention of an individual person on that code. They can certainly use appropriate tools to make the task easier, but ultimately it’s the application of human intelligence, attention, and work that simplifies code.
So what? Why does this matter? Well, to be clearer:
Resolving code complexity usually requires detailed work at the level of the individual contributor.
If a manager just says “simplify the code!” and leaves it at that, usually nothing happens, because (a) they’re not being specific enough, (b) they don’t necessarily have the knowledge required about each individual piece of code in order to be that specific, and (c) part of understanding the problem is actually going through the process of solving it, and the manager isn’t the person writing the solution.
The higher a manager’s level in the company, the more true this is. When a CTO, Vice President, or Engineering Director gives an instruction like “improve code quality” but doesn’t get much more specific than that, what tends to happen is that a lot of motion occurs in the company but the codebase doesn’t significantly improve.
It’s very tempting, if you’re a software engineering manager, to propose broad, sweeping solutions to problems that affect large areas. The problem with that approach to code complexity is that the problem is usually composed of many different small projects that require detailed work from individual programmers. So, if you try to handle everything with the same broad solution, that solution won’t fit most of the situations that need to be handled. Your attempt at a broad solution will actually backfire, with software engineers feeling like they did a lot of work but didn’t actually produce a maintainable, simple codebase. (This is a common pattern in software management, and it contributes to the mistaken belief that code complexity is inevitable and nothing can be done about it.)
So what can you do as a manager, if you have a complex codebase and want to resolve that complexity? Well, the trick is to get the data from the individual contributors and then work with them to help them resolve the issues. The sequence goes roughly like this:
First, ask each individual software engineer to write their own list of the complexity problems that bother them. I wouldn’t recommend implementing some system for collecting the lists—just have people write down the issues for themselves in whatever way is easiest for them. Give them a few days to write this list; they might think of other things over time.
The list doesn’t have to be only about your own codebase; it can be about any code that the developer has to work with or use.
You’re looking for symptoms at this point, not causes. Developers can be as general or as specific as they want, for this list.
Next, hold a meeting to go over the lists. In this meeting you want to get the name of a specific directory, file, class, method, or block of code to associate with each symptom. If somebody says something as general as, “The whole codebase has no unit tests,” you might say, “Tell me about a specific time that that affected you,” and use the response to narrow down which files it’s most important to write unit tests for right away. You also want to be sure that you’re really getting a description of the problem, which might be something more like “It’s difficult to refactor the codebase because I don’t know if I’m breaking other people’s modules.” Then unit tests might be the solution, but you first want to narrow down specifically where the problem lies, as much as possible. (It’s true that almost all code should be unit tested, but if you don’t have any unit tests, you’ll need to start off with some doable task on the subject.)
In general, the idea here is that only code can actually be fixed, so you have to know what piece of code is the problem. It might be true that there’s a broad problem, but that problem can be broken down into specific problems with specific pieces of code that are affected, one by one.
Then, file a bug or ticket for each specific problem that came out of the meeting. If a solution was suggested during the meeting, you can note that in the bug, but the bug itself should primarily be about the problem.
Next, prioritize the bugs, normally by severity. That said, sometimes issues should be resolved in an order that is not directly related to their severity. For example, Issue X has to be resolved before Issue Y can be resolved, or resolving Issue A would make resolving Issue B easier. This means that Issue A and Issue X should be fixed first even if they’re not as severe as the issues that they block. Often there’s a chain of issues like this, and the trick is to find the issue at the bottom of the stack. Handling this part of prioritization incorrectly is one of the most common major mistakes in software design. It may seem like a minor detail, but it is in fact critical to the success of efforts to resolve complexity. The essence of good software design in all situations is taking the right actions in the right sequence. Forcing developers to tackle issues out of sequence (without regard for which problems underlie which other problems) will cause code complexity.
This part of prioritization is a technical task that is usually best done by the technical lead of the team. Sometimes this is a manager, but other times it’s a senior software engineer.
Sometimes you don’t really know which issue to tackle first until you’re doing development on one piece of code and you discover that it would be easier to fix a different piece of code first. With that said, if you can determine the ordering up front, it’s good to do so. But if you find that you’d have to get into actually figuring out solutions in order to determine the ordering, just skip it for now.
Whether you do it up front or during development, it’s important that individual programmers do realize when there is an underlying task to tackle before the one they have been assigned. They must be empowered to switch from their current task to the one that actually blocks them. There is a limit to this (for example, rewriting the whole system into another language just to fix one file is not a good use of time) but generally, “finding the issue at the bottom of the stack” is one of the most important tasks a developer has when doing these sorts of cleanups.
One tricky piece here is that some of the bugs might be about code that isn’t maintained by your team. In that case you’ll have to work through the organization to get the appropriate team to take responsibility for the issue. It helps here to have buy-in from a manager higher up the chain whom you have in common with the other team.
In some organizations, if the other team’s problem is not too complex or detailed, it might also be possible for your team to just make the changes themselves. This is a judgment call that you can make based on what you think is best for overall productivity.
If your team makes plans for a period of time like a quarter or six weeks, you should include some of the code cleanups in every plan. The best way to do this is to have developers first do cleanups that would make their specific feature work easier, and then have them do that feature work. Usually this doesn’t even slow down their feature work overall. (That is, if this is done correctly, developers can usually accomplish the same amount of feature work in a quarter as they could if they weren’t also doing code cleanups, providing evidence that the code cleanups are already improving productivity.)
Don’t stop normal feature development entirely to just work on code quality. Instead, make sure that enough code quality work is being done continuously that the codebase’s quality is always improving overall rather than getting worse over time.
If you do those things, that should get you well on the road to an actually-improving codebase. There’s actually quite a bit to know about this process in general—perhaps enough for another entire book. However, the above plus some common sense and experience should be enough to make major improvements in the quality of your codebase, and perhaps even improve your life as a software engineer or manager, too.
-Max
P.S. If you do find yourself wanting more help on it, I’d be happy to come speak at your company. Just let me know.
|
Doug Belshaw: Radical participation: a smörgåsbord |
Today and tomorrow I’m at Durham University’s eLearning conference. I’m talking on Radical Participation – inspired, in part, by Mark Surman’s presentation at the Mozilla coincidental workweek last month.
My slides should appear below. If not, click here!
I was very impressed by Abbi Flint’s keynote going into the details of her co-authored Higher Education Academy report entitled Engagement Through Partnership: students as partners in learning and teaching in higher education. In fact, I had to alter what I was going to say, as she covered my critique! Marvellous.
After Abbi’s keynote I was involved in a panel session. I didn’t stick too closely to my notes, instead giving more of a preview of what I’m talking about in my keynote tomorrow. As ever, I’m genuinely looking forward to some hard questions!
http://dougbelshaw.com/blog/2015/01/06/radical-participation/
|
Armen Zambrano: Tooltool fetching can now use LDAP credentials from a file |
|
Jorge Villalobos: Interview with Extension.Zone |
I was recently approached by Extension.Zone for an interview. I was pleasantly surprised to see a new website dedicated to browser-agnostic reporting of add-ons. Then I was just plain surprised that .zone is now a thing.
Anyway, the interview is up here. There are some interesting questions about what makes the Firefox add-on ecosystem different than others, and what I think is an under-explored area in add-on development.
http://xulforge.com/blog/2015/01/interview-with-extension-zone/
|
Marco Zehe: Apple are losing their edge also in accessibility quality |
Over the past couple of days, a number of well-known members in the Apple community raised their voices in concern about Apple’s general decline in software quality. Marco Arment (former “Mr. Instapaper” and now “Mr. Overcast”) started out by saying that Apple has lost the functional high ground. John Gruber of Daring Fireball slightly disagrees, but says that Apple have created a perception that “Other people’s stuff doesn’t work, but Apple’s stuff doesn’t work, either”. And finally, Dr. Drang looks at the power of leverage in this context. And now, well, here is my take on the subject.
Some long-standing readers of this blog may recall this post I wrote in June of 2009 about my first experience using an iPhone. It was the first time I interacted with a touch screen device that was accessible to me as a blind user.
For several years to come, Apple would lead in terms of including accessibility features in both its mobile and desktop operating systems. Zoom had already been there when VoiceOver was introduced in iOS 3.0, and what followed were features for people with varying disabilities and special needs: Assistive Touch, which allows gestures to be performed differently; mono audio and integration with hearing aids; subtitling, audio description and other media accessibility enhancements; Guided Access for people with attention deficiencies; Siri; and most recently, Braille input directly on the touch screen in various languages and grades. Especially on iOS, VoiceOver and the other accessibility features received updates with every yearly major release, and new features were added.
In the beginning, especially in Snow Leopard and Lion, Apple did the same for OS X, gradually bringing over many of the features it had added to iOS to keep the two operating systems in sync. But ever since Mountain Lion, VoiceOver has not seen much improvement. In fact, the lack of newly introduced features could lead one to the perception that Apple thinks VoiceOver is done, and that no new features need to be added.
But, and this is not the first time I have said this on this blog, the quality of existing features is steadily declining, too. In fact, with the release of both OS X 10.10 “Yosemite” and iOS 8, the quality of many accessibility features has reached a new all-time low. AppleVis has a great summary of current problems in iOS 8. But let me give you two examples.
The first problem is so obvious and easily reproducible that it is hard to imagine Apple’s quality assurance engineers didn’t catch it: on the iPhone in Safari, going back from one page to the previous one with the Back button. When VoiceOver is running, I haven’t found a single page where this simple action did not trigger a freeze in Safari and VoiceOver. This was the case in early betas of iOS 8, and it is still not fixed in the 8.1.2 release several months later.
The second example concerns using Safari (again) with VoiceOver, but this time on the iPad. Using Safari itself, or any application that uses one of the two WebView components, I am reliably able to trigger a full restart of the iPad at least twice a day, most days even more often. That causes all apps to quit, sometimes without being able to save their stuff; it interrupts work, and it leaves the iPad in a semi-unstable state in which it is better to fully shut it down and restart it fresh.
“Wait”, you might say, “this sounds like a problem from iOS 7 days, and wasn’t it fixed?” Yes, I would reply, it was, but it returned in full force in iOS 8. But mostly on the iPad. I think I’ve only seen one or two restarts on my iPhone since iOS 8 came out.
The first of these two examples is such low-hanging fruit that, if I were working at Apple, I would be deeply ashamed that it is still around. The second one is harder, but not so hard that an engineer sitting down for a day and using the device with VoiceOver enabled wouldn’t run into it.
And now back to Yosemite. I will again concentrate on Safari + VoiceOver, since this is where I spend a lot of my time. Support has regressed so badly, especially on dynamic pages, that it is barely possible to use Facebook on Yosemite with VoiceOver. VoiceOver skips over whole stories, loses focus, and does all sorts of other funky stuff. And no, not even the newest public beta of 10.10.2, which is supposed to contain VoiceOver fixes, addresses these problems. Moreover, editing in any form field on the web is so slow, and speech is doubled so often, that it is not really possible to do productive work there. And if you have a braille display connected, expect it to drop out every few seconds when moving the cursor. The sounds VoiceOver makes are the equivalent of plugging and unplugging a USB braille display every 3 to 4 seconds.
All of these problems have been reported to Apple, some by multiple users. They were tweeted about publicly, and now I am reiterating them to show my support for Marco, John, and others who rightly assert that Apple has a real quality problem on their hands, one which higher management seems to be quite thick-skinned about. Blinded by their own brilliant marketing, or something?
Apple does have a fantastic accessibility story. No other operating system I know has so many features for such a big variety of people built in (speaking mostly for iOS now). But they’re on the verge of badly betraying the trust many people with disabilities have put in them, by delivering updates of such poor quality that it is virtually impossible to take advantage of these features in full force. Especially when such basic functionality as I describe in Safari, and as AppleVis summarize on their blog, is getting in the way of use every minute of every day now. And Apple really need to be careful, because others may catch up sooner rather than later. On the web, the most robust accessibility is already being delivered by a different desktop browser/screen reader combination on a different operating system. As for mobile: Android is the lesser of the competition, even in its latest update, in my opinion. But Microsoft’s foundation in Windows Phone 8.1 is really solid. They just need to execute on it much better, and they could really kick ass and become a viable alternative to Apple on mobile.
So here is my appeal to Tim Cook, CEO of Apple: Put action behind these words again! Go to those extraordinary lengths you speak of, not just by cranking out new features that are half-baked, but by making sure your engineers work on the overall quality in a way that does not make your most faithful users feel like they’re being let down by your company! Because that is exactly how it feels. This trend started in earnest in iOS 7, and worsened in iOS 8. And it has been with OS X even longer, starting in Mountain Lion and worsening ever since. Please show us, our friends and family who started using your products because of our recommendations, and those of us who took a leap of faith early on and put our trust, our productivity, our daily lives in your products, that these are not just empty words, and that that award you received actually means something beyond the minutes you gave that speech!
Sincerely,
a caring, but deeply concerned, user
http://www.marcozehe.de/2015/01/06/apple-are-losing-their-edge-also-in-accessibility-quality/
|
Bogomil Shopov: Coworking spaces in Brussels for Fosdem |
If you are heading to BXL for Fosdem and, like me, you arrive early, you may need a coworking space. Thanks to Twitter and some followers, I got two nice tips that I want to share with you. Both spaces offer a free day for open source enthusiasts.
See you there: Beta cowork and Transforma BXL
P.S. If you need a hotel or information about Brussels, check out my post here.
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
https://globau.wordpress.com/2015/01/06/happy-bmo-push-day-123/
|
Nigel Babu: Non-unified Builds on Try |
Last week, I was trying to fix a non-unified build bustage with glandium’s help, and I kept failing to fix it on mozilla-inbound. I don’t usually build locally myself, so I had to push to Try, and there was no documentation on how to do it. It turns out that it needs a custom mozconfig override (thanks dbaron). You need to add ac_add_options --disable-unified-compilation to the file build/mozconfig.common.override in one of your pushes; it’s probably easiest if it’s the tip. We updated the wiki with this small piece of documentation.
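For reference, after that change the override file would contain just the one line (a minimal sketch; anything else already in the file stays as-is):

    # build/mozconfig.common.override
    ac_add_options --disable-unified-compilation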
This is mostly a note to myself. And perhaps for contributors who might search for it.
|
Nick Alexander: We should build Firefox for Android with an external build system |
For some time, I have been advocating for building Fennec with a full-featured Android build system. At one time, the state of the art was Ant; now, the solution blessed by Google is Gradle [1]; both Facebook (buck) and Twitter (pants) have built and published open-source build systems; and Google (again!) has built a separate proprietary internal build system (blaze) of their own. The major lesson I take from the proliferation of these tools is:
The best organizations invest in their tooling to make their people more productive.
However, this needs to be countered with the inescapable reality that build systems, and especially Android build systems, are big investments.
As the lone Firefox for Android build peer, I do not support Mozilla making a big investment into building our own Android build system.
For the sake of argument, however, such a Mozilla-developed build system would look like a blend of custom moz.build and a recursive Make backend. The amount of new and un-tested build system code would be very high; we’d incur serious risk, because all that new code needs to support a delicate combination of features that are constantly moving; and in all likelihood we wouldn’t achieve even 10% of what the existing solutions achieve in a calendar year [2].
Rather than diving deep into a particular illustration of a feature that would be difficult to express in such a Mozilla-developed build system, let me give instead a whirlwind tour of features we could take advantage of — today — if we used an existing solution.
Caveat: I have evaluated Gradle and buck extensively. I have not evaluated pants. (Blaze is not available.) Not everything claimed will be true for all external build systems.
These libraries could, at first, just be third-party libraries that we want to build against. (We’ve so far either held off using third-party libraries that reference Android resources, or modified them to be "part of" Fennec.)
In future, we could build Fennec as a set of loosely-coupled libraries. (In some ways we do: GeckoView, Stumbler, Search Activity, Background Services… we’re doing this in a limited way.) Such small libraries let us test faster and parallelize the development process.
There’s a possibility to reduce risk here, too: I claim that we can improve our existing practice of landing new features behind a build flag by developing mostly self-contained Android libraries and then including those libraries behind a build flag. This lets developers land the code in the tree, and possibly even compile and unit test on TBPL, but not actually ship in Fennec until the build flag is flipped.
Merging resources and manifests is a solved problem; let’s use an existing solution to do it.
Our existing build system supports compiling our existing libraries in parallel, but it doesn’t support DEXing in parallel [3]. Moving to an existing build system that compiles and DEXes in parallel will buy us a faster build even without separating our libraries. We get big wins (which I will try to quantify in the near future) if we do a little more work and split our libraries further.
We’re about to land Bug 1106593, which addresses the fact that we weren’t Proguarding everything we should have been. It adds two bespoke stages to our build. This type of change would be simpler if we were using an existing implementation (which would have certainly Done the Right Thing) and only had to modify a Proguard configuration.
Fennec is pushing up against the classes.dex size limit. Google recently added MultiDex support, which allows further run-time loaded classesN.dex files. Gradle exposes the additional build configuration with a handful of lines of code. Someday, we’ll probably need this support.
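For illustration, enabling MultiDex with the Android-Gradle plugin is roughly this much configuration (a sketch against the 2014-era plugin; treat the exact property and artifact names as assumptions):

    // build.gradle, module level: a hypothetical minimal MultiDex setup
    android {
        defaultConfig {
            multiDexEnabled true
        }
    }

    dependencies {
        compile 'com.android.support:multidex:1.0.0'
    }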
Gradle exposes a non-trivial build step for resource shrinking. We want it.
By build configurations (another thing an external build system gives us directly), I mean building a standard release version of Fennec, and a debug (non-Proguarded) version of Fennec for JUnit 3 testing, and possibly even multiple additional versions (resource constrained; API limited) all with one "TBPL build job". Such build configurations reduce automation machine load and reduce the risk of build flags being orphaned.
There’s a similar, but limited, feature known as split APKs that we might want too.
Gradle in particular has first-class support for bootstrapping itself via the Gradle wrapper. This wrapper prepares the Gradle part of the build environment automatically and reduces the time needed to get a build started.
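A sketch of how that works in practice (task and script names follow standard Gradle conventions):

    # Run once to generate and check in the wrapper scripts...
    gradle wrapper
    # ...after which anyone can build without installing Gradle themselves;
    # the wrapper downloads the pinned Gradle version on first use.
    ./gradlew assembleDebug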
External build systems, in particular Gradle, integrate with Maven/Ivy repositories for consuming dependencies. That means we can specify, in the tree, which versions we currently support, and the build system will do the work of fetching, configuring, and installing them. That means a faster first build and fewer frustrating dependency chases trying to figure out what changed and how your local machine differs from the official TBPL configuration.
Gradle integrates, mostly tightly, with IntelliJ IDEA and Google’s Android Studio. This is a big win for new contributors (just open mobile/android in your IDE to build) and a huge boon for seasoned developers, who get the power of IntelliJ for their daily work.
Catches bugs! For free!
Right now, the moz.build based build system produces essentially all artifacts at build time. That includes test APKs, packaging the GeckoView library, an example application, etc. It’s slow! Most of the time we only need the ability to package GeckoView; what we really want day to day is just a fresh Fennec APK.
An external build system would also bring first-class support for publishing the GeckoView artifact to a Maven repository. This would eliminate the need for https://ci.mozilla.org/job/mozilla-central-geckoview/configure.
As the GeckoView library gains consumers, providing source JARs and Javadoc JARs will make consumers lives significantly easier.
Help me make our builds better! Discussion is best conducted on the mobile-firefox-dev mailing list and I’m nalexander on irc.mozilla.org and @ncalexander on Twitter.
[1] Truthfully, Gradle with Google’s own rapidly developing Android-Gradle plugin. But this was the case with Ant, too: Google shipped an Ant build.xml that consumers extended and customized.
[2] That is, if Mozilla even was willing to pay a single developer to work on such a build system for a whole year. As it stands, we don’t pay a single full-time equivalent to work exclusively on any part of the build system. We get a lot of glandium’s time, a little of gps’s time, some of mshal’s time, some of my time, etc, etc.
[3] To be clear, we could support DEXing in parallel and merging the results, but it’s yet another layer of complexity in our Make-based build system. Add in supporting MultiDex and producing debug and release artifacts… let’s not.
|
Jennifer Boriss: 8% Increase in reddit Account Registrations |
http://www.donotlick.com/2015/01/05/8-increase-in-reddit-account-registrations/
|
Armen Zambrano: Run Android test jobs locally |
http://feedproxy.google.com/~r/armenzg_mozilla/~3/liLSogEVuwc/run-android-test-jobs-locally.html
|
Roberto A. Vitillo: A/B test for Telemetry histograms |
A/B tests are a simple way to determine the effect caused by a change in a software product against a baseline, i.e. version A against version B. An A/B test is essentially an experiment that indiscriminately assigns a control or experiment condition to each user. It’s an extremely effective method to ascertain causality, which is hard, at best, to infer with statistical methods alone. Telemetry comes with its own A/B test implementation, Telemetry Experiments.
Depending on the type of data collected and the question asked, different statistical techniques are used to verify if there is a difference between the experiment and control version:
Those are just the most commonly used methods.
The frequentist statistical hypothesis testing framework is based on a conceptually simple idea: assuming that we live in a world where a certain baseline hypothesis (null hypothesis) is valid, what’s the probability of obtaining the results we observed? If the probability is very low, i.e. under a certain threshold, we gain confidence that the effect we are seeing is genuine.
To give you a concrete example, say I have reason to believe that the average battery duration of my new phone is 5 hours but the manufacturer claims it’s 5.5 hours. If we assume the average battery indeed has a duration of 5.5 hours (null hypothesis), what’s the probability of measuring an average duration that is 30 minutes lower? If the probability is small enough, say under 5%, we “reject” the null hypothesis. Note that there are many things that can go wrong with this framework and one has to be careful in interpreting the results.
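To make this concrete, here is a minimal sketch of such a test in Python; the measurements below are made up purely for illustration:

    import numpy as np
    from scipy import stats

    # Hypothetical battery-life measurements for the new phone, in hours.
    measurements = np.array([5.1, 4.8, 5.3, 4.9, 5.0, 5.2, 4.7, 5.0])

    # One-sample t-test against the manufacturer's claimed 5.5-hour mean.
    t_statistic, p_value = stats.ttest_1samp(measurements, popmean=5.5)

    if p_value < 0.05:
        print("Reject the null hypothesis of a 5.5-hour average duration")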
Telemetry histograms are a different beast though. Each user submits their own histogram for a certain metric, and the histograms are then aggregated across all users for version A and version B. How do you determine if there is a real difference, or if what you are looking at is just due to noise? A chi-squared test would seem the most natural choice, but on second thought its assumptions are not met, as entries in the aggregated histograms are not independent of each other. Luckily, we can avoid having to sit down and come up with a new mathematically sound statistical test. Meet the permutation test.
Say you have a sample of a metric for users of version A and a sample of the same metric for users of version B. You measure a difference d between the means of the two samples. Now assume there is no difference between A and B, randomly shuffle entries between the two samples, and compute the difference of the means again. Do this again, and again, and again… What you end up with is a distribution of the differences of the means for all the reshuffled samples. Finally, you compute the probability of getting the original difference d, or a more extreme value, by chance: and welcome our newborn hypothesis test!
Going back to our original problem of comparing aggregated histograms for the experiment and control group, instead of having means we have aggregated histograms and instead of computing the difference we are considering the distance; everything else remains the same as in the previous example:
    import numpy as np
    import pandas as pd

    def mc_permutation_test(xs, ys, num):
        # xs and ys are pandas Series whose entries are per-user histograms;
        # histogram_distance is defined elsewhere in the original post.
        n, k = len(xs), 0
        h1 = xs.sum()
        h2 = ys.sum()
        diff = histogram_distance(h1, h2)  # observed distance between aggregates
        zs = pd.concat([xs, ys])
        zs.index = np.arange(0, len(zs))
        for j in range(num):
            # Shuffle users between the two groups and recompute the distance.
            zs = zs.reindex(np.random.permutation(zs.index))
            h1 = zs[:n].sum()
            h2 = zs[n:].sum()
            k += diff < histogram_distance(h1, h2)
        return k / float(num)  # float division; k / num truncates on Python 2
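Hypothetical usage, assuming xs and ys are pandas Series of per-user histograms for the control and experiment groups:

    p_value = mc_permutation_test(xs, ys, num=10000)
    if p_value < 0.05:
        print("The aggregated histograms differ by more than chance would predict")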
Most statistical tests were created in a time when there were no [fast] computers around, but nowadays churning through a Monte Carlo permutation test is not a big deal, and one can easily run one in a reasonable amount of time.
http://robertovitillo.com/2015/01/05/ab-test-for-telemetry-histograms/
|
Jess Klein: Shall we dance? On-boarding Webmakers |
http://jessicaklein.blogspot.com/2015/01/shall-we-dance-on-boarding-webmakers.html
|
Will Kahn-Greene: ElasticUtils: I'm stepping down and deprecating the project |
ElasticUtils is^Wwas a Python library for building and executing Elasticsearch searches.
See the Quickstart for more details.
For the last few years, I've maintained ElasticUtils. It's a project I picked up after Jeff, Dave and Erik left.
Since I picked it up, we've had 13 or so releases. We switched the underlying library from pyes to pyelasticsearch and then to elasticsearch-py. We added support for a variety of Elasticsearch versions up to about 1.0. We overhauled the documentation. We overhauled the test suite. We generalized it from just a Django library into a library with a Django extension. We added MappingTypes. We worked really hard to make it backwards compatible from version to version. It's been a lot of work, but mostly in spurts.
Since the beginning of the project in May 2011 (a year before I picked up maintenance), we've had 33 people contribute patches. We've had many more point out issues, ask questions on the #elasticutils IRC channel, send me email about this and that.
That's a good run for an open source project. There are a bunch of things I wish I had done differently when I was at the helm, but it's a good run nonetheless.
The current state of things, however, is not great. ElasticUtils has a ton of issues. Lots of technical debt accrued over the years, several architectural decisions that turned out to suck and have obnoxious consequences, lack of support for new features in Elasticsearch > 1.0, etc. It'll take a lot of work to clean that up. Plus it's got a CamelCase name and that's so passé.
At PyCon 2014, Rob, Jannis and I worked with Honza on the API for the library that is now elasticsearch-dsl-py. This library has several of the things I like about ElasticUtils. It's a project that's being supported by the Elasticsearch folks. It's got momentum. It supports many of the Elasticsearch > 1.0 features. There are several libraries that sit on top of elasticsearch-dsl-py as Django shims now.
ElasticUtils is at a point where it's got a lot of problems and there are good alternatives that are better supported.
Thus, this is an excellent time for me to step down as maintainer. Going forward from today, I won't be doing any more development or maintenance.
Further, I'm deprecating the project because no one should be using an unmaintained broken project when there are better supported alternatives. That way lies madness!
So, if you're using ElasticUtils, what do you do now?
ElasticUtils 0.10.2 has the things you need to bridge from Elasticsearch 0.90 to 1.x. You could upgrade to that version and then switch to elasticsearch-dsl-py or one of the Django shims.
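A minimal sketch of that migration path with pip (both package names are as published on PyPI):

    pip install elasticutils==0.10.2    # the bridge release spanning ES 0.90 and 1.x
    # ...port your queries, then switch to the successor library:
    pip install elasticsearch-dsl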
On that note, so long ElasticUtils and thank you to everyone who's helped on and used ElasticUtils over the years!
http://bluesock.org/~willkg/blog/dev/elasticutils/elasticutils.stepping_down
|
Ben Kero: Finding the perfect ancillary travel device |
As anybody who knows me will be aware, I’m always interested in new tech, especially when it’s running free software and is portable enough to be in my everyday-carry arsenal.
For the past month or so I’ve been looking at a few devices as a secondary to my laptop to carry with me. In a few weeks I’ll be joining those already there at the third installment of Hackerbeach, on the Caribbean island of Dominica.
I wanted an embedded Linux system that could do everything. The scope of this device just kept getting bigger the more I thought about it.
It should:
I originally was looking at 3d printing an enclosure around a board like a Raspberry Pi, then duct taping a USB battery and 2.5” hard drive to it.
Over time the scope of this increased. One of the fond memories of last Hackerbeach was when we all huddled around a darkened wall in our fort to enjoy a movie. We had borrowed a small projector and speaker from one of the beach-front restaurants, which didn’t need them since it was low season and they weren’t putting on any showings. We cobbled together a crude mount out of a table and various smartphones to angle the projector. The power was provided by a power strip doing double duty as an extension cable *JUST* long enough that it could only work if it was suspended from a dining chair.
This year I wanted to make sure we have a good setup in case we want to do this again. A few years ago I had bought a ShowWX laser-based pico projector, but upon examining it again I found that the battery was dead and that it only outputs 10 lumens of light. That would make viewing in anything but pure darkness less than pleasant.
While looking for standalone tiny projectors (AAXA seems to have the market cornered), I also looked at the hardware that could power them.
After I decided to look at what else was out there besides my Raspberry Pi concoction, I found the Cubietruck combined with an Ewell case. This combination included space for a 2.5” SATA drive, a 5300mAh lithium-polymer battery with integrated charging circuitry, built-in Wi-Fi/Bluetooth, and HDMI output to connect to a projector.
Unfortunately, from the sites I saw, the case looks too big for my 19L backpack. Although I haven’t physically held a unit yet, there seems to be quite a bit of empty space inside the case. I’d like something exactly like this but in a smaller form factor. For my spartan carrying patterns this simply won’t work.
I hunted through more single-board ARM computers to find alternatives. Unfortunately none really come close. They’re all lacking some important feature.
After a while I decided to investigate having a computer with a projector built in. If I could find something that could integrate everything above AND a projector, I could save even more space. This would allow me to bring movies, and combined with a few compact Bluetooth controllers (NES30s) would allow us to play video games together.
This led me to a few solutions, each with compromises. The first I discovered was a Gigabyte PC/projector. This little beauty is basically an Intel NUC with an 848x480 projector built into it. Unfortunately it doesn’t support a 2.5” hard drive, limiting it to an mSATA SSD. Likewise it doesn’t come with a battery, and it draws 40W at peak consumption. Holy power consumption, Batman!
After searching around a little more I came across a Liliputing article about the ZTE LivePro Android-based projector. This device is a compromise, but it is still the closest thing to what I want. It has an integrated 75-lumen LED projector with some 2013-era smartphone internals running Android 4.2. It features a 5Ah battery that can be used to charge USB devices, supports (supposedly super-laggy) Wireless Display, and has HDMI input with an assuredly god-awful 1W speaker. Likewise, it charges with a 12V adapter, which seems like a giant pain in the ass, but something entirely solvable.
Not terribly impressive, but I have a feeling that I can make it work for most of the requirements listed above, and if I can’t I can always put some leg work in to root it and run a GNU/Linux userland on it.
This strange device was demoed at the 2013 CES and eventually picked up by Sprint to be sold at a contract-tied discount. Unfortunately none are presently up on eBay, presumably because nobody in their right mind bought one.
The good news for crazy people like me is that a cellular-lacking version called the ZTE S-Pro is under preorder and due to be released January 6th. I’ve preordered one on Amazon Prime, and hope to return home to a delivered version when I’m done with linux.conf.au responsibilities.
This setup should be fairly compact. With it, I should only need to bring:
On a scale of walnuts to meatballs, how crazy am I for wanting this, and for figuring out how to do it?
|
Kartikaya Gupta: Firewalling for fun and safety |
TL;DR: If you have a home wi-fi network, think about setting up multiple separate VLANs as a "defense in depth" technique to protect hosts from malware.
The long version: A few years ago when I last needed to get a router, I got one which came with DD-WRT out of the box (made by Buffalo). I got it because DD-WRT (and Tomato) were all the rage back then and I wanted to try it out. While I was setting it up I noticed I could set up multiple Wi-Fi SSIDs on my home network, each with different authentication parameters. So I decided to create two - one for my own use (WPA2 encrypted) and one for guests (with a hidden SSID and no encryption). That way when somebody came over and wanted to use my Wi-Fi I could just give them the (hidden) SSID name and they would be able to connect without a password.
This turned out to be a pretty good idea and served me well. Since then, though, I've acquired many more devices that also need Wi-Fi access, and in the interest of security I've made my setup a little more complex. Consider the webcam I bought a few months ago. It shipped from somewhere in China and comes with software that I totally don't trust. Not only is it not open-source, it's not upgradeable, and it regularly tries to talk to some Amazon EC2 server. It would be pretty bad if malware managed to infect the webcam and not only used it to spy on me, but also used it as a staging area to attack other devices on my network.
(Aside: most people with home Wi-Fi networks implicitly treat the router as a firewall, in that random devices outside the network can't directly connect to devices inside the network. For the most part this is true, but of course it's not hard for a persistent attacker to do periodic port scans to see if there are any hosts inside your network listening for connections via UPnP or whatever, and use that as an entrance vector if the service has vulnerabilities.)
Anyway, back to the webcam. I ended up only allowing it to connect to an isolated Wi-Fi network, and used firewall rules on the router to prevent all access to or from it, except for a single server, which could access a single port on it. That server basically extracted the webcam feed and exposed it to the rest of my network. Doing this isn't a perfect solution, but it adds a layer of security that makes it harder for malware to penetrate.
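On a DD-WRT router this kind of isolation comes down to a handful of iptables rules. A conceptual sketch of what mine do (the interface name, addresses, and port are hypothetical):

    # Allow only the trusted server to reach the webcam's single open port.
    iptables -A FORWARD -s 192.168.2.10 -d 192.168.3.20 -p tcp --dport 554 -j ACCEPT
    # Drop all other traffic to or from the isolated Wi-Fi network (wl0.2).
    iptables -A FORWARD -i wl0.2 -j DROP
    iptables -A FORWARD -o wl0.2 -j DROP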
There's a ton of other Wi-Fi devices on my network - a printer, various smartphones, a couple of Sonos devices, and so on. As the "Internet of Things" grows, this list is bound to grow as well. If you care about ensuring the security of machines on your network, and about not letting them become part of some random hacker's botnet, knowing how to turn your router into a full-fledged firewall is a very useful tool indeed. Even if you choose not to lock things down to the extent that I do, simply monitoring connections between devices inside your network and hosts outside your network can be a huge help.
|
Toni Hermoso Pulido: Mozilla Janus proxy in a Raspberry Pi |
These winter holidays I've been taking care of my home network and I finally found some time to set up a Raspberry Pi for my everyday Internet needs.
In the process of checking different proxy services I stumbled on Janus, a Node.js SPDY-based proxy. According to this blogpost, the service seems to be targeting mobile contexts, providing a service somewhat similar to Opera Turbo or the more recent Google Data Compression Proxy. But, compared to these options, the Janus server is fully open-source. Moreover, as an interesting optional feature, it's also possible to block ads based on this ad list provider.
You can already try the service yourself by using the latest Firefox version with this addon (for Firefox and Firefox mobile) and the default proxy server.
However, since I wanted to manage my own proxy service (for privacy and performance reasons), I decided to install the proxy server in a Raspberry Pi of my own.
Since Janus is a Node.js application, you first need to set up a working environment, for instance with tools like nvm. For required libraries, such as mozjpeg, some extra packages may be needed; they are easily installable with apt-get, for example libtool or libpng-dev. If a module build fails, you likely need to install another development package.
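A rough sketch of those setup steps (the exact package list and Node.js version are assumptions; a failing module build will tell you which -dev package is still missing):

    # Native build dependencies for modules such as mozjpeg
    sudo apt-get install libtool libpng-dev
    # Node.js environment via nvm, then Janus's own dependencies
    nvm install 0.10
    nvm use 0.10
    npm install    # run from the Janus checkout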
Proxy configuration can be set up via the config/default.yml file. For instance, the proxy port can be changed there, along with several other options such as logging or enabling ad blocking by default. A recommendable feature is enabling the Redis database for caching purposes (take into account that the Redis default port is 6379, so you may want to change the value provided there).
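As an illustration, the relevant part of config/default.yml might look roughly like this (the key names here are assumptions; check the file shipped with Janus for the real ones):

    # config/default.yml - illustrative keys only
    port: 55055       # proxy port
    adblock: true     # enable ad blocking by default
    redis:
      host: 127.0.0.1
      port: 6379      # change this if your Redis listens elsewhere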
Once set up, a PAC URL can be found at https://yourRaspberryIP:55055 (unless you changed the default port). By the way, notice the 'https' protocol when you type the address.
And that's all. Since no authentication mechanism is included so far, you have to be careful if you plan to open the proxy port from your home network to the Internet (I didn't try whether HTTP authentication behind a reverse proxy might work).
As a side note, from my short experience, even though the addon allows you to tune whether you prefer bandwidth savings or lower latency, I must admit that the Raspberry Pi might still not be the most suitable machine for this job, and a more powerful single-board computer (let's say a Cubietruck) might do better.
https://www.cau.cat/blog/mozilla_janus_proxy_in_a_raspberry_pi
|
John O'Duinn: RelEngCon 2015: Call for papers |
Preparations for RelEngConf 2015 are officially in full swing. This means two things:
1) RelEngCon 2015 is now accepting proposals for talks/sessions. If you have a good industry-related or academic-focused topic in the area of Release Engineering, please have a look at the Release Engineering conference guidelines, and submit your proposal before the deadline of 23-jan-2015.
2) Both RelEngCon 2014 and RelEngCon 2013 were great. The mixture of attendees and speakers, from academia and battle-hardened industry, made for some riveting topics and side discussions. It's too early to tell who exactly will be speaking in 2015, but it's not too early to start planning your travel to Florence, Italy!! Also of note: RelEngCon 2015 will be just a few weeks after the publication of the first IEEE Special Issue on Release Engineering. Looks like RelEngCon 2015 is going to be special also.
For further details about the conference, or submitting proposals, see http://releng.polymtl.ca/RELENG2015/html/index.html. If you build software delivery pipelines for your company, or if you work in a software company that has software delivery needs, I recommend you follow @relengcon, block off May 19th, 2015 on your calendar and book now. It will be well worth your time.
See you there!
John.
http://oduinn.com/blog/2015/01/04/relengcon-2015-call-for-papers/
|
Pascal Chevrel: My Q3/Q4-2014 report |
I didn't write the Q3 report, so this report covers both Q3 and Q4.
Regular l10n-drivers work such as patch reviews, pushes to production, meetings and maintenance of past projects is excluded.
We released version 3.5 of Transvision, and I worked on a number of features and bug fixes. Among them: you can now just run start.sh after cloning the repository and it will set up everything and launch a Transvision instance locally. Setting up Transvision locally for development now takes about 20 minutes with a fast DSL connection. Overall, I worked a lot on Transvision during Q3 but very little during Q4. There was a good balance of improvements targeting our different users (localizers, Transvision devs, API consumers) and we made some solid improvements to both the code and our workflow to ship faster in 2015. We also had 8 people contributing to the codebase over the semester and several people popping into our IRC channel asking about the project, which is good. I have several big patches with good potential in branches, but unfortunately I didn’t have time to finish them in Q4, and it seems I won’t be able to work on them in Q1 either; maybe in Q2.
I found a way to generate data about web parts localization over time from Langchecker (Langchecker was not meant to do that, so I had to be creative), so now we generate graphs that show the history of web parts completion per locale: for example, the graph for Bulgarian shows what happens when a new high-performing localizer joins Mozilla.
I updated the overview status page showing the number of untranslated strings per locale, with a summary ranking locale status and also indicating which locales are on Locamotion. It also now counts strings for all projects on the web dashboard, not just mozilla.org.
I added the number of strings, words and files to the summary view since this is something we got asked for often from other teams.
Historically, the process of publishing updates to the Firefox page on Google Play has been owned by many different people, most if not all of them no longer working for Mozilla. So there was no real formal process to update the copy and get that copy translated. Sylvestre Ledru (a Debian core dev who recently joined the Mozilla Release Management team) decided to establish a process with as much automation as possible via the Google Play API to update our copy in English when we ship a new release, and I decided to help on this front to get that content localized and published simultaneously with English.
So now the process of translating the copy is back with the l10n-drivers team and our own tools (meaning tracking, dashboards, and integration with the translation platforms we use…).
I created a small app to track and QA translations without pushing to Google Play.
And I also created an associated JSON API that release drivers can use.
I am working with Sylvestre on getting the release drivers’ tools to automatically update our localized listings along with the en-US copy. We hope to get that working and functional in Q1, and from then on to always have an updated copy of the Firefox listing on Google Play for all of our supported locales.
This is localization management work that I am doing as a volunteer, as Mozilla does not put paid resources on Thunderbird. My colleague Flod is also helping, as well as Théo (my intern and a French localizer) and Kohei Yoshino (ex Mozilla Japan webdev and ex mozilla.org localizer, now living in Canada and still contributing to www.mozilla.org at the code level).
Thunderbird web content is still hosted on the old mozilla.org framework; this content is not actively maintained by anybody and that framework will eventually disappear. The l10n process is also broken there. For Thunderbird’s 10th anniversary, I volunteered to help manage the key content for Thunderbird on our current platform (Bedrock) so that the Thunderbird in-product pages can get updated and translated. If you use Thunderbird, you may have seen the new version of the start page (some people noticed).
A few notes though:
* If you are a Mozilla localizer but are not interested in translating Thunderbird content, please don’t translate it even if you see it on your locale dashboard; help us find a person to do these translations for Thunderbird in your language instead!
* I am not volunteering to manage Thunderbird product localization; I just don’t have enough time for that. If you want to help on that front, please contact the new Thunderbird team leaders, they will be happy to get help!
* 23 locales have translated the page, thanks to you guys! If your locale is not done and you want to get involved in Mozilla web localization, please contact me! (pascal AT mozilla DOT com)
I try to spend as much time as I can actively finding new localizers and growing our localization community with core contributors who help us ship software and projects. It’s great to build tools, and we are always happy when we can improve the productivity of a localizer by 20% with a new process or tool, but when we find a new core localizer for a language, we get a lot more than a 20% productivity boost ;). After all, we build tools for people to use them.
New Latvian localizer for web parts: Janis Marks Gailis. Latvian does not have many speakers, and historically we had only one localizer (Raivis Deijus) focused on products, so now we can say that we have a team spanning both products and web sites. A third person would of course be welcome.
New Croatian localizer: Antun Koncic. Antun used to be the Croatian dictionary packager for Thunderbird and was no longer involved in Mozilla. Welcome back then!
We could use the help of a couple more people for Polish, especially to translate products and technical content. If you are interested, please leave a comment.
I went to the Encontro ibérico de tradutores de software libre a linguas minorizadas in Santiago de Compostela, Spain. There I met several of our Mozilla localizers for Catalan, Galician, Basque, Aragonese and Asturian. We talked about processes, tools and community building. It was very interesting and great fun.
Like a thousand other Mozilla employees and many volunteers, I spent a work week in Portland, USA, where I met several localizers we had invited. It was the first time I had met some of these localizers in real life, in particular people from India and Bangladesh. That was nice.
In parallel to the 10 years of Mozilla celebrations, we created and launched a Firefox for Android extension called Privacy Coach, which teaches users how to set up privacy settings in their browser.
I worked with Margaret Leibovic on that project, and I reused the code and process that I had used for the add-ons we created during the Australis launch earlier this year to translate the add-on while it was being developed. The idea is that it shouldn’t be disruptive for either localizers or developers.
We worked with .lang files that were automatically converted to the right set of dtd/properties files for the extension, along with the packaging of the locale into the extension (updating install.rdf, chrome.manifest, and the locale folder), all integrated into the extension via pull requests on GitHub during its development. We got 12 locales done, which is more than the 8 locales initially planned. Two locales we wanted, Hungarian and Indonesian, didn’t make it in the end, so they are part of the locales I will work with in Q1 to find more contributors.
http://www.chevrel.org/carnet/?post/2015/01/03/My-Q3/Q4-2014-report
|