Daniel Stenberg: Firefox OS Flatfish Bluedroid fix |
Hey, when I just built my own Firefox OS (b2g) image for my Firefox OS Tablet (flatfish) straight from the latest sources, I ran into this (known) problem:
Can't find necessary file(s) of Bluedroid in the backup-flatfish folder. Please update the system image for supporting Bluedroid (Bug-986314), so that the needed binary files can be extracted from your flatfish device.
So, as I struggled to figure out the exact instructions on how to proceed from this, I figured I should jot down what I did in the hopes that it perhaps will help a fellow hacker at some point:
Enjoy!
http://daniel.haxx.se/blog/2014/08/29/flatfish-bluedroid-fix/
|
Wladimir Palant: Using a Firefox extension to work around Selenium WebDriver's limitations |
My Google search link fix extension had a bunch of regressions lately and I realized that testing its impact on the search pages manually isn’t working — these pages are more complicated than they look, and there are lots of configuration options affecting them. So I decided to look into Selenium WebDriver in order to write integration tests that would automate Firefox. All in all, writing the tests is fairly simple once you get used to the rather arcane API. However, the functionality seems to be geared towards very old browsers (think IE6) and some features are nowhere to be found.
One issue: there is no way to focus an element without clicking it. Clicking isn’t always an option, since it might trigger a link for example. That issue turned out to be solved fairly easily:
driver.execute_script("arguments[0].focus()", element)
The ability to pass elements as parameters to WebDriver.execute_script is very useful, so it is surprising that it doesn’t seem to be documented properly anywhere.
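For the record, here is roughly how it works (a minimal sketch using the standard Selenium Python bindings; the page and locator are just an example): any WebElement passed after the script string shows up as arguments[n], and the script’s return value travels back to Python.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.google.com/")
element = driver.find_element_by_name("q")  # example: the search box
# The element becomes arguments[0] inside the script:
driver.execute_script("arguments[0].focus()", element)
# Return values come back the same way:
assert driver.execute_script("return document.activeElement === arguments[0]", element)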
But what about working with tabs or middle-clicking links? It seems that tabbed browsing hadn’t been invented yet back when that API was designed, so it only has a concept of windows — not very useful. WebDriver will only let you work with the currently selected tab; inactive tabs are off limits. And WebDriver.execute_script isn’t any help here either, as it won’t let you run privileged code.
After briefly considering using send_keys functionality to open the Web Console on about:config and typing code into it (yes, it looks like that would actually work), I decided to go with a less crazy solution: install an additional extension to implement the necessary functionality. So if a test wants the element to be middle-clicked it can trigger a custom event:
driver.execute_script('''
var event = document.createEvent("Events");
event.initEvent("testhelper_middleclick", true, false);
arguments[0].dispatchEvent(event);
''', element)
And the extension listens to that event:
window.gBrowser.addEventListener("testhelper_middleclick", function(event)
{
  // Resolve nsIDOMWindowUtils for the content window, which lets privileged
  // code synthesize real (trusted) mouse events.
  let utils = event.target.ownerDocument.defaultView
                          .QueryInterface(Ci.nsIInterfaceRequestor)
                          .getInterface(Ci.nsIDOMWindowUtils);
  let rect = event.target.getBoundingClientRect();
  // Parameters: type, x, y, button (1 = middle), click count, modifiers.
  utils.sendMouseEvent("mousedown", rect.left + 1, rect.top + 1, 1, 1, 0);
  utils.sendMouseEvent("mouseup", rect.left + 1, rect.top + 1, 1, 1, 0);
}, false, true); // final argument: accept untrusted events from content
This works nicely, but what if you want to get data back? For example, I want to know which URLs were requested at the top level — in particular, whether there was a redirect before the final URL. Selenium only allows you to get notified of URL changes that were initiated by Selenium itself (not very helpful) or poll driver.current_url (which doesn’t work). The solution is to have the extension register a progress listener and write all URLs seen to the Browser Console:
window.gBrowser.addTabsProgressListener({
  onStateChange: function(browser, webProgress, request, flags, status)
  {
    // Only consider state changes for the top-level window (document)...
    if (!(flags & Ci.nsIWebProgressListener.STATE_IS_WINDOW))
      return;
    // ...and only when a load starts or is redirected.
    if (!(flags & Ci.nsIWebProgressListener.STATE_START) && !(flags & Ci.nsIWebProgressListener.STATE_REDIRECTING))
      return;
    if (request instanceof Ci.nsIChannel)
      Cu.reportError("[testhelper] Loading: " + request.URI.spec);
  }
});
You can use driver.get_log("browser") to retrieve the full list of console messages. Each message also has a timestamp, which allows, for example, extracting only the URLs seen after the previous check.
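Put together, a test can poll for new URLs like this (a minimal sketch; get_log() entries are dicts with "timestamp" and "message" keys, and the marker string matches the Cu.reportError() call above):
def urls_seen_since(driver, last_timestamp):
    # Extract top-level URLs the helper extension logged after last_timestamp.
    marker = "[testhelper] Loading: "
    urls = []
    for entry in driver.get_log("browser"):
        if entry["timestamp"] > last_timestamp and marker in entry["message"]:
            urls.append(entry["message"].split(marker, 1)[1])
    return urls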
Side-note: I first considered using MozMill for this. However, it is geared very much towards Firefox development and much of the Selenium functionality would have to be reimplemented (locating installed Firefox instance, default Firefox preferences for a test profile, dismissing alerts on web pages and so on).
|
Priyanka Nag: Maker Party gets grander in Pune this time |
Maker Party, Pune |
http://priyankaivy.blogspot.com/2014/08/maker-party-gets-grander-in-pune-this.html
|
Daniel Stenberg: Going to FOSDEM 2015 |
Yeps, I’m going there and I know several friends are going too, so this is just my way of pointing this out to those of you who still haven’t made up your minds! There’s still a lot of time left, as this event takes place in late January next year.
I intend to try to get a talk accepted this time, and I would love to meet up with more curl contributors and fans.
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
http://globau.wordpress.com/2014/08/27/happy-bmo-push-day-110/
|
Monica Chew: Firefox 32 supports Public Key Pinning |
http://monica-at-mozilla.blogspot.com/2014/08/firefox-32-supports-public-key-pinning.html
|
Gervase Markham: Email Account Phishers Do Manual Work |
For a while now, criminals have been breaking into email accounts and using them to spam the account’s address book with phishing emails or the like. More evil criminals will change the account password, and/or delete the address book and the email to make it harder for the account owner to warn people about what’s happened.
My mother recently received an email, purportedly from my cousin’s husband, titled “Confidential Doc”. It was a mock-up of a Dropbox “I’ve shared an item with you” email, with the “View Document” URL actually being http://proshow.kz/excel/OLE/PPS/redirect.php. This (currently) redirects to http://www.affordablewebdesigner.co.uk/components/com_wrapper/views/wrapper/tmpl/dropbox/, although it redirected to another site at the time. That page says “Select your email provider”, explaining “Now, you can sign in to dropbox with your email”. When you click the name of your email provider, it asks you for your email address and password. And boom – they have another account to abuse.
But the really interesting thing was that my mother, not being born yesterday, emailed back saying “I’ve just received an email from you. But it has no text – just an item to share. Is it real, or have you been hacked?” So far, so cautious. But she actually got a reply! It said:
Hi, I sent it, It is safe.
(The random capital was in the original.)
Now, this could have been a very smart templated autoresponder, but I think it’s more likely that the guy stayed logged into the account long enough to “reassure” people and to improve his hit rate. That might tell us interesting things about the value of a captured email account, if it’s worth spending manual effort trying to convince people to hand over their creds.
http://feedproxy.google.com/~r/HackingForChrist/~3/fkIc1eCIr3w/
|
Alex Vincent: An insightful statement from a mathematics course |
I’m taking a Linear Algebra course this fall. Last night, my instructor said something quite interesting:
“We are building a model of Euclidean geometry in our vector space. Then we can prove our axioms of geometry (as theorems).”
This would sound like technobabble to me even a week ago, but what he’s really saying is this:
“If you can implement one system’s basic rules or axioms in another system, you can build a model of that first system in the second.”
Programmers and website builders build models of systems all the time, and unconsciously, we build on top of other systems. Think about that when you write JavaScript code: the people who implement JavaScript engines are building a model that will be used by millions of people they’ll never meet. I suppose the same could be said of any modern programming language, compiler, transpiler or interpreter.
The beauty for those of us who work in the model is that we (theoretically) shouldn’t need to care what platform we run on. (In practice, there are differences, which is why we want platforms to implement standards, so we can concentrate on using the theoretical model we depend on.)
On the flip side, that also means that building and maintaining that fundamental system we build on top of has to be done very, very carefully. If you’re building something for others to use (and chances are, when you’re writing software, you’re doing exactly that), you really have to think about how you want others to use your system, and how others might try to use your system in ways you don’t expect.
It’s really quite a profound duty that we take on when we craft software for others to use.
|
Chris AtLee: Gotta Cache 'Em All |
Waaaaaaay back in February we identified overall network bandwidth as a cause of job failures on TBPL. We were pushing too much traffic over our VPN link between Mozilla's datacentre and AWS. Since then we've been working on a few approaches to cope with the increased traffic while at the same time reducing our overall network load. Most recently we've deployed HTTP caches inside each AWS region.
The primary target for caching was downloads of build/test/symbol packages by test machines from file servers. These packages are generated by the build machines and uploaded to various file servers. The same packages are then downloaded many times by different machines running tests. This was a perfect candidate for caching, since the same files were being requested by many different hosts in a relatively short timespan.
Tooltool is a simple system RelEng uses to distribute static assets to build/test machines. While the machines do maintain a local cache of files, the caches are often empty because the machines are newly created in AWS. Having the files in local HTTP caches speeds up transfer times and decreases network load.
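The pattern itself is simple. As a sketch of the idea (this is not proxxy’s actual interface; the host names here are invented), a download script on a test machine can try its region’s cache first and fall back to the origin file server:
import requests

CACHE = "http://cache.us-east-1.example.net"  # hypothetical per-region cache
ORIGIN = "https://ftp.example.mozilla.org"    # hypothetical file server

def fetch(path):
    # A cache hit is served from inside the AWS region and never touches the VPN.
    try:
        resp = requests.get(CACHE + path, timeout=60)
        if resp.ok:
            return resp.content
    except requests.RequestException:
        pass  # cache unreachable; fall through to the origin
    resp = requests.get(ORIGIN + path, timeout=600)
    resp.raise_for_status()
    return resp.content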
Initial deployment was completed on August 8th (end of week 32 of 2014). You can see by the graph above that we've cut our bandwidth by about 50%!
There are a few more pieces of low-hanging fruit for caching. We have internal pypi repositories that could benefit from caches. There's a long tail of other miscellaneous downloads that could be cached as well.
There are other improvements we can make to reduce bandwidth as well, such as moving uploads from build machines to be outside the VPN tunnel, or perhaps to S3 directly. Additionally, a big source of network traffic is doing signing of various packages (gpg signatures, MAR files, etc.). We're looking at ways to do that more efficiently. I'd love to investigate more efficient ways of compressing or transferring build artifacts overall; there is a ton of duplication between the build and test packages between different platforms and even between different pushes.
Great! As always, all our work has been tracked in a bug, and worked out in the open. The bug for this project is 1017759. The source code lives in https://github.com/mozilla/build-proxxy/, and we have some basic documentation available on our wiki. If this kind of work excites you, we're hiring!
Big thanks to George Miroshnykov for his work on developing proxxy.
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
http://globau.wordpress.com/2014/08/26/happy-bmo-push-day-109/
|
Daniel Stenberg: My home setup |
I work in my home office which is upstairs in my house, perhaps 20 steps from my kitchen and the coffee refill. I have a largish desk with room for a number of computers. The photo below shows the three meter beauty. My two kids have their two machines on the left side while I use the right side of it for my desktop and laptop.
The kids use my old desktop computer with a 20'' Dell screen and my old 15.6'' dual-core Asus laptop. My wife has her laptop downstairs and we have a permanent computer installed underneath the TV for media (an Asus VivoPC).
I’m primarily developing C and C++ code and I’m frequently compiling rather large projects – repeatedly. I use a desktop machine for my ordinary development, equipped with a fairly powerful 3.5GHz quad-core Core i7 CPU, and I have my OS, my home dir and all source code on an SSD. I have a larger HDD for larger and slower content. With ccache and friends, this baby can build Firefox really fast. I put my machine together from parts myself, as I couldn’t find a suitable one focused on horsepower yet with a “normal” 2D graphics card that works fine with Linux. I use a Radeon HD 5450 based ASUS card, which works fine with fully open source drivers.
I have two basic 24 inch LCD monitors (Benq and Dell), both using 1920x1200 resolution. I like having lots of windows up; nothing runs full-screen. I use KDE as my desktop and I edit everything in Emacs. Firefox is my primary browser. I don’t shut down this machine; it runs a few simple servers for private purposes.
My machines (and my kids’) all run Debian Linux, typically of the unstable flavor allowing me to get new code reasonably fast.
My desktop keyboard is a Func KB-460, a mechanical keyboard with some funky extra candy such as a red backlight and two USB ports. Both my keyboard and my mouse are wired, not wireless, to avoid the need for batteries or recharging in this environment. My mouse is a basic and old Logitech MX 310.
I have a crufty old USB headset with a mic that works fine for hangouts and listening to music when the rest of the family is home. I have a Logitech webcam thing sitting on the screen too, but I hardly ever use it for anything.
I sometimes need to move around and work from other places – going to conferences, or our regular Mozilla work weeks. Hence I also have a laptop that is powerful enough to build Firefox in a sane amount of time: a Lenovo Thinkpad W540 with a 2.7GHz quad-core Core i7, 16GB of RAM and 512GB of SSD. It has the most annoying touch pad on it. I don’t like that it doesn’t have explicit buttons; for example, both-clicking (to simulate a middle-click), as when pasting text in X11, is virtually impossible.
On this machine I also run a VM with win7 and an associated development environment installed, so I can build and debug Firefox for Windows on it.
I have a second portable: a small and lightweight 10.1'' netbook, an Eeepc S101, that I used to bring when going somewhere just to do presentations, but recently I’ve started to simply use my primary laptop even for those occasions – primarily because the netbook is too slow to do anything else on.
I do video conferences a couple of times a week and we use Vidyo for that. Its Linux client is shaky to say the least, so I tend to use my Nexus 7 tablet for it, since the Vidyo app at least works decently on that. It also allows me to quite easily change location when it becomes necessary, which it sometimes does since my meetings tend to occur in the evenings and there are also varying amounts of “family activities” going on!
For backup, I have a Synology NAS equipped with 2TB of disk in a RAID stashed downstairs, on the wired in-house gigabit ethernet. I run an rsync job every night that syncs the important stuff to the NAS, and a second rsync that mirrors relevant data over to a friend’s house, just in case something terribly bad should go down. My NAS backup has already saved me in a big way at least once.
Next to the NAS downstairs is the house printer, attached to the gigabit network even though it has a wifi interface of its own. I just like the increased reliability of having the “fixed services” in the house on the wired network.
The printer also has scanning capability, which has actually come in handy several times. The thing works nicely from my Linux machines as well as from my wife’s Windows laptop.
I have fiber going directly into my house. It is still “just” a 100/100 connection at the other end of the fiber, since at the time I installed it they didn’t yet have equipment to deliver beyond 100 megabit in my area. I’m sure I’ll upgrade to something more impressive in the future, but this is a pretty snappy connection already. I also have just a few milliseconds of latency to my primary servers.
Having the fast uplink is perfect for doing good remote backups.
I have a lowly D-Link DIR 635 router and wifi access point providing wifi on the 2.4GHz and 5GHz bands and gigabit speed on the wired side. It was dead cheap and it just works. It NATs my traffic and port forwards some ports through to my desktop machine.
The router itself can also update the dyndns info, which ultimately allows me to use a fixed name for my home machine even without a fixed IP.
Frequent Wifi users in the household include my wife’s laptop, the TV computer and all our phones and tablets.
When I installed the fiber I gave up the copper connection to my home, and since then I’ve used IP telephony for the “land line” – basically a little box that translates IP to old phone tech, so I keep using my old DECT phone. Basically only our parents still call this number, and it has been useful for the kids to use for outgoing calls up until they got their own mobile phones.
It doesn’t cost very much, but the usage is dropping over time so I guess we’ll just give it up one of these days.
I have a Nexus 5 as my daily phone. I also have a Nexus 7 and Nexus 10 that tend to be used by the kids mostly.
I have two Firefox OS devices for development/work.
|
Kaustav Das Modak: Dear smartphone user, it is time to unlearn |
|
Zack Weinberg: The literary merit of right-wing SF |
The results are in for the 2014 Hugo Awards. I’m pleased with the results in the fiction categories—a little sad that “The Waiting Stars” didn’t win its category, but it is the sort of thing that would not be to everyone’s taste.
Now that it’s all over, people are chewing over the politics of this year’s shortlist, particularly the infamous “sad puppy” slate, over on John Scalzi’s blog, and this was going to be a comment there, but I don’t seem to be able to post comments there, so y’all get the expanded version here instead. I’m responding particularly to this sentiment, which I believe accurately characterizes the motivation behind Larry Correia’s original posting of his slate, and the motivations of those who might have voted for it:
I too am someone who likes, and dislikes, works from both groups of authors. However, only one group ever gets awards. The issue is not that you cannot like both groups, but that good works from the PC crowd get rewarded and while those from authors that have been labeled “unacceptable” are shunned, and that this happens so regularly, and with such predictability that it is obviously not just quality being rewarded.
- “BrowncoatJeff”
I cannot speak to the track record, not having followed genre awards closely in the past. But as to this year’s Hugo shortlist, it is my considered opinion that all the works I voted below No Award (except The Wheel of Time, whose position on my ballot expresses an objection to the eligibility rules) suffer from concrete, objective flaws on the level of basic storytelling craft, severe enough that they did not deserve a nomination. This happens to include Correia’s own novels, and all the other works of fiction from his slate that made the shortlist. Below the fold, I shall elaborate.
(If you’re not on board with the premise that there is such a thing as objective (observer-independent) quality in a work of art, and that observers can evaluate that independently from whether a work suits their own taste or agrees with their own politics, you should probably stop reading now. Note that this is not the same as saying that I think all Hugo voters should vote according to a work’s objective quality. I am perfectly fine with, for instance, the people who voted “Opera Vita Aeterna” below No Award without even cracking it open—those people are saying “Vox Day is such a despicable person that no matter what his literary skills are, he should not receive an award for them” and that is a legitimate critical stance. It is simply not the critical stance I am taking right now.)
Let me first show you the basic principles of storytelling craft that I found lacking. I did not invent them; similar sentiments can be found in, for instance, “Fenimore Cooper’s Literary Offenses,” the Turkey City Lexicon, Ursula LeGuin’s Steering the Craft, Robert Schroeck’s A Fanfic Writer’s Guide To Writing, and Aristotle’s Poetics. This formulation, however, is my own.
1. Above all, a story must not be boring. The reader should care, both about “what happens to these people,” and about the ultimate resolution to the plot.
2. Stories should not confuse their readers, and should enable readers to anticipate—but not perfectly predict—the consequences of each event.
3. The description, speech, and actions of each character in a story should draw a clear, consistent picture of that character’s personality and motivations, sufficient for the reader to anticipate their behavior in response to the plot.
4. Much like music, stories should exhibit dynamic range in their pacing, dramatic tension, emotional color, and so forth; not for nothing is “monotony” a synonym for “tedium.”
5. Style, language, and diction should be consistent with the tone and content of the story.
6. Rules 2–5 can be broken in the name of Art, but doing so demands additional effort and trust from the reader, who should, by the end of the story, believe that it was worth it.
With that in hand, I shall now re-review the works that didn’t deserve (IMNSHO) to make the shortlist, in order from most to least execrable.
This is textbook bad writing. The most obvious problem is the padded, purple, monotonously purple prose, which obviously fails point 4, and less obviously fails point 5 because the content isn’t sufficiently sophisticated to warrant the style. The superficial flaws of writing are so severe that it’s hard to see past them, but if you do, you discover that it fails all the other points as well, simply because there wasn’t enough room, underneath all of those purple words, for an actual plot. It’s as if you tried to build a building entirely out of elaborate surface decorations, without first putting up any sort of structural skeleton.
These are both character studies, which is a difficult mode: if you’re going to spend all of your time exploring one character’s personality, you’d better make that one character interesting, and ideally also fun to be around. In these cases, the authors were trying for tragically flawed antiheroes and overdid the anti-, producing characters who are nothing but flaw. Their failures are predictable; their manpain, tedious; their ultimate fates, banal. It does not help that they are, in many ways, the same extruded antihero product that Hollywood and the comic books have been foisting on us for going on two decades now, just taken up to 11.
Khardov also fails on point 2, being told out of order for no apparent reason, causing the ending to make no sense. Specifically, I have no idea whether the wild-man-in-the-forest scenes are supposed to occur before or after the climactic confrontation with the queen, and the resolution is completely different depending on which way you read it.
Meathouse Man was not on Correia’s slate. It’s a graphic novel adaptation of a story written in the 1970s, and it makes a nice example of point 6. When it was originally written, a story with a completely unlikable protagonist, who takes exactly the wrong lessons from the school of hard knocks and thus develops from a moderate loser into a complete asshole, would perhaps have been … not a breath of fresh air, but a cold glass of water in the face, perhaps. Now, however, it is nothing we haven’t seen done ten billion times, and we are no longer entertained.
These are told competently, with appropriate use of language, credible series of events, and so on. The plots, however, are formula, the characters are flat, the ideas are not original, and two months after I read them, I’m hard pressed to remember enough about them to criticize!
I may be being more harsh on Torgerson than the median voter, because I have read Enemy Mine and so I recognize The Chaplain’s Legacy as a retread. (DOES NO ONE READ THE CLASSICS?!) Similarly, The Exchange Officers is prefigured by hundreds of works featuring the Space Marines. I don’t recall seeing remotely piloted mecha before, but mecha themselves are cliché, and the “remotely piloted” part sucks most of the suspense out of the battle scenes, which is probably why it hasn’t been done.
Correia’s own work falls just short of good, but in a way that is more disappointing than if it had been dull and clichéd. Correia clearly knows how to write a story that satisfies all of the basic storytelling principles I listed. He is never dull. He comes up with interesting plots and gets the reader invested in their outcome. He’s good at set pieces; I can still clearly envision the giant monster terrorizing Washington DC. He manages dramatic tension effectively, and has an appropriate balance between gripping suspense and calm quiet moments. And he is capable of writing three-dimensional, nuanced, plausibly motivated, sympathetic characters.
It’s just that the only such character in these novels is the principal villain.
This is not to say that all of the other characters are flat or uninteresting; Sullivan, Faye, and Francis are all credible, and most of the other characters have their moments. Still, it’s the Chairman, and only the Chairman, who is developed to the point where the reader feels fully able to appreciate his motivations and choices. I do not say sympathize; the man is the leader of Imperial Japan circa 1937, and Correia does not paper over the atrocities of that period—but he does provide more justification for them than anyone had in real life. There really is a cosmic horror incoming, and the Chairman really does think this is the only way to stop it. And that makes for the best sort of villain, provided you give the heroes the same depth of characterization. Instead, as I said last time, the other characters are all by habit unpleasant, petty, self-absorbed, and incapable of empathizing with people who don’t share their circumstances. One winds up hoping for enough reverses to take them down a peg. (Which does not happen.)
Looking back, does any of that have anything to do with any of the authors’ political stances, either in the real world, or as expressed in their fiction? Not directly, but I do see a common thread which can be interpreted to shed some light on why “works from the PC crowd” may appear to be winning a disproportionate number of awards, if you are the sort of person who uses the term “PC” unironically. It’s most obvious in the Correia, being the principal flaw in that work, but it’s present in all the above.
See, I don’t think Correia realized he’d written all of his Good Guys as unpleasant, petty, and self-absorbed. I think he unconsciously assumed they didn’t need the same depth of character as the villain did, because of course the audience is on the side of the Good Guys, and you can tell who the Good Guys are from their costumes (figuratively speaking). It didn’t register on him, for instance, that a captain of industry who’s personally unaffected by the Great Depression is maybe going to come off as greedy, not to mention oblivious, for disliking Franklin Delano Roosevelt and his policies, even if the specific policy FDR was espousing on stage was a genuinely bad idea because of its plot consequences. In fact, that particular subplot felt like the author had his thumb on the scale to make FDR look bad—but the exact same subplot could have been run without giving any such impression, if the characterization had been more thorough. So if you care about characterization, you’re not likely to care for Correia’s work or anything like it. Certainly not enough to shortlist it for an award honoring the very best the genre has to offer.
Now, from out here on my perch safely beyond the Overton window, “politically correct,” to the extent it isn’t a vacuous pejorative, means “something which jars the speaker out of his daydream of the lily-white suburban 1950s of America (possibly translated to outer space), where everything was pleasant.” (And I do mean his.) Thing is, that suburban daydream is, still, 60 years later, in many ways the default setting for fiction written originally in English. Thanks to a reasonably well-understood bug in human cognition, it takes more effort to write fiction which avoids that default. It requires constant attention to ensure that presuppositions and details from that default are not slipping back in. And most of that extra effort goes into—characterization. It takes only a couple sentences to state that your story is set in the distant future Imperium of Man, in which women and men alike may serve in any position in the military and are considered completely equal; it takes constant vigilance over the course of the entire novel to make sure that you don’t have the men in the Imperial Marines taking extra risks to protect from enemy fire those of their fellow grunts who happen to be women. Here’s another, longer example illustrating how much work can be involved.
Therefore, it seems to me that the particular type of bad characterization I disliked in the above works—writing characters who, for concrete in-universe reasons, are unlikable people, and then expecting the audience to cheer them on anyway because they’ve been dressed up in These Are The Heroes costumes—is less likely to occur in writing that would get labeled “works from the PC crowd.” The authors of such works are already putting extra effort into the characterization, and are therefore less likely to neglect to write heroes who are, on the whole, likable people whom the reader wishes to see succeed.
https://www.owlfolio.org/fiction/the-literary-merit-of-right-wing-sf/
|
Clint Talbert: The Odyssey of Per-Push, On-Phone Firefox OS Automation |
When we started automating tests for Firefox OS, we knew that we could do a lot with automated testing on phone emulators–we could run in a very similar environment to the phone, using the same low level instruction set, even do some basic operations like SMS between two emulator processes. Best of all, we could run those in the cloud, at massive scale.
But, we also knew that emulator based automation wasn’t ever going to be as complete as actually testing on real phones. For instance, you can’t simulate many basic smart phone operations: calling a number, going to voice-mail, toggling airplane mode, taking a picture, etc. So, we started trying to get phones running in automation very early with Firefox OS, almost two years ago now.
We had some of our very early Unagi phones up and running on a desk in our office. That eventually grew to a second generation of Hamachi based phones. There were a couple of core scalability problems with both of these solutions:
1. The pre-production phones were hard to obtain in quantity, and their hardware couldn’t be depended on.
2. Recovering a dead phone meant physically pulling its battery.
3. We had no dashboard where a system like this could easily plug in and report its results.
4. The tests weren’t well known to developers or run in a well-understood environment, so failures weren’t easily actionable.
Because of points 1 and 2, we were unable to truly scale the number of devices. We only had one person in Mountain View, and what we had thought of as a part time job of pulling phone batteries soon became his full time job. We needed a better solution to increase the number of devices while we worked in parallel to create a better dashboard for our automation that would allow a system like this to easily plug in and report its results.
The Flame reference device solved that first problem. Now we had a phone whose hardware we could depend on, and Jon Hylands was able to create a custom battery harness for it so that our scripts could automatically detect dead phones and remotely power cycle them (and, in the future, monitor power consumption). Because we (Mozilla) commissioned the Flame phone ourselves, there were no partner-related issues with obtaining pre-production devices – we could easily get as many as we needed. After doing some math to understand our capacity needs, we got 40 phones to seed our prototype lab to support per-push automation.
As I mentioned, we were solving the dashboard problem in parallel, and that has now been deployed in the form of Treeherder, which will be the replacement for TBPL. That solves point 3. All that now remains is point 4. We have been hard at work on crafting a unified harness to run the Gaia Javascript tests on device which will also allow us to run the older, existing python tests as well until they can be converted. This gives us the most flexibility and allows us to take advantage of all the automation goodies in the existing python harness–like crash detection, JSON structured logging, etc. Once it is complete, we will be able to run a smaller set of the same tests the developers run locally per each push to b2g-inbound on these Flame devices in our lab. This means that when something breaks, it will break tests that are well known, in a well understood environment, and we can work alongside the developers to understand what broke and why. By enabling the developers and QA to work alongside one another, we eliminate the scaling problem in point 4.
It’s been a very long road to get from zero to where we are today. You can see the early pictures of the “phones on a desk” rack and pictures of the first 20 Flames from Stephen’s presentation he gave earlier this month.
A number of teams helped get us to this point, and it could not have been done without the cooperation among them: the A*Team, the Firefox OS Performance team, the QA team, and the Gaia team all helped get us to where we are today. You can see the per-push tests showing up on the Treeherder Staging site as we ensure we can meet the stability and load requirements necessary for running in production.
Last week, James Lal and his new team inherited this project. They are working hard to push the last pieces to completion as well as expanding it even further. And so, even though Firefox OS has had real phone automation for years, that system is now coming into its own. The real-phone automation will finally be extremely visible and easily actionable for all developers, which is a huge win for everyone involved.
http://clinttalbert.com/2014/08/22/the-odyssey-of-per-push-on-phone-firefox-os-automation/
|
Eric Shepherd: The Sheppy Report: August 22, 2014 |
This week looks slower than usual when you look at this list, but the week involved a lot of research.
Requested that the Firefox_for_developers macro be updated to list both newer and older versions of Firefox.
So… it was a wildly varied day today. But I got a lot of interesting things done.
http://www.bitstampede.com/2014/08/22/the-sheppy-report-august-22-2014/
|
Gervase Markham: HSBC Weakens Their Internet Banking Security |
From a recent email about “changes to your terms and conditions”. (“Secure Key” is their dedicated keyfob 2-factor solution; it’s currently required both to log in and to pay a new payee. It’s rather well done.)
These changes will also enable us to introduce some enhancements to our service over the coming months. You’ll still have access to the full Internet Banking service by logging on with your Secure Key, but in addition, you’ll also be able to log in to a limited service when you don’t use your Secure Key – you’ll simply need to verify your identity by providing other security information we request. We’ll contact you again to let you know when this new feature becomes available to you.
Full details of all the changes can be found below which you should read carefully. If you choose not to accept the changes, you have the right to ask us to stop providing you with the [Personal Internet Banking] service, before they come into effect. If we don’t hear from you, we’ll assume that you accept the changes.
Translation: we are lowering the security we use to protect your account information from unauthorised viewing and, as long as you still want to be able to access your account online at all, there’s absolutely nothing you can do about it.
http://feedproxy.google.com/~r/HackingForChrist/~3/Fu8-Gb2J3UQ/
|
Amy Tsay: What Healthy Relationships Teach Us About Healthy Communities |
In organizations where communities form (whether around a product, mission, or otherwise), there is often a sense of perplexity or trepidation around how to engage with them. What is the proper way to talk to community members? How do I work with them, and what can I do to keep the community healthy and growing? The good news is, if you know what it takes to have a healthy personal relationship, you already know how to build a healthy community.
Prioritize them
In a good relationship, we prioritize the other person. At Mozilla, the QA team makes it a point to respond to volunteer contributors within a day or two. A lack of response is one of the top reasons why people leave online communities, so it’s important not to keep them hanging. It doesn’t feel good to volunteer your time on a project only to be left waiting when you ask questions or request feedback, just as it doesn’t when your partner doesn’t return your phone calls.
Be authentic
Authenticity and honesty in a relationship are the building blocks of trust. If you make a mistake, admit it and set it right. Your tone and word choice will reflect your state of mind, so be aware of it when composing a message. When you come from a place of caring and desire to do what’s right for the community, instead of a place of fear or insecurity, your words and actions will foster trust.
Be appreciative
Strong relationships are formed when both parties value and appreciate each other. It’s a great feeling when you take out the trash or do the dishes, and it’s noticed and praised. Make it a ritual to say thanks to community members who make an impact, preferably on the spot, and publicly if possible and appropriate.
Be their champion
Be prepared to go to bat for the community. I was once in a relationship with a partner who would not defend me in situations where I was being mistreated; it didn’t end well. It feels nice to be advocated for, to be championed, and it creates a strong foundation. When you discover a roadblock or grievance, take the time to investigate and talk to the people who can make it right. The community will feel heard and valued.
Empathize
The processes and programs that support community participation require an understanding of motivation. To understand motivation, you have to be able to empathize. Everyone views the world from their own unique perspective, so it’s important to try to understand other people’s perspectives, even when they’re different from your own.
Set expectations
Understand your organization’s limitations, as well as your own, and communicate them. If your partner expects you to be home at a certain time and you don’t show up, the anger you encounter likely has more to do with not being told you were going to be late than with the lateness itself.
Guidelines and rules for participation are important components as well. I once featured content from a community member and was met by an angry online mob, because although the content was great, the member hadn’t reached a certain level of status. The guidelines didn’t cover eligibility for featuring, and up until then only longer-term participants had been featured, so the community’s expectations were not met.
Not apples to apples
I would never want to get anyone in trouble by suggesting they treat their community members exactly the same as their partners. Answering emails from anyone while having dinner with your loved one is not advised. The take-away is that there isn’t any mystery to interacting with a community. Many of the ingredients of a healthy community are ones found in healthy relationships, and most reassuring of all, we already know what they are.
|
Robert Kaiser: Mirror, Mirror: Trek Convention and FLOSS Conferences |
http://home.kairo.at/blog/2014-08/mirror_mirror_trek_convention_and_floss
|
Peter Bengtsson: premailer now with 100% test coverage |
One of my most popular GitHub Open Source projects is premailer. It's a python library for combining HTML and CSS into HTML with all its CSS inlined into tags. This is a useful and necessary technique when sending HTML emails because you can't send those with an external CSS file (or even a CSS style tag in many cases).
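For those who haven’t used it, here is a minimal sketch of what the library does, using premailer’s transform() helper:
from premailer import transform

html = """<html>
<head><style>p.footer { font-size: 10px; color: #666; }</style></head>
<body><p class="footer">Unsubscribe</p></body>
</html>"""

# The <style> rules are folded into style="" attributes on the matching tags,
# yielding roughly: <p class="footer" style="font-size:10px; color:#666">Unsubscribe</p>
print(transform(html))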
The project has had 23 contributors so far, and as always people come in, get some itch of theirs scratched, and then leave. I really try to get good test coverage, and when people come with code I almost always require that it come with tests too.
But sometimes you miss things. Also, this project was born as a weekend hack that slowly morphed into an actual package and its own repository and I bet there was code from that day that was never fully test covered.
So today I combed through the code and plugged all the holes where there wasn't test coverage.
Also, I set up Coveralls (project page), which is an awesome service that hooks itself up with Travis CI so that on every build and every Pull Request, the tests are run with --with-cover on nosetests and that output is reported to Coveralls.
The relevant changes you need to make are:
1) You need to go to coveralls.io (sign in with your GitHub account) and add the repo.
2) Edit your .travis.yml file to contain the following:
before_install:
  - pip install coverage
...
after_success:
  - pip install coveralls
  - coveralls
And you need to execute your tests so that coverage is calculated (the coverage module stores everything in a .coverage file, which coveralls analyzes and sends). So in my case I changed it to this:
script:
  - nosetests premailer --with-cover --cover-erase --cover-package=premailer
3) You must also give coveralls some clues so that it reports on only the relevant files. Here's what mine looked like:
[run]
source = premailer

[report]
omit = premailer/test*
Now, I get to have a cute "coverage: 100%" badge in the README and when people post pull requests Coveralls will post a comment to reflect how the pull request changes the test coverage.
I am so grateful for all these wonderful tools. And it's all free too!
|
Mozilla WebDev Community: Beer and Tell – August 2014 |
Once a month, web developers from across the Mozilla Project get together to upvote stories on Hacker News from each of our blogs. While we’re together, we usually end up sharing a bit about our side projects over beers, which is why we call this meetup “Beer and Tell”.
There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.
freddyb shared (via a ghost presentation by yours truly) a small webapp he made that shows the current availability of meeting rooms in the Mozilla Berlin office. The app reads room availability from Zimbra, which Mozilla uses for calendaring and booking meeting rooms. It also uses moment.js for rendering relative dates to let you know when a room will be free.
The discussion following the presentation brought up a few similar apps that other Mozilla offices had made to show off their availability, such as the Vancouver office’s yvr-conf-free and the Toronto office’s yyz-conf-free.
nigelb shared (via another ghost presentation, this time split between myself and laura) hgstats, which shows publicly-available graphs of the general health of Mozilla’s mercurial servers. This includes CPU usage, load, swap, and more. The main magic of the app is that it loads graph images from graphite; the images are publicly visible, while graphite itself isn’t.
nigelb has offered a bounty of beer for anyone who reviews the app code for him.
Pomax shared an early preview of Inkcyclopedia, an online encyclopedia of ink colors. Essentially, Pomax bought roughly 170 different kinds of ink, wrote down samples with all of them, photographed them, and then collected those images along with the kind of ink used for each. Once finished, the site will be able to accept user-submitted samples and analyze them to attempt to identify the color and associate it with the ink used. Unsurprisingly, the site is able to do this using the RGBAnalyse library that Pomax shared during the last Beer and Tell, in tandem with RgbQuant.js.
gsathya shared a screencast showing off a project that has one browser window running a WebGL game and sharing its screen with another browser window via WebRTC. The demo currently uses Chrome’s desktopCapture API for recording the screen before sending it to the listener over WebRTC.
Alas, we were unable to beat Hacker News’s voting ring detection. But at least we had fun!
If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!
See you next month!
https://blog.mozilla.org/webdev/2014/08/21/beer-and-tell-august-2014/
|