Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
http://globau.wordpress.com/2014/10/16/happy-bmo-push-day-115/
|
Jennie Rose Halperin: New /contribute page |
In an uncharacteristically short post, I want to let folks know that we just launched our new /contribute page.
I am so proud of our team! Thank you to Jess, Ben, Larissa, Jen, Rebecca, Mike, Pascal, Flod, Holly, Sean, David, Maryellen, Craig, PMac, Matej, and everyone else who had a hand. You all are the absolute most wonderful people to work with and I look forward to seeing what comes next!
I’ll be posting intermittently about new features and challenges on the site, but I first want to give a big virtual hug to all of you who made it happen and all of you who contribute to Mozilla in the future.
|
David Boswell: Investing more in community building |
I’m very excited to see the new version of Mozilla’s Get Involved page go live. Hundreds of people each week come to this page to learn about how they can volunteer. Improvements to this page will lead to more people making more of an impact on Mozilla’s projects.
This page has a long history: it existed on www.mozilla.org when Mozilla launched in 1998 and has been redesigned a few times before. There is something different about the effort this time, though.
We’ve spent far more time researching, prototyping, designing, testing, upgrading and building than ever before. This reflects Mozilla’s focus this year on enabling communities that have impact, and that goal has mobilized experts from many teams to make the experience much better for new volunteers who use this page.
Mozilla’s investment in community in 2014 is showing up in other ways too, including a brand new contribution dashboard, a relaunched contributor newsletter, a pilot onboarding program, the first contributor research effort in three years and much more.
All of these pieces are coming together and will give us a number of options for how we can continue and increase the investment in community in 2015. Look for more thoughts soon on why that is important, what that could look like and how you could help shape it.
http://davidwboswell.wordpress.com/2014/10/15/investing-more-in-community-building/
|
Mozilla Open Policy & Advocacy Blog: Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellows Host |
{This is the second installment in our series highlighting the 2015 Host Organizations for the Ford-Mozilla Open Web Fellows program. We are now accepting applications to be a 2015 fellow. Amnesty International is a great addition to the program, especially as new technologies have such a profound impact – both positive and negative – on human rights. With its tremendous grassroots advocacy network and decades of experience advocating for fundamental human rights, Amnesty International, its global community and its Ford-Mozilla Fellow are poised to continue having impact on shaping the digital world for good.}
Spotlight on Amnesty International: A Ford-Mozilla Open Web Fellow Host
By Tanya O’Carroll, Project Officer, Technology and Human Rights, Amnesty International
For more than fifty years Amnesty International has campaigned for human rights globally: exposing information that governments will go to extreme measures to hide; connecting individuals who are under attack with solidarity networks that span the globe; fighting for policy changes that often seem impossible at first.
We’ve developed many tools and tactics to help us achieve change.
But the world we operate in is also changing.
Momentous developments in information and communications networks have introduced new opportunities and threats to the very rights we defend.
The Internet has paved the way for unprecedented numbers of people to exercise their rights online, crucially freedom of expression and assembly.
The ability for individuals to publish information and content in real-time has created a new world of possibilities for human rights investigations globally. Today, we all have the potential to act as witnesses to human rights violations that once took place in the dark.
Yet large shadows loom over the free and open Web. Governments are innovating and seeking to exploit new tools to tighten their control, with daunting implications for human rights.
This new environment requires specialist skills to respond. When we challenge the laws and practices that allow governments to censor individuals online or unlawfully interfere with their privacy, it is vital that we understand the mechanics of the Internet itself–and integrate this understanding in our analysis of the problem and solutions.
That’s why we’re so excited to be an official host for the Ford-Mozilla Open Web Fellowship.
We are seeking someone with the expert skill set to help shape our global response to human rights threats in the digital age.
Amnesty International’s work in this area builds on our decades of experience campaigning for fundamental human rights.
Our focus is on the new tools of control – that is, the technical and legislative tools that governments are using to clamp down on opposition, restrict lawful expression and the free flow of information, and unlawfully spy on private communications on a massive scale.
In 2015 we will be actively campaigning for an end to unlawful digital surveillance and for the protection of freedom of expression online in countries across the world.
Amnesty International has had many successes in tackling entrenched human rights violations. We know that as a global movement of more than 3 million members, supporters and activists in more than 150 countries and territories we can also help to protect the ideal of a free and open web. Our success will depend on building the technical skills and capacities that will keep us ahead of government efforts to do just the opposite.
Demonstrating expert leadership, the fellow will contribute their technical skills and experience to high-quality research reports and other public documents, as well as international advocacy and public campaigns.
If you are passionate about stopping the Internet from becoming a weapon that is used for state control at the expense of freedom, apply now to become a Ford-Mozilla Open Web Fellow and join Amnesty International in the fight to take back control.
Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.
|
Doug Belshaw: Web Literacy Map 2.0 community calls |
To support the development of Web Literacy Map v2.0, we’re going to host some calls with the Mozilla community.
There is significant overlap between the sub-section of the community interested in the Web Literacy Map and the sub-section involved in the Badge Alliance working group on Digital/Web Literacies. It makes sense, therefore, to use the time between cycles of the Badge Alliance working group to focus on developing the Web Literacy Map.
We’ll have a series of seven community calls on the following dates. The links take you to the etherpad for that call.
You can subscribe to a calendar for these calls at the link below:
Calendar: http://bit.ly/weblitmap2-calls
We’ll be using the #TeachTheWeb forum for asynchronous discussion. I do hope you’ll be able to join us!
Questions? Comments? Direct them to @dajbelshaw / doug@mozillafoundation.org
|
Robert Nyman: How to become efficient at e-mail |
For many years I’ve constantly been receiving hundreds of e-mails every day. A lot of them work-related, a number of them personal. And I’ve never seen this as an issue, since I have an approach that works for me.
Many people complain that e-mail is broken, but I think it’s a great form of communication. I can deal with it when I feel I have the time and can get back to people when it suits me. If I need to concentrate on something else, I won’t let it interrupt my flow – I just turn notifications off, keep e-mail closed or don’t read it, and then get to it when I can.
Your mileage might, and will, vary, of course, but here are the main things that have proven to work very well for me.
When you open up your Inbox with all new e-mail, deal with it. There and then. Because having seen the e-mail, maybe even glanced at some of the contents beyond the subjects as well, I believe it has already reserved a part of your brain. You’ll keep on thinking about it till you actually deal with it.
In some cases, naturally it’s good to ponder your reply, but mostly, just go with your knowledge and act on it. Some things are easiest to deal with directly, some need a follow-up later on (more on that in Flags and Filters below).
Utilize different flags for various actions you want. Go through your Inbox directly, replying to the e-mails or flagging them accordingly. It doesn’t have to be Inbox Zero or similar; the point is that you know about, and are on top of, each and every e-mail.
These are the flags/labels I use:
The rest of it is Throw away. No need to act, watch or file it? Get rid of it.
Getting e-mails from the same sender/on the same topic on a regular basis? Set up a filter. This way you can have the vast majority of e-mail already sorted for you, bypassing the Inbox directly.
Make them go into predefined folders (or Gmail labels) per sender/topic. That way you can see in the structure that you have unread e-mails from, say, LinkedIn, Mozilla, Netflix, Facebook, British Airways etc. Or e-mails from your manager or the CEO. Or e-mail sent to the team mailing list, company branch or all of the company. And then deal with it when you have the time.
Gmail also has this nice feature of choosing to only show labels in the left hand navigation if they have unread e-mails in them, making it even easier to see where you’ve got new e-mails.
Let me stress that this is immensely useful, drastically reducing which e-mails you need to manually filter and decide an action for.
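If your e-mail provider’s filter UI ever falls short, the same idea can also be scripted. Below is a minimal, illustrative sketch using Python’s imaplib that files messages from given senders into folders; the server address, credentials and folder names are placeholders for illustration, not a recommendation of any particular setup.

```python
import imaplib

# Placeholder connection details - adjust for your own account.
HOST = "imap.example.com"
USER = "me@example.com"
PASSWORD = "app-specific-password"

# Map senders to the folder their mail should be filed into.
RULES = {
    "noreply@linkedin.com": "Notifications/LinkedIn",
    "no-reply@netflix.com": "Notifications/Netflix",
}

def file_by_sender():
    imap = imaplib.IMAP4_SSL(HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    for sender, folder in RULES.items():
        # Find all Inbox messages from this sender.
        status, data = imap.search(None, '(FROM "%s")' % sender)
        if status != "OK" or not data[0]:
            continue
        msg_ids = data[0].decode().replace(" ", ",")
        # Copy them to the target folder, then flag the originals for deletion.
        imap.copy(msg_ids, folder)
        imap.store(msg_ids, "+FLAGS", "\\Deleted")
    imap.expunge()
    imap.logout()

if __name__ == "__main__":
    file_by_sender()
```

Run periodically, something like this keeps the Inbox reserved for mail that actually needs a decision from you.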
If you have a busy period when replying properly is hard, still make sure to take the time to acknowledge people. Reply, say that you’ve seen their e-mail and that you will get back to them as soon as you have a chance.
They took the time to write to you, and they deserve the common decency of a reply.
How many newsletters or information e-mails are you getting that you don’t really care about? Maybe on a monthly basis, so it’s annoying, but not annoying enough? Apply the filter suggestion above to them or, even better, start unsubscribing from the crap you don’t want.
Whether you use an e-mail client, Gmail or similar, make sure to learn its features. Keyboard shortcuts, filters and any way you can customize it to make you more efficient.
For instance, I’ve set up keyboard shortcuts for the above mentioned flags and for moving e-mails into pre-defined folders. Makes the manual part of dealing with e-mail really fast.
E-mail doesn’t have to be bad. On the contrary, it can be extremely powerful and efficient if you just make the effort to streamline the process and use it for you, not against you.
E-mails aren’t a problem, they’re an opportunity.
|
Kent James: Thunderbird Summit in Toronto to Plan a Viable Future |
On Wednesday, October 15 through Saturday, October 18, 2014, the Thunderbird core contributors (about 20 people in total) are gathering at the Mozilla offices in Toronto, Ontario for a key summit to plan a viable future for Thunderbird. The first two days are project work days, but on Friday, October 17 we will be meeting all day as a group to discuss how we can overcome various obstacles that threaten the continuing viability of Thunderbird as a project. This is an open Summit for all interested parties. Remote participation or viewing of Friday group sessions is possible, beginning at 9:30 AM EDT (6:30 AM Pacific Daylight Time) using the same channels as the regular weekly Thunderbird status meetings.
Video Instructions: See https://wiki.mozilla.org/Thunderbird/StatusMeetings for details.
Overall Summit Description and Agenda: See https://wiki.mozilla.org/Thunderbird:Summit_2014
Feel free to join in if you are interested in the future of Thunderbird.
|
J. Ryan Stinnett: DevTools for Firefox OS browser tabs |
We've had various tools for inspecting apps on remote devices for some time now, but for a long time we've not had the same support for remote browser tabs.
To remedy this, WebIDE now supports inspecting browser tabs running on Firefox OS devices.
A few weeks back, WebIDE gained support for inspecting tabs on the remote device, but many of the likely suspects to connect to weren't quite ready for various reasons.
We've just landed the necessary server-side bits for Firefox OS, so you should be able to try this out by updating your device to the next nightly build after 2014-10-14.
After connecting to your device in WebIDE, any open browser tabs will appear at the bottom of WebIDE's project list.
The toolbox should open automatically after choosing a tab. You can also toggle the toolbox via the "Pause" icon in the top toolbar.
We're planning to make this work for Firefox for Android as well. Much of that work is already done, so I am hopeful that it will be available soon.
If there are features you'd like to see added, file bugs or contact the team via various channels.
|
Andreas Gal: OpenH264 Now in Firefox |
The Web is an open ecosystem, generally free of proprietary control and technologies—except for video.
Today in collaboration with Cisco we are shipping support for H.264 in our WebRTC implementation. Mozilla has always been an advocate for an open Web without proprietary controls and technologies. Unfortunately, no royalty-free codec has managed to get enough adoption to become a serious competitor to H.264. Mozilla continues to support the VP8 video format, but we feel that VP8 has failed to gain sufficient adoption to replace H.264. Firefox users are best served if we offer a video codec in WebRTC that maximises interoperability, and since much existing telecommunication infrastructure uses H.264 we think this step makes sense.
The way we have structured support for H.264 with Cisco is quite interesting and noteworthy. Because H.264 implementations are subject to a royalty bearing patent license and Mozilla is an open source project, we are unable to ship H.264 in Firefox directly. We want anyone to be able to distribute Firefox without paying the MPEG LA.
Instead, Cisco has agreed to distribute OpenH264, a free H.264 codec plugin that Firefox downloads directly from Cisco. Cisco has published the source code of OpenH264 on Github and Mozilla and Cisco have established a process by which the binary is verified as having been built from the publicly available source, thereby enhancing the transparency and trustworthiness of the system.
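The actual verification process Mozilla and Cisco established is more involved than a single hash check, but the core idea can be illustrated simply: rebuild the plugin from the published source and confirm the result matches the binary being distributed. The sketch below is illustrative only; the file paths and the notion of a locally reproduced build are assumptions, not the real pipeline.

```python
import hashlib
import sys

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: verify.py <locally-built-plugin> <downloaded-plugin>
    local_build, downloaded = sys.argv[1], sys.argv[2]
    if sha256_of(local_build) == sha256_of(downloaded):
        print("OK: downloaded binary matches the locally reproduced build")
    else:
        print("MISMATCH: downloaded binary differs from the local build")
        sys.exit(1)
```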
OpenH264 is not limited to Firefox. Other Internet-connected applications can rely on it as well.
Here is how Jonathan Rosenberg, Cisco’s Chief Technology Officer for Collaboration, described today’s milestone: “Cisco is excited to see OpenH264 become available to Firefox users, who will then benefit from interoperability with the millions of video communications devices in production that support H.264”.
We will continue to work on fully open codecs and alternatives to H.264 (such as Daala), but for the time being we think that OpenH264 is a significant victory for the open Web because it allows any Internet-connected application to use the most popular video format. And while OpenH264 is not truly open, at least it is the most open widely used video codec.
Note: Firefox currently uses OpenH264 only for WebRTC and not for the HTML <video> element.
|
Doug Belshaw: Some interesting feedback from the Web Literacy Map 2.0 community survey |
Last week we at Mozilla launched a community survey containing five proposals for Web Literacy Map v2.0. I don’t want to share how positively or negatively the overall sentiment is for each proposal as the survey is still open. However, I do want to pull out some interesting comments we’ve seen so far.
There are really good points to be made for and against each of the proposals - as the following (anonymized) examples demonstrate. While I’d like to share the whole spreadsheet, there are people’s contact details on there, and I haven’t asked them if I can share their feedback with names attached.
What I’ve done here - and I guess you’ll have to trust me on this - is to try and give examples that show the range of feedback we’re getting.
The map can be about putting our manifesto and principles into practice. It’s a way to teach Mozilla’s beliefs.
I think this would put some people off using the map as an educational resource as they will think it has some political angle to it.
100% yes. The manifesto needs to be much more broadly spread - it is an inviting and inclusive document by nature and it is important that people engaging in our vision of web literacy understand the context from which we speak.
I often present the main ideas from the Manifesto when introducing the map and Webmaker. This aligns with my teaching practice.
While I like the manifesto, I don’t think the Web Literacy Map should be tied to it. I think that might decrease the likelihood of partners feeling ownership over it.
I think it is important for Mozilla to embrace its output – we shouldn’t shy away from taking credit for the things we work so hard to produce. But I do not believe Mozilla should try to achieve mission alignment as part of the literacy map: Literacy is a tool that helps people decide for themselves what to believe, and disagreement with Mozilla’s manifesto is a valid result of that.
Not sure if it needs to reference the manifesto, if the principles are followed when needed they would be implicit?
Definitely easier to understand off the bat.
No. Exploring, Building and Connecting are better descriptions.
Reading is not navigating. Writing is not building (necessarily). Communicating is more than participating.
Kinda torn on this. A lot of the time when literacy people from schools of education take over, they come up with weaker definitions of reading and writing, and I like the existing descriptions. But at the same time, R/W/P might make it more appealing for those folks, and trojan-horse them into using stronger standards.
Reading, writing, participating sounds more like school which is a turn off for many.
There’s a lot more than reading and writing going on.
I think reading and writing are too limited in their understood meanings and prefer the existing terms exploring and building. I prefer participating as a term over connecting.
Naw. As I said before, while it might help visualize the connections, it could quickly become a plate of spaghetti that’s not approachable or user friendly. But – there’s no reason there couldn’t be a map tool for exploring the things on the side.
I think it would seem more accessible as a map and people will stay connected/interested for longer.
There should be an easy to read way to see all of the map. It’s not so important what that looks like, although having some more map-like versions of it is interesting.
A list is not good enough and it’s necessary to show off the relation between the various skills and competencies. But a true interactive map is maybe a bit too much.
It should look like whatever it needs to look like for people to understand it. If “Map” is causing confusion, rename it rather than change the form to match the name.
I like this idea a lot. It could even have “routes” like the pathways tool.
But you should provide both options - we all interpret data differently, have preferred means of reading information, so leave the list style for those who think better that way, and the map for those who take a more graphic-based approach.
Even if they’re included solely as reference points or suggested teaching angles, having them in there strengthens the entire project.
I think adding cross-cutting themes (like the vertical themes at Mozfest) will be quite confusing for some people.
Yeah, I think that’s a good way to deal with those. They’re useful as themes, and this keeps them from overpowering the track structure in the way I was worried they would. Good work!
Well if you introduce the readers to many *new* terms (that may be new to them) you risk confusing them and they end up missing the content in the map.
An idea I discussed with Doug was concepts that could act as lenses through which to view the web literacy map (i.e., mobile or digital citizenship). I support the idea of demonstrating how to remix the map with cross-cutting ideas but I think the ideas should be provided as examples and users should also bring their own concepts forward.
Agreed. There are these larger themes or elements across the map. I’d be interested to see how these are represented. Perhaps this is the focus between cycles of the working group.
The problem here is that there are many other themes that could be added. Perhaps these are better emphasised in resources and activities at the point of learning rather than in the map itself?
I’d love to see a remix button, but one that integrated with GitHub to include proper historical attribution and version control. Simply spinning up multiple HTML pages without knowing who changed what when would cause more confusion I think.
Only a basic level of skill is needed to fork a repo on GitHub and edit text files directly on the website. We could provide guidelines on how to do that for people who want to contribute.
Definitely! Some of the things on the map now are strictly no-go in my context, I would love to have the ability to Remix to better match my needs.
In some contexts remixing it would be good to relate to situations better.
Perhaps if the name is required to be changed. But much better to get these people to help make the core map work for them.
Agree in principle but not many people will do this. I wouldn’t make it a high priority. Those who like to remix will always find a way.
Completely torn on this one. On the one hand it would embody the open principles on which both the map and Mozilla is built. It is also useful to be able to adapt tools for contexts. However, it could also potentially lead to mixed messages and dilution of the core ‘literacy’ principles that are in the map.
|
Gregory Szorc: Robustly Testing Version Control at Mozilla |
Version control services and interaction with them play an important role at any company. Despite version control being a critical part of your infrastructure, my experience from working at a few companies and talking with others is that version control often doesn't get the testing love that other services do. Hooks get written, spot-tested by the author, and deployed. Tools that interact with version control often rely on behavior that may or may not change over time, especially when the version of your version control software is upgraded.
We've seen this pattern at Mozilla. Mercurial hooks and extensions were written and deployed to the server without test coverage. As a result, things break when we try to upgrade the server. This happens a few times and you naturally develop an attitude of fear, uncertainty, and doubt around touching anything on the server (or the clients for that matter). "If it isn't broken, why fix it" prevails for months or years. Then one day an enthusiastic individual comes around wanting to deploy some hot new functionality. You tell them the path is arduous because the server is running antiquated versions of software and nothing is tested. The individual realizes the amazing change isn't worth the effort and justifiably throws up their hands and gives up. This is almost a textbook definition of how not having test coverage can result in technical debt. This is the position Mozilla is trying to recover from.
One of the biggest impacts I've had since joining the Developer Services Team at Mozilla a little over a month ago has been changing the story about how we test version control at Mozilla.
I'm proud to say that Mozilla now has a robust enough testing infrastructure in place around our Mercurial server that we're feeling pretty good about silencing the doubters when it comes to changing server behavior. Here's how we did it.
The genesis of this project was likely me getting involved with the hg-git and Mercurial projects. For hg-git, I learned a bit about Mercurial internals and how extensions work. When I looked at Mercurial extensions and hooks used by Mozilla, I started to realize what parts were good and what parts were bad. I realized what parts would likely break after upgrades. When I started contributing patches to Mercurial itself, I took notice of how Mercurial is tested. When I discovered its .t tests, I thought, wow, that's pretty cool: we should use them to test Mozilla's Mercurial customizations!
After some frustrations with Mercurial extensions breaking after Mercurial upgrades, I wanted to do something about it to prevent this from happening again. I'm a huge fan of unified repositories. So earlier this year, I reached out to the various parties who maintain all the different components and convinced nearly everyone that establishing a single repository for all the version control code was a good idea. The version-control-tools repository was born. Things were slow at first. It was initially pretty much my playground for hosting Mercurial extensions that I authored. Fast forward a few months, and the version-control-tools repository now contains full history imports of our Mercurial hooks that are deployed on hg.mozilla.org, the templates used to render HTML on hg.mozilla.org, and pretty much every Mercurial extension authored by Mozillians, including pushlog. Having all the code in one repository has been very useful. It has simplified server deployments: we now pull 1 repository instead of 3. If there is a dependency between different components, we can do the update atomically. These are all benefits of using a single repository instead of N>1.
While version-control-tools was still pretty much my personal playground, I introduced a short script for running tests. It was pretty basic: just find test files and invoke them with Mercurial's test harness. It served my needs pretty well. Over time, as more and more functionality was rolled into version-control-tools, we expanded the scope of the test harness.
We can now run Python unit tests (in addition to Mercurial .t tests). Test all of the things!
We set up continuous integration with Jenkins so tests run after check-in and alert us when things fail.
We added code coverage so we can see what is and isn't being tested. Using code coverage data, we've identified a server upgrade bug before it happened. We're also using the data to ensure that code is tested as thoroughly as it needs to be. The code coverage data has been invaluable for assessing the quality of our tests. I'm still shocked that Firefox developers tolerate not having JavaScript code coverage when developing Firefox features. (I'm not saying code coverage is perfect, merely that it is a valuable tool in your arsenal.)
We added support for running tests against multiple versions of Mercurial. We even test the bleeding edge of Mercurial so we know when an upstream Mercurial change breaks our code. So, no more surprises on Mercurial release day. I can tell you today that we have a handful of extensions that are broken in Mercurial 3.2, due for release around November 1. (Hopefully we'll fix them before release.)
We have Vagrant configurations so you can start a virtual machine that runs the tests the same way Jenkins does.
The latest addition to the test harness is the ability to spin up Docker containers as part of tests. Right now, this is limited to running Bugzilla during tests. But I imagine the scope will only increase over time.
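Conceptually, the harness is still what that first script was: find test files and hand each one to the appropriate runner. Here is a rough sketch of that dispatch logic; the directory layout and the path to Mercurial's run-tests.py are assumptions for illustration, not the actual version-control-tools code.

```python
import glob
import os
import subprocess
import sys

# Assumed locations; a real harness would derive these from its own layout.
TEST_DIR = "tests"
HG_TEST_RUNNER = os.path.expanduser("~/src/hg/tests/run-tests.py")

def run_all():
    failures = 0

    # Mercurial .t tests go through Mercurial's own test harness.
    t_tests = sorted(glob.glob(os.path.join(TEST_DIR, "*.t")))
    if t_tests:
        result = subprocess.call([sys.executable, HG_TEST_RUNNER] + t_tests)
        failures += result != 0

    # Plain Python unit tests are discovered and run with unittest.
    result = subprocess.call(
        [sys.executable, "-m", "unittest", "discover",
         "-s", TEST_DIR, "-p", "test_*.py"]
    )
    failures += result != 0

    return failures

if __name__ == "__main__":
    sys.exit(1 if run_all() else 0)
```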
Before I go on, I want to quickly explain how amazing Mercurial's .t tests are. These are a flavor of tests used by Mercurial and the dominant form of new tests added to the version-control-tools repository. These tests are glorified shell scripts annotated with expected command output and other metadata. It might be easier to explain by showing. Take bzpost's tests as an example. The bzpost extension automatically posts commit URLs to Bugzilla during push. Read more if you are interested.
What I like so much about .t tests is that they are actually testing the user experience. The test actually runs hg push and verifies the output is exactly what is intended. Furthermore, since we're running a Dockerized Bugzilla server during the test, we're able to verify that the bzpost extension actually resulted in Bugzilla comments being added to the appropriate bug(s). Contrast this with unit tests that only test a subset of functionality. Or, contrast with writing a lot of boilerplate and often hard-to-read code that invokes processes and uses regular expressions, etc. to compare output. I find .t tests more concise and they do a better job of testing user experience.
More than once I've written a .t test and thought this user experience doesn't feel right, I should change the behavior to be more user friendly. This happened because I was writing actual end-user commands as part of writing tests and seeing the exact output the user would see. It is much harder to attain this sense of understanding when writing unit tests. I can name a few projects with poor command line interfaces that could benefit from this approach... I'm not saying .t tests are perfect or that they should replace other testing methodologies such as unit tests. I just think they are very useful for accurately testing higher-level functionality and for assessing user experience. I really wish we had these tests for mach commands...
Anyway, with a proper testing harness in place for our version control code, we've been pretty good about ensuring new code is properly tested. When people submit new hooks or patches to existing hooks, we can push back and refuse to grant review unless tests are included. When someone requests a new deployment to the server, we can look at what changed, cross-reference to test coverage, and assess the riskiness of the deployment. We're getting to the point where we just trust our tests and server deployments are minor events. Concerns over accidental regressions due to server changes are waning. We can tell people "if you really care about this not breaking, you need a test" and "if you add a test, we'll support it for you." People are often more than happy to write tests for their own peace of mind, especially when that test's presence shifts maintenance responsibility away from them. We're happy because we don't have many surprises (and fire drills) at deployment time. It's a win-win!
So, what's next? Good question! We still have a number of large gaps in our test coverage. Our code to synchronize repositories from the master server to read-only slaves is likely the most critical omission. We also don't yet have a good way of reproducing our server environment. Ideally, we'd run the continuous integration in an environment that's very similar to production. Same package versions and everything. This would also allow us to simulate the actual hg.mozilla.org server topology during tests. Currently, our tests are more unit-style than integration-style. We rely on the consistent behavior of Mercurial and other tools as sufficient proxies for test accuracy and we back those up with running the tests on the staging server before production deployment. But these aren't a substitute for an accurate reproduction of the production servers, especially when it comes to things like the replication tests. We'll get there some day. I also have plans to improve Mercurial's test harness to better facilitate some of our advanced use cases. I would absolutely love to make Mercurial's .t test harness more consumable outside the context of Mercurial. (cram is one such attempt at this.) We also need to incorporate the Git server code into this repository. Currently, I'm pretty sure everything Git at Mozilla is untested. Challenge accepted!
In summary, our story for testing version control at Mozilla has gone from a cobbled together mess to something cohesive and comprehensive. This has given us confidence to move fast without breaking things. I think the weeks of people time invested into improving the state of testing was well spent and will pay enormous dividends going forward. Looking back, the mountain of technical debt now looks like a mole hill. I feel good knowing that I played a part in making this change.
http://gregoryszorc.com/blog/2014/10/14/robustly-testing-version-control-at-mozilla
|
Jordan Lund: This week in Releng - Oct 5th, 2014 |
Major highlights:
Completed work (resolution is 'FIXED'):
http://jordan-lund.ghost.io/this-week-in-releng-oct-5th-2014/
|
Allison Naaktgeboren: Applying Privacy Series: Introduction |
Introduction
In January, I laid out information in a presentation & blog post for a discussion about applying Mozilla’s privacy principles in practice to engineering. Several fellow engineers wanted to see it applied in a concrete example, complaining that the material presented was too abstract to be actionable. This is a fictional series of conversations around the concrete development of a fictional mobile app feature. Designing and building software is a process of evolving and refining ideas, and this example is designed to help engineers understand that actionable privacy and data safety concerns can and should be a part of the development process.
Disclaimer
The example is fictional. Any resemblance to any real or imagined feature, product, service, or person is purely accidental. Some technical statements have been invented to flesh out the fictional dialogues. They are assumed to only apply to this fictional feature of a fictional mobile application. The architecture might not be production-quality. Don’t get too hung up on it; it’s a fictional teaching example.
Thank You!
Before I begin, a big thank you to Stacy Martin, Alina Hua, Dietrich Ayala, Matt Brubeck, Mark Finkle, Joe Stevenson, and Sheeri Cabral for their input on this series of posts.
The Cast of Characters
so fictional they don’t even get real names
Fictional Problem Setup
Imagine that the EU provides a free service to all residents that will translate English text to one of the EU’s supported languages. The service requires the target language and the device ID. It is, however, rather slow.
For the purposes of this fictional example, the device ID is a hard-coded number on each computer, tablet, or phone. It is globally unique and unchangeable, and so highly identifying.
A mobile application team wants to use this service to offer in-page translation to EU residents using their mobile app. For non-English readers, the ability to read the app’s content in their own language is a highly desired feature.
After some prototyping & investigation, they determine that the very slow speed of the translation service adversely affects usability. They’d still like to use it, so they decide to evolve the feature. They’d also like to translate open content while the device is offline so the translated content comes up quicker when the user reopens the app.
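To make the privacy-relevant detail concrete, here is what a single call to such a service might look like. The endpoint, field names and response shape below are entirely made up for this fictional example; the point is simply that the globally unique device ID travels with every request.

```python
import requests  # third-party HTTP library

# Entirely fictional endpoint for the fictional EU translation service.
TRANSLATE_URL = "https://translate.example.eu/api/v1/translate"

def translate(text, target_language, device_id):
    """Send English text to the fictional EU service for translation.

    Privacy-relevant detail: the hard-coded, globally unique device ID is
    part of every request, so the service operator can link all translated
    content back to one physical device.
    """
    payload = {
        "text": text,
        "target_language": target_language,  # e.g. "de", "fr", "pl"
        "device_id": device_id,              # globally unique, unchangeable
    }
    response = requests.post(TRANSLATE_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["translated_text"]
```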
Every New Feature Starts Somewhere
Engineer sees an announcement in the tech press about the EU’s new service and its noble goal of overcoming language barriers on the web for its citizens. She sends an email to her team’s public mailing list: “wouldn’t it be cool to apply this to our content for users, instead of them having to copy/paste blocks of text into an edit box? We have access to those values on the phone already.”
Engineering Team, Engineering Manager & Product Manager on the thread are enthusiastic about the idea. Engineering Manager assigns Engineer to make it happen.
She schedules the initial meeting to figure out what the heck that actually means and nail down a specification.
http://www.allisonnaaktgeboren.com/applyingprivacyseriesintroduction/
|
Robert O'Callahan: Back In New Zealand |
I just finished a three-week stint in North America, mostly a family holiday but some work too. Some highlights:
Movie picoreviews:
Edge Of Tomorrow: Groundhog Day meets Starship Troopers. Not as good as Groundhog Day but pretty good.
X-Men: Days Of Future Past: OK.
Godzilla: OK if your expectations are set appropriately.
Dawn Of The Planet Of The Apes: watched without sound, which more or less worked. OK.
Amazing Spider-Man 2: Bad.
Se7en: Good.
http://robert.ocallahan.org/2014/10/back-in-new-zealand.html
|
Gregory Szorc: Deterministic and Minimal Docker Images |
Docker is a really nifty tool. It vastly lowers the barrier to distributing and executing applications. It forces people to think about building server side code as a collection of discrete applications and services. When it was released, I instantly realized its potential, including for uses it wasn't primarily intended for, such as applications in automated build and test environments.
Over the months, Docker's feature set has grown and many of its shortcomings have been addressed. It's more usable than ever. Most of my early complaints and concerns have been addressed or are actively being addressed.
But one supposedly solved part of Docker still bothers me: image creation.
One of the properties that gets people excited about Docker is the ability to ship execution environments around as data. Simply produce an image once, transfer it to a central server, pull it down from anywhere, and execute. That's pretty damn elegant. I dare say Docker has solved the image distribution problem. (Ignore for a minute that the implementation detail of how images map to filesystems still has a few quirks to work out. But they'll solve that.)
The ease with which Docker manages images is brilliant. I, like many, was overcome with joy and marvelled at how amazing it was. But as I started producing more and more images, my initial excitement turned to frustration.
The thing that bothers me most about images is that the de facto and recommended method for producing them is neither deterministic nor does it result in minimal images. I strongly believe that the current recommended and applied approach is far from optimal and has too many drawbacks. Let me explain.
If you look at the Dockerfiles from the official Docker library (examples: Node, MySQL), you notice something in common: they tend to use apt-get update as one of their first steps. For those not familiar with Apt, that command will synchronize the package repository indexes with a remote server. In other words, depending on when you run the command, different versions of packages will be pulled down and the result of image creation will differ. The same thing happens when you clone a Git repository. Depending on when you run the command - when you create the image - you may get different output. If you create an image from scratch today, it could have a different version of say Python than it did the day before. This can be a big deal, especially if you are trying to use Docker to accurately reproduce environments.
This non-determinism of building Docker images really bothers me. It seems to run counter to Docker's goal of facilitating reliable environments for running applications. Sure, one person can produce an image once, upload it to a Docker Registry server, and have others pull it. But there are applications where independent production of the same base image is important.
One area is the security arena. There are many people who are justifiably paranoid about running binaries produced by others, and pre-built Docker images set off all kinds of alarms. So, these people would rather build an image from source, from a Dockerfile, than pull binaries. Except then they build the image from a Dockerfile and the application doesn't run because of an incompatibility with a new version of some random package whose version wasn't pinned. Of course, you probably lost numerous hours tracking down this obscure reason. How frustrating! Determinism and verifiability as part of Docker image creation help solve this problem.
Deterministic image building is also important for disaster recovery. What happens if your Docker Registry and all hosts with copies of its images go down? If you go to build the images from scratch again, what guarantee do you have that things will behave the same? Without determinism, you are taking a risk that things will be different and your images won't work as intended. That's scary. (Yes, Docker is no different here from existing tools that attempt to solve this problem.)
What if your open source product relies on a proprietary component that can't be legally distributed? So much for Docker image distribution. The best you can do is provide a base image and instructions for completing the process. But if that doesn't work deterministically, your users now have varying Docker images, again undermining Docker's goal of increasing consistency.
My other main concern about Docker images is that they tend to be large, both in size and in scope. Many Docker images use a full Linux install as their base. A lot of people start with a base e.g. Ubuntu or Debian install, apt-get install the required packages, do some extra configuration, and call it a day. Simple and straightforward, yes. But this practice makes me more than a bit uneasy.
One of the themes surrounding Docker is minimalism. Containers are lighter than VMs; just ship your containers around; deploy dozens or hundreds of containers simultaneously; compose your applications of many, smaller containers instead of larger, monolithic ones. I get it and am totally on board. So why are Docker images built on top of the bloaty excess of a full operating system (modulo the kernel)? Do I really need a package manager in my Docker image? Do I need a compiler or header files so I can e.g. build binary Python extensions? No, I don't, thank you.
As a security-minded person, I want my Docker images to consist of only the files they need, especially binary files. By leaving out non-critical elements from your image and your run-time environment, you are reducing the surface area to attack. If your application doesn't need a shell, don't include a shell and don't leave yourself potentially vulnerable to shellshock. I want the attacker who inevitably breaks out of my application into the outer container to get nothing, not something that looks like an operating system and has access to tools like curl and wget that could potentially be used to craft a more advanced attack (which might even be able to exploit a kernel vulnerability to break out of the container). Of course, you can and should pursue additional security protections in addition to attack surface reduction to secure your execution environment. Defense in depth. But that doesn't give Docker images a free pass on being bloated.
Another reason I want smaller containers is... because they are smaller. People tend to have relatively slow upload bandwidth. Pushing Docker images that can be hundreds of megabytes clogs my tubes. However, I'll gladly push 10, 20, or even 50 megabytes of only the necessary data. When you factor in that Docker image creation isn't deterministic, you also realize that different people are producing different versions of images from the same Dockerfiles and that you have to spend extra bandwidth transferring the different versions around. This bites me all the time when I'm creating new images and am experimenting with the creation steps. I tend to bypass the fake caching mechanism (fake because the output isn't deterministic) and this really results in data explosion.
I understand why Docker images are neither deterministic nor minimal: making them so is a hard problem. I think Docker was right to prioritize solving distribution (it opens up many new possibilities). But I really wish some effort could be put into making images deterministic (and thus verifiable) and more minimal. I think it would make Docker an even more appealing platform, especially for the security conscious. (As an aside, I would absolutely love if we could ship a verifiable Firefox build, for example.)
These are hard problems. But they are solvable. Here's how I would do it.
First, let's tackle deterministic image creation. Despite computers and software being ideally deterministic, building software tends not to be, so deterministic image creation is a hard problem. Even tools like Puppet and Chef which claim to solve aspects of this problem don't do a very good job with determinism. Read my post on The Importance of Time on Machine Provisioning for more on the topic. But there are solutions. NixOS and the Nix package manager have the potential to be used as the basis of a deterministic image building platform. The high-level overview of Nix is that the inputs and contents of a package determine the package ID. If you know how Git or Mercurial get their commit SHA-1's, it's pretty much the same concept. In theory, two people on different machines start with the same environment and bootstrap the exact same packages, all from source. Gitian is a similar solution, although I prefer Nix's content-based approach and how it goes about managing packages and environments. Nix feels so right as a base for deterministically building software.
Anyway, yes, fully verifiable build environments are turtles all the way down (I recommend reading Tor's overview of the problem and their approach). However, Nix's approach addresses many of the turtles and silences most of the critics. I would absolutely love if more and more Docker images were the result of a deterministic build process like Nix. Perhaps you could define the full set of packages (with versions) that would be used. Let's call this the package manifest. You would then PGP sign and distribute your manifest. You could then have Nix step through all the dependencies, compiling everything from source. If PGP verification fails, compilation output changes, or extra files are needed, the build aborts or issues a warning. I have a feeling the security-minded community would go crazy over this. I know I would.
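One possible shape for that manifest step, sketched in Python: write out the pinned package set, then produce a detached GPG signature that consumers can verify before attempting to reproduce the build. The file format and the use of the gpg command line here are my assumptions, not something Nix or Docker prescribe.

```python
import json
import subprocess

# Example pinned package set; in practice this would be generated from the
# resolved dependency closure rather than written by hand.
PACKAGES = {
    "python": "2.7.8",
    "openssl": "1.0.1j",
    "nginx": "1.6.2",
}

def write_manifest(path="manifest.json"):
    # Sort keys so the manifest bytes are themselves deterministic.
    with open(path, "w") as f:
        json.dump(PACKAGES, f, indent=2, sort_keys=True)
    return path

def sign_manifest(path):
    # Produces manifest.json.asc, which consumers check with `gpg --verify`.
    subprocess.check_call(["gpg", "--armor", "--detach-sign", path])

if __name__ == "__main__":
    sign_manifest(write_manifest())
```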
OK, so now you can use Nix to produce packages (and thus images) (more) deterministically. How do you make them minimal? Well, instead of just packaging the entire environment, I'd employ tools like makejail. The purpose of makejail is to create minimal chroot jail environments. These are very similar to Docker/LXC containers. In fact, you can often take a tarball of a chroot directory tree and convert it into a Docker container! With makejail, you define a configuration file saying among other things what binaries to run inside the jail. makejail will trace file I/O of that binary and copy over accessed files. The result is an execution environment that (hopefully) contains only what you need. Then, create an archive of that environment and pipe it into docker build to create a minimal Docker image.
In summary, Nix provides you with a reliable and verifiable build environment. Tools like makejail pare down the produced packages into something minimal, which you then turn into your Docker image. Regular people can still pull binary images, but they are much smaller and more in tune with Docker's principles of minimalism. The paranoid among us can produce the same bits from source (after verifying the inputs look credible and waiting through a few hours of compiling). Or, perhaps the individual files in the image could be signed and thus verified via trust somehow? The company deploying Docker can have peace of mind that disaster scenarios resulting in Docker image loss should not result in total loss of the image (just rebuild it exactly as it was before).
You'll note that my proposed solution does not involve Dockerfiles as they exist today. I just don't think Dockerfile's design of stackable layers of commands is the right model, at least for people who care about determinism and minimalism. You really want a recipe that knows how to create a set of relevant files and some metadata like what ports to expose, what command to run on container start, etc and turn that into your Docker image. I suppose you could accomplish this all inside Dockerfiles. But that's a pretty radical departure from how Dockerfiles work today. I'm not sure the two solutions are compatible. Something to think about.
I'm pretty sure of what it would take to add deterministic and verifiable building of minimal and more secure Docker images. And, if someone solved this problem, it could be applicable outside of Docker (again, Docker images are essentially chroot environments plus metadata). As I was putting the finishing touches on this article, I discovered nix-docker. It looks very promising! I hope the Docker community latches on to these ideas and makes deterministic, verifiable, and minimal images the default, not the exception.
http://gregoryszorc.com/blog/2014/10/13/deterministic-and-minimal-docker-images
|
Mozilla Release Management Team: Firefox 33 rc1 to rc2 |
An important last-minute change forced us to generate a build 2 of Firefox 33. We took this opportunity to back out an OMTC-related regression and to include two startup fixes for Fennec.
Extension | Occurrences |
cpp | 10 |
java | 5 |
h | 3 |
build | 1 |
Module | Occurrences |
security | 12 |
mobile | 5 |
widget | 1 |
gfx | 1 |
List of changesets:
Ryan VanderMeulen | Backed out changeset 9bf2a5b5162d (Bug 1044975) - 1dd4fb21d976 |
Ryan VanderMeulen | Backed out changeset d89ec5b69c01 (Bug 1076825) - 1233c159ab6d |
Ryan VanderMeulen | Backed out changeset bbc35ec2c90e (Bug 1061214) - 6b3eed217425 |
Jon Coppeard | Bug 1061214. r=terrence, a=sledru - a485602f5cb1 |
Ryan VanderMeulen | Backed out changeset e8360a0c7d74 (Bug 1074378) - 7683a98b0400 |
Richard Newman | Bug 1077645 - Be paranoid when parsing external intent extras. r=snorp, a=sylvestre - 628f8f6c6f72 |
Richard Newman | Bug 1079876 - Handle unexpected exceptions when reading external extras. r=mfinkle, a=sylvestre - 96bcea5ee703 |
David Keeler | Bug 1058812 - mozilla::pkix: Add SignatureAlgorithm::unsupported_algorithm to better handle e.g. roots signed with RSA/MD5. r=briansmith, a=sledru - 4c62d5e8d5fc |
David Keeler | Bug 1058812 - mozilla::pkix: Test handling unsupported signature algorithms. r=briansmith, a=sledru - fe4f4c9342b1 |
http://release.mozilla.org/statistics/33/2014/10/13/fx-33-rc1-to-rc2.html
|
Daniel Glazman: Happy birthday Disruptive Innovations! |
Eleven years ago, I was driving as fast as I could to drop off two administrative files, one in Saint-Quentin en Yvelines, one in Versailles. At 2pm, the registration of Disruptive Innovations as an LLC was done and the company could start operating in the open. Eleven years ago, holy cow!
What a ride, so many fun years, so many clients and projects, so much code. Current state of mind? Still disruptive and still innovating.
Code and chutzpah!!!
|
Daniel Stenberg: What a removed search from Google looks like |
Back in the days when I participated in the starting of the Subversion project, I found the mailing list archive we had really dysfunctional and hard to use, so I set up a separate archive for the benefit of everyone who wanted an alternative way to find Subversion related posts.
This archive is still alive and it recently surpassed 370,000 archived emails, all related to Subversion, for seven different mailing lists.
Today I received a notice from Google (shown in its entirety below) that one of the mails received in 2009 is now apparently removed from a search using a name – if done within the European Union at least. It is hard to take this seriously when you look at the page in question, and as there aren’t very many names involved on that page, the possibilities for which name it is aren’t that many. As there are several different mail archives for Subversion mails I can only assume that the alternative search results have also been removed.
This is the first removal I’ve got for any of the sites and contents I host.
Hello,
Due to a request under data protection law in Europe, we are no longer able to show one or more pages from your site in our search results in response to some search queries for names or other personal identifiers. Only results on European versions of Google are affected. No action is required from you.
These pages have not been blocked entirely from our search results, and will continue to appear for queries other than those specified by individuals in the European data protection law requests we have honored. Unfortunately, due to individual privacy concerns, we are not able to disclose which queries have been affected.
Please note that in many cases, the affected queries do not relate to the name of any person mentioned prominently on the page. For example, in some cases, the name may appear only in a comment section.
If you believe Google should be aware of additional information regarding this content that might result in a reversal or other change to this removal action, you can use our form at https://www.google.com/webmasters/tools/eu-privacy-webmaster. Please note that we can’t guarantee responses to submissions to that form.
The following URLs have been affected by this action:
http://svn.haxx.se/users/archive-2009-08/0808.shtml
Regards,
The Google Team
http://daniel.haxx.se/blog/2014/10/12/how-a-removed-search-from-google-looks-like/
|
Christian Heilmann: Evangelism conundrum: Don’t mention the product |
Being a public figure for a company is tough. It is not only about what you do wrong or right – although this is a big part. It is also about fighting conditioning and bad experiences of the people you are trying to reach. Many a time you will be accused of doing something badly because of people’s preconceptions. Inside and outside the company.
One of these conditionings is the painful memory of the boring sales pitch we all had to endure sooner or later in our lives. We are at an event we went through a lot of hassle to get tickets for. And then we get a presenter on stage who is “excited” about a product. It is also obvious that he or she never used the product in earnest. Or it is a product that you could not care less about and yet here is an hour of it shoved in your face.
Many a time these are “paid for” speaking slots. Conferences offer companies a chance to go on stage in exchange for sponsorship. These don’t send their best speakers, but those who are most experienced in delivering “the cool sales pitch”. A product the marketing department worked on hard to not look like an obvious advertisement. In most cases these turn out worse than a – at least honest – straight up sales pitch would have.
I think my favourite nonsense moment is “the timelapse excitement”. That is when a presenter is “excited” about a new feature of a product and claims to have used it “for weeks now with all my friends” while the feature is not yet available. Sadly, it is often just too obvious that you are being fed a make-believe usefulness of the product.
This is why when you go on stage and you show a product people will almost immediately switch into “oh god, here comes the sale” mode. And they complain about this on Twitter as soon as you mention a product for the first time.
This is unfair to the presenter. Of course he or she would speak about the products they are most familiar with. It should be obvious when the person knows about it or just tries to sell it, but it is easier to be snarky instead of waiting for that.
From your company you get pressure to talk more about your products. You are also asked to show proof that what you did on stage made a difference and got people excited. Often this means showing the Twitter timeline during your talk, which is when a snarky comment can be disastrous.
Many people in the company will see evangelists as “sales people” and “show men”. Your job is to get people excited about the products they create. It is a job filled with fancy hotels, a great flight status and a general rockstar life. They either don’t understand what you do or they just don’t respect you as an equal. After all, you don’t spend a lot of time coding and working on the product. You only need to present the work of others. Simple, isn’t it? Our lives can look fancy to the outside and jealousy runs deep.
This can lead to a terrible gap. You end up as a promoter of a product and you lack the necessary knowledge that makes you confident enough to talk about it on stage. You’re seen as a sales guy by the audience and as a given by your peers. And it can be not at all your fault as your attempts to reach out to people in the company for information don’t yield any answers. Often it is fine to be “too busy” to tell you about a new feature and it should be up to you to find it as “the documentation is in the bug reports”.
Often your peers like to point out how great other companies are at presenting their products – and they do so whilst dismissing or not even looking at what you do. That’s because it is important for them to check what the competition does. It is less exciting to see how your own products “are being sold”.
Frustration is the worst thing you can experience as an evangelist.
Your job is to get people excited and talk to another. To get company information out to the world and get feedback from the outside world to your peers. This is a kind of translator role, but if you look deep inside and shine a hard light on it, you are also selling things.
Bruce Lawson covered that in his talk about how he presents. You are a sales person. What you do though is sell excitement and knowledge, not a packaged product. You bring the angle people did not expect. You bring the inside knowledge that the packaging of the product doesn’t talk about. You show the insider channels to get more information and talk to the people who work on the product. That can only work when these people are also open to this – when they understand that any delay in feedback is not only a disappointment for the person who asked the question; it also diminishes your trustworthiness and your reputation, and without those you are dead on stage.
In essence, do not mention the product without context. Don’t show the overview slides and the numbers the press and marketing team uses. Show how the product solves issues, show how the product fits into a workflow. Show your product in comparison with competitive products, praising the benefits of either.
And grow a thick skin. Our jobs are tiring, they are busy and it is damn hard to keep up a normal social life when you are on the road. Each sting from your peers hurts, each “oh crap, now the sales pitch starts” can frustrate you. You’re a person who hates sales pitches and tries very hard to be different. Being thrown in the same group feels terribly hurtful.
It is up to you whether you let that get you down. You could instead concentrate on the good: revel in the excitement you see in people’s faces when you show them a trick they didn’t know, and enjoy seeing people grow in their careers when they repeat what they learned from you to their bosses.
If you aren’t excited about the product, stop talking about it. Instead work with the product team to make it exciting first. Or move on. There are many great products out there.
http://christianheilmann.com/2014/10/12/evangelism-conundrum-dont-mention-the-product/
|
Rob Hawkes: Leaving Pusher to work on ViziCities full time |
On the 7th of November I'll be leaving my day job heading up developer relations at Pusher. Why? To devote all my time and effort toward ensuring ViziCities gets the chance it very much deserves. I'm looking to fund the next 6–12 months of development and, if the opportunity is right, to build out a team to accelerate the development of the wider vision for ViziCities (beyond 3D visualisation of cities).
I'm no startup guru (I often feel like I'm making this up as I go), all I know is that I have a vision for ViziCities and, as a result of a year talking with governments and organisations, I'm beyond confident that there's demand for what ViziCities offers.
Want to chat? Send me an email at rob@vizicities.com. I'd love to talk about potential options and business models, or simply to get advice. I'm not ruling anything out.
Probably. I certainly don't do things by halves and I definitely thrive under immense pressure with the distinct possibility of failure. I've learnt that life isn't fulfilling for me unless I'm taking a risk with something unknown. I'm obsessed with learning something new, whether in programming, business or something else entirely. The process of learning and experimentation is my lifeblood, the end result of that is a bonus.
I think quitting your day job without having the funding in place to secure the next 6 to 12 months counts as immense pressure, some may even call it stupid. To me it wasn't even a choice; I knew I had to work on ViziCities so my time at Pusher had to end, simple. I'm sure I'll work the rest out.
Let me be clear. I thoroughly enjoyed my time at Pusher, they are the nicest bunch of people and I'm going to miss them dearly. My favourite thing about working at Pusher was being around the team every single day. Their support and advice around my decision with ViziCities has really helped over the past few weeks. I wish them all the best for the future.
As for my future, I'm absolutely terrified about it. That's a good thing, it keeps me focused and sharp.
Over the past 18 months ViziCities has evolved from a disparate set of exciting experiments into a concise and deliberate offering that solves real problems for people. What has been learnt most over that time is that visualising cities in 3D isn't what makes ViziCities so special (though it's really pretty), rather it's the problems it can solve and the ways it can help governments, organisations and citizens. That's where ViziCities will make its mark.
After numerous discussions with government departments and large organisations worldwide it's clear that not only can ViziCities solve their problems, it's also financially viable as a long-term business. The beauty of what ViziCities offers is that people will always need tools to help turn geographic data into actionable results and insight. Nothing else provides this in the same way ViziCities can, both as a result of the approach but also as a result of the people working on it.
ViziCities now needs your help. I need your help. For this to happen it needs funding, and not necessarily that much to start with. There are multiple viable business models and avenues to explore, all of which are flexible and complementary, none of which compromise the open-source heart.
I'm looking to fund the next 6–12 months of development, and if the opportunity is right, to build out a team to accelerate the development of the wider vision for ViziCities (beyond 3D visualisation of cities).
I'll be writing about the quest for funding in much more detail.
This is the part where you can help. I can't magic funds out of nowhere, though I'm trying my best. I'd love to talk about potential options and business models, or simply to get advice. I'm not ruling anything out.
Want to chat? Send me an email at rob@vizicities.com.
http://feedproxy.google.com/~r/rawkes/~3/3pH2ckFj4EY/leaving-pusher-to-work-on-vizicities
|