Mitchell Baker: A Return to Founders as Mozilla Moves Forward |
I’m happy to welcome Brendan Eich to a new role at Mozilla, that of our CEO. I also want to thank Jay Sullivan for his dedication to Mozilla over the years and in particular as our acting CEO this last year.
Brendan has been an absolutely foundational element of Mozilla and our success for the past 15 years. The parallels between Mozilla’s history and our future are very strong, and I am very happy with this combination of continuity and change to help us continue to fulfill our mission, as Mozilla has big ambitions: providing a rich, exciting online experience that puts people at the center of digital life.
We exceeded our wildest dreams with Firefox when we first released it 10 years ago. We moved the desktop and browsing environments to a much more open place, with far more options and control available to individuals.
When I look back at the early days that led to Firefox, I think mostly of the personalities that achieved this great success. Mozilla was a small band of people, mostly volunteers and a few employees, bound together by a shared mission and led by Brendan and me as co-founders of the Mozilla project. We were an unusual group to have such huge ambitions. Looking at us, most people thought we would never succeed. But we did. We succeeded because like-minded people joined with us to make Mozilla so much stronger, and to create amazing products that embody our values — products that people love to use.
Today we live in a different online era. This era combines desktop, mobile devices, cloud services, big data and a social layer. It is feature-rich, highly centralized, and focused on a few giant organizations that exert control over almost all aspects of the experience. Today’s computing environment is deeply in need of an open, exciting alternative that shows what the Open Web brings to this setting — something built on parts including Firefox OS, WebGL, asm.js, and the many other innovations being developed at Mozilla. It is comparable to the desktop computing environment we set out to revolutionize when we started Mozilla.
Mozilla needs to bring a similar scope of change to the new computing era. Once again, Mozilla needs to break down the walled gardens of online life and bring openness and opportunity to all. Once again, we have the chance to build products and communities in a way that no one else will. Once again, we will build something new and exciting.
Over the years I’ve worked with Brendan, we’ve each had a variety of roles, and we have always had a great partnership. I look forward to working closely together in this phase of Mozilla’s development. I hope you’ll join us.
mitchell
https://blog.lizardwrangler.com/2014/03/24/a-return-to-founders-as-mozilla-moves-forward/
|
Rob Campbell: GDC 2014 Devtools Demo Video |
Showcasing the Console + Inspector, preview of the Canvas Debugger, JS Debugger and Shader Editor.
|
Seif Lotfy: Sc(r)um |
I am entitled to my opinion, and in my honest opinion: Scrum sucks big time.
Here is why (again, my opinion).
It does not empower developers to be creative. It is a good way to show that the developers are doing something (so that failure is not their fault) even if they are running around in circles ==> plausible deniability.
Highly motivated developers are kept from jumping out of the “plan” to hack on other project-related stuff (even doing so in free time is regarded as wrong, since you are not sticking to the plan). This limits the personal success story and makes motivation depend on team achievements, so being a weak link in a team just makes matters worse.
The “Daily Stand-up” routine is not productive in its defined structure.
Stories over quality: Developers are keen to finish as many stories as possible, which can take a toll on quality.
It is not applicable to a distributed community of volunteers, or when dependencies on an open source community exist.
Time-boxed problem solving does not guarantee the best possible outcome.
The commitment: One of the reasons Scrum fails is its approach to commitment: “deliver the maximum value of stories in a specific time frame”. As soon as the team cannot agree on committing to a story, all other stories with lower priority are ignored. And once done with those stories, you pick the one out of the backlog that you couldn’t agree on (since it has the highest priority) and waste time on it. You can’t skip this story, since it has the highest priority. To me this is like sitting in an exam and being stuck on a problem. Instead of moving on to the next problem in the hope of gathering as many points as possible, one decides to be stubborn and keep fucking with that problem. That is just stupid. Now assume I solved the problem. Scrum forces me to pick something from the backlog. And not just anything, but the one with the highest priority, and again waste time being stuck on that until the time ends. If I had adopted this style of tackling problems in school, I would never have passed.
Don't get me wrong: Scrum can work. I personally don't know how, but I do believe it can work with the right team and people. However, what are the chances of it succeeding if two or more of the following apply:
You don't have full control of the stack (dependencies on third parties).
Your developers are not all on the same level of expertise and knowledge.
It is shoehorned into an ongoing process and code base.
The team's stories are not focused on a specific topic but on the general delivery of the whole project.
I look at GNOME, Mozilla and the Kernel. They don’t follow this methodology, and stuff happens and gets delivered. Big companies can learn so much from FLOSS.
|
Rob Campbell: GDC 2014 |
I was fortunate to be able to attend the Game Developers Conference in San Francisco this year. Thanks to our organizers and IT staff for all the hard work they put into making everything run so smoothly.
This was the first year Mozilla had an actual booth on the show floor, and we put it to good use demoing our Developer Tools alongside some fun games. We showed off our new Canvas Debugger (it should be landing next week!), the Shader Editor, as well as our Debugger and our other inspection tools. People were really receptive to the Canvas tool, and the Shader Editor got a fair number of positive comments as well. I was also able to show off our Network panel as a temporary solution for inspecting game assets like textures.
Another well-received demo was a setup where I paused my desktop JS Debugger on receiving a device orientation event on my phone. I loaded the three.js DeviceOrientation demo in my phone’s browser (Firefox for Android), connected the phone via USB to my laptop, and launched our remote tools via the “connect” option. Opening the Events panel, I was able to pick “deviceorientation” as a category, and selecting it caused execution on the phone to pause immediately, with my desktop debugger showing the exact location.
Debugging device events this way is easy to do on a mobile device. I was also able to demo our Shader Editor running on mobile, which was pretty cool. Editing shaders in real time on a remote device is some real science-fiction-level stuff.
Having the kind of immediate feedback for WebGL (and soon WebAudio) that our tools provide is kind of a big deal for people who aren’t used to living in a dynamic environment like a web browser. There is lots of opportunity in this space to make tools for game developers that are fun to use and interactive. You can literally program your game while playing it.
This feels like a tipping point for games on the web. There are now multiple engine developers offering the Web as a bona fide deployment target. Just this week, three big engines reduced their pricing to the point of being effectively free for most developers. This is a big deal, and I think we’re going to start seeing a lot of game publishers shipping games to the web very soon.
We also weren’t the only booth showing off HTML5-related game technology. Nintendo is shipping a “Web Framework” built around a bundled WebKit shell for deployment on the Wii U, and had a pretty sizeable installation to show it off. Unity is also making it a deployment target. Various other booths were demoing HTML5 games and tech.
In the emerging technology department, head-mounted displays were in full evidence. Sony had just announced a new piece of headgear for the PS4, and there were some other vendors kicking around similar technologies. At this point, it seems obvious that head-mounted displays are going to be very real, very soon. The lines of people at Oculus’ displays were a constant stream of humanity.
gg;hf.
|
Christian Heilmann: Edgeconf 3 – just be there next time, trust me |
I just got back from Edgeconf 3 in London, England, and I am blown away by how good the event was. If you are looking for a web conference that is incredible value for money, look no further.
The main difference with Edgeconf is its format. Whilst there is a stellar line-up of experts, the conference is not a series of talks, or even several tracks in parallel. Instead, it is a series of panels with curated Q&A in the style of the BBC’s Question Time. Questions are submitted by the audience before the conference using Google Moderator, and the expert moderators triage and collate them. Members of the audience read out the questions to the panel, and the moderator then picks experts to answer them. Audience members can also signal their intent to ask a question or offer extra information.
In essence: the whole conference is about getting questions answered, not about presenting. This means that a massive amount of information becomes available in a very short amount of time, and there is no chance to grandstand or advocate solutions without proof.
The main workload of the conference is carried by the moderators. It is up to them not only to triage the questions but also to keep the discussion lively and entertaining.
All the moderators met the day before the event and spent half a day going through all the submitted questions, whittling them down to seven per panel. Each person answering a question has 30 seconds to a minute to answer, and there is strict time-keeping.
The whole event was streamed live on YouTube, and the recordings are available on YouTube/Google+.
During the panels, the audience can interact live using the Onslyde system. You can agree or disagree with a topic, and request to speak or ask a question. All this information is logged and can be played back in sync with the video recording later on. Onslyde also creates analytics reports showing sentiment analysis and more. Other conferences like HTML5DevConf, Velocity and OsCon have also started using this system.
Another big thing about Edgeconf is that any extra income from tickets and sponsorship (in this case around £10,000) gets donated to a good cause. At the end of the conference the organisers showed a full disclosure of expenditure. The cause this time was Codeclub, a charity teaching kids coding.
I am very proud to have been one of the moderators this time and to have run the accessibility panel (a detailed post on this will come later).
I have to thank the organisers and everyone involved for a great event. I learned a lot during the day and I am happy to be involved again in September.
http://christianheilmann.com/2014/03/22/edgeconf-3-just-be-there-next-time-trust-me/
|
David Burns: Treeclosure stats |
As the manager of the sheriffs, I am always interested in how often the sheriffs, and anyone else, close the tree. For those who don't know who the Mozilla Sheriffs are, they are the team that manages code landing in a number of Mozilla trees. If a bad patch lands, they are the people who typically back it out. There have been some recent changes in the way the infrastructure does things, which have led to a few extra closures. Not having the data for this, I went and got it (you can see the last year's worth of data for Mozilla-Inbound below).
2013-03  infra: 14:59:38; no reason: 5 days, 12:13:31; total: 6 days, 3:13:09
2013-04  infra: 22:21:18; no reason: 3 days, 19:30:21; total: 4 days, 17:51:39
2013-05  infra: 1 day, 2:03:08; no reason: 4 days, 11:30:41; total: 5 days, 13:33:49
2013-06  checkin-compilation: 10:04:17; checkin-test: 1 day, 5:48:15; infra: 18:44:06; no reason: 5:05:59; total: 2 days, 15:42:37
2013-07  backlog: 22:38:39; checkin-compilation: 1 day, 13:05:52; checkin-test: 2 days, 16:43:53; infra: 1 day, 2:16:02; no reason: 0:30:13; other: 1:32:23; planned: 4:59:09; total: 6 days, 13:46:11
2013-08  backlog: 4:13:49; checkin-compilation: 1 day, 23:49:34; checkin-test: 1 day, 12:32:35; infra: 13:06:19; total: 4 days, 5:42:17
2013-09  backlog: 0:21:39; checkin-compilation: 1 day, 8:27:27; checkin-test: 2 days, 15:17:50; infra: 15:34:16; other: 2:02:07; planned: 3:16:22; total: 4 days, 20:59:41
2013-10  checkin-compilation: 15:29:45; checkin-test: 3 days, 10:41:33; infra: 16:31:41; no reason: 0:00:05; other: 0:09:01; planned: 2:30:35; total: 4 days, 21:22:40
2013-11  checkin-compilation: 1 day, 9:40:25; checkin-test: 4 days, 18:41:35; infra: 1 day, 19:11:36; no reason: 0:05:54; other: 3:28:40; planned: 1:50:20; total: 8 days, 4:58:30
2013-12  backlog: 5:07:06; checkin-compilation: 18:49:29; checkin-test: 1 day, 16:29:16; infra: 6:30:03; total: 2 days, 22:55:54
2014-01  backlog: 1:54:43; checkin-compilation: 20:52:34; checkin-test: 1 day, 12:22:01; infra: 1 day, 5:37:14; no reason: 1:20:46; other: 4:53:42; planned: 3:48:16; total: 4 days, 2:49:16
2014-02  backlog: 3:08:18; checkin-compilation: 1 day, 12:26:35; checkin-test: 15:30:42; infra: 19:40:38; no reason: 0:00:16; other: 0:47:38; total: 3 days, 3:34:07
2014-03  backlog: 8:52:34; checkin-compilation: 19:27:21; checkin-test: 1 day, 0:37:55; infra: 19:47:13; other: 2:53:21; total: 3 days, 3:38:24
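For the curious, the aggregation behind numbers like these is straightforward to reproduce. Below is a minimal sketch in Python; the input shape and the sample records are assumptions for illustration, not the real treestatus data. It sums closure durations per month and reason:

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Hypothetical input: one record per closure window.
    closures = [
        {"start": datetime(2014, 3, 2, 9, 15),
         "end": datetime(2014, 3, 2, 11, 40), "reason": "infra"},
        {"start": datetime(2014, 3, 5, 14, 0),
         "end": datetime(2014, 3, 5, 18, 30), "reason": "checkin-test"},
    ]

    totals = defaultdict(timedelta)
    for c in closures:
        # A closure spanning a month boundary is credited to its starting month.
        month = c["start"].strftime("%Y-%m")
        totals[(month, c["reason"])] += c["end"] - c["start"]

    for (month, reason), duration in sorted(totals.items()):
        print("%s %s: %s" % (month, reason, duration))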
I created a graph of the data showing Mozilla-Inbound from when we started using it in August 2012 until now.
For the first part of the graph there wasn't any data on specific closure reasons, but the sheriffs changed that in the middle of last year. I am hoping that we can get information like this, and other interesting back-out info, into a "Tree Health Report" in Treeherder (the TBPL replacement that the Automation and Tools team is developing).
http://www.theautomatedtester.co.uk/blog/2014/treeclosure-stats.html
|
Planet Mozilla Blog: New Peer: Mike Hoye |
Some management changes in the planet module. Mike Hoye has generously offered to step up and join us as a peer.
https://blog.mozilla.org/planet/2014/03/21/new-peer-mike-hoye/
|
Ben Hearsum: This week in Mozilla RelEng – March 21st, 2014 |
Major Highlights:
Completed work (resolution is ‘FIXED’):
In progress work (unresolved and not assigned to nobody):
http://hearsum.ca/blog/this-week-in-mozilla-releng-march-21st-2014/
|
David Boswell: Sharks, Parachutes and Hard Hats |
Mozilla’s goals for 2014 were shared out recently and included information about how we’ll increase the number of active contributors in the community this year by 10x.
Since it is sometimes easier to digest information in images than in words, I wanted to follow up with some pictures that show what our approach is to achieve this goal (thanks to Pierros for making these).
Today many teams are working on a project and would love to get some help. There are many people who want to help, but there are often obstacles that stand in the way of connecting (thankfully those obstacles aren’t usually sharks).
The Engagement, People and Foundation teams are here to help. Engagement will increase the number of people who are excited about contributing (that’s the parachutes). The community builders on the People team (those are the hard hats) will help build pathways that let people cross the chasm and connect with projects.
Once those pathways are built and many more people are joining the project as active contributors, the People team will support projects as they evolve and adapt to working with a larger group of Mozillians, offering systems, education and more.
If you’re looking for help increasing the number of active contributors for your project, we’re happy to support you. Get in touch by joining and posting to the community building mailing list or joining the regular Grow Mozilla discussions.
http://davidwboswell.wordpress.com/2014/03/21/sharks-parachutes-and-hard-hats/
|
Yunier José Sosa Vázquez: Try Firefox's new, fast and easy-to-configure interface |
Yesterday, March 20, Mozilla updated Firefox's Beta channel, and with it come new features: an improved Firefox Sync built on the new Firefox Accounts, and the new user interface that has been in the works for several years.
The new Firefox Sync can be tried on Windows, Linux, Mac and Android. By creating a Firefox Account you get a secure and easy way to take your Firefox with you anywhere. Firefox Sync is now easier to set up and lets you add multiple devices, while we keep delivering the same browser-based encryption.
Firefox has been redesigned with a more modern look to get you to Web content quickly. The structure and features of Firefox have been changed to put the focus on the Web content on display. Tabs are smoother and a bit bigger to improve the experience on touch screens, and tabs not in use are visually faded out. The bookmarks manager has been moved to the toolbar.
The new customization mode, accessible from the new Firefox menu, makes it possible to drag and drop add-ons, controls and your favorite features anywhere in the browser. To improve your browsing experience, you can now find all of the aforementioned elements in one place: the new panel-style menu.
Also in this version: HTML5's <input type="number"> is now enabled, and the -moz prefix is no longer necessary when using the CSS3 box-sizing property.
You can get this version from the Beta section of our Downloads area, for Linux and Windows in Spanish.
http://firefoxmania.uci.cu/prueba-la-nueva-interfaz-en-firefox-beta/
|
Armen Zambrano Gasparnian: Running B2G reftests on AWS on mozilla-central and trunk based trees |
http://armenzg.blogspot.com/2014/03/running-b2g-reftests-on-aws-on-mozilla.html
|
Benjamin Smedberg: Using Software Copyright To Benefit the Public |
Imagine a world where copyright on a piece of software benefits the world even after it expires. A world where eventually all software becomes Free Software.
The purpose of copyright is “To promote the Progress of Science and useful Arts”. The law gives a person the right to profit from their creation for a while, after which everyone gets to profit from it freely. In general, this works for books, music, and other creative works. The current term of copyright is far too long, but at least once the term is up, the whole world gets to read and love Shakespeare or Walter de la Mare equally.
The same is not true of software. In order to be useful, software has to run. Imagine the great commercial software of past decades: Excel, Photoshop, PageMaker. Even after copyright expires on Microsoft Excel 95 (in 2090!), nobody will be able to run it! Hardware that can run Windows 95 will not be available, and our only hope of running the software is to emulate the machines and operating systems of a century ago. There will be no opportunity to fix or improve the software.
What should we reasonably require from commercial software producers in exchange for giving them copyright protection?
The code.
In order to get any copyright protection at all, publishers should be required to make the source code available. This can either happen immediately at release, or by putting the code into escrow until copyright expires. This needs to include everything required to build the program and make it run, but since the same copyright rules would apply to operating systems and compilers, it ought to all just work.
The copyright term for software also needs to be rethought. The goal when setting a copyright term should be to balance the competing interests of giving a software author time to make money by selling their software against the natural rights of people to share ideas and to use and modify their own tools.
With a term of 14 years, the following software would be leaving copyright protection around now:
A short copyright term is an incentive for software developers to constantly improve their software, making new versions more valuable than the older versions entering the public domain. It also opens the possibility for other companies to support old software even after the original author has decided that it isn't worthwhile.
The European Union is currently holding a public consultation to review their copyright laws, and I’ve encouraged Mozilla to propose source availability and a shorter copyright term for software in our official contribution/proposal to that process. Maybe eventually the U.S. Congress could be persuaded to make such significant changes to copyright law, although recent history and powerful money and lobbyists make that difficult to imagine.
Commercial copyrighted software has done great things, and there will continue to be an important place in the world for it. Instead of treating the four freedoms as ethical absolutes and treating non-Free software as a “social problem”, let’s use copyright law to, after a period of time, make all software Free Software.
http://benjamin.smedbergs.us/blog/2014-03-21/using-software-copyright-to-benefit-the-public/
|
Mark Coggins: Rormix, a great new Firefox OS app to discover emerging music... |
Rormix, a great new Firefox OS app to discover emerging music videos, brought to the Firefox Marketplace as part of the Phones for Cordova/PhoneGap ports program.
|
Henrik Skupin: Join our first Automation Training days on March 24/26 |
Building software is fun: spending countless hours or even days on something to finally get it working, or helping someone who uses your software to speed up their daily workflow. All of that is fantastic, and every developer loves it. But the dark side shows up when customers start pointing you to a lot of software defects. Are you still proud, and will you continue working the way you did before?
Most likely not. Or, let's say, at least not when quality is what you want to ship. So you will start to think about how to test your application. You can do a lot of manual testing based on test plans, or just do exploratory testing. That works as long as your application is simple enough that testing can be done in a couple of minutes. But once you have to repeat the same steps over and over again for each release, you will find it boring and lose interest or concentration. Mistakes will happen, issues will slip through your testing, and bugs will become part of the new release.
That's something you will eventually want to eliminate. But how? There is an easy answer to this question: use test automation! Create tests for each feature you implement, or for each regression you get fixed. Over time the growing suite of tests will make you happy, given that you have to spend nearly no time on manual tests and get results in a fraction of the time needed before. New versions of the application can be released much faster.
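To make that concrete, here is a minimal sketch of an automated regression test using Python's built-in unittest module; the add() function and the bug it guards against are made up for illustration:

    import unittest

    def add(a, b):
        return a + b

    class TestAdd(unittest.TestCase):
        def test_add_regression(self):
            # Guards against a (hypothetical) old bug where add(2, 2) returned 5.
            self.assertEqual(add(2, 2), 4)

        def test_add_negative(self):
            self.assertEqual(add(-1, 1), 0)

    if __name__ == "__main__":
        unittest.main()

Once a test like this is part of the suite, that regression can never sneak back in unnoticed.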
At some point, when your application is large enough, you may no longer be working alone on the product. There will be other developers, or even software testers whose job is to plan and execute the testing strategy. While in the past there was not such a high demand for automation knowledge among testers, job requirements have changed in recent months. Fewer companies will hire quality assurance engineers who do not have a coding background. This is hard for those testers, given that it can take ages to find a new position. Something has to change for them.
We, the Firefox Automation team at Mozilla, want to help out here. Given our knowledge of automation for various Mozilla-related projects, our goal is to support interested people in building their knowledge of software development, and especially test automation. Therefore we are planning to hold automation trainings on a regular basis, all based on our own projects, so you will have the chance to practice the new things you have learned. All of that, of course, depends on the interest in this offer and the number of participants.
The first two training days will happen on March 24th and 26th, and will mainly take place in our #automation channel on IRC. Given that we have no idea how many of you will join us, or what your level of knowledge is, we will start with the basics. That means we will guide you through courses on JavaScript, Python, HTML, or CSS. We will collect your feedback and extend the etherpad for automation trainings so that we end up with a wonderful list of getting-started tutorials.
For those of you who already have more experience, we will offer tasks to work on depending on your skills and interests. Please see the aforementioned etherpad for areas of work and appropriate mentors. We guarantee that it will be fun!
We would love to see you next week, and if you have questions, don't hesitate to ask here or on the automation mailing list.
http://www.hskupin.info/2014/03/21/join-our-first-automation-training-days-on-march-2426/
|
Andrew Halberstadt: Part 1: Sharing code is not always a good thing |
As programmers, we are taught early on that code duplication is bad and should be avoided at all costs. It makes code less maintainable, reusable and readable. The DRY principle is very basic and fundamental to how most of us approach software design. If you aren't familiar with the DRY principle, please take a minute to read the Wikipedia page on it. The counterpart of DRY is WET (write everything twice). In general, I agree that DRY is good and WET is bad. But I think there is a class of problems where the DRY approach can actually be harmful. For these types of problems, I will make the claim that a WET approach can actually work better.
So what are these problems? They are problems that have continuously evolving unpredictable requirements. Continuously evolving means that the project will continue to receive additional requirements indefinitely. Unpredictable means that you won't know when the requirements will change, how often they'll change, or what they might be.
Hold on a second, you might be thinking. If the requirements are so unpredictable, then shouldn't we be creating a new project to address them instead of trying to morph an old one to meet them? Yes! But there's a catch (hey, and isn't starting a new project just a form of code duplication?). The catch is that the requirements are continuously evolving. They change a little bit at a time over long periods (years). Usually at the beginning of a project it is not possible to tell whether the requirements will be unpredictable, or even whether they will be continuously evolving. It isn't until the project has matured and the vines of feature creep are firmly entrenched that these facts become apparent, and by this time it is often too late. Because "continuously evolving unpredictable requirements" is a mouthful, I've invented an acronym for them. From here on out I will refer to them as IFFY (in flux for years) requirements.
This probably sounds very hand-wavy at the moment, so let me give an example of a problem that has IFFY requirements. This example is what I primarily work on day to day, and it is the motivation behind this post: test harnesses. A test harness is responsible for testing some other piece of software. As that other piece of software evolves, so too must the test harness. If the system under test adds support for a wizzlebang, then the harness must also add support for testing a wizzlebang. Or, closer to home, if the system under test suddenly becomes multiprocess, then the harness needs to support running tests in both parent and child processes. Usually the developer working on the test harness does not know when or how the system under test will evolve in a way that requires changes to the harness. The requirements are in flux for as long as the system under test continues to evolve. The requirements are IFFY.
Hopefully you now have some idea about what types of problems might benefit from a WET approach. But so far I haven't talked about why WET might be helpful and why DRY might be harmful. To do this, I'd like to present two case studies. The first is an example of where sticking to the DRY principle went horribly wrong. The second is an example of where duplicating code turned out to be a huge success.
Most of our test harnesses have life cycles that follow a common pattern. Originally they were pretty simple, consisting of a single file that unsurprisingly ran the test suite in Firefox. But as more harnesses were created, we realized that they all needed to do some common things. For example they all needed to launch Firefox, most of them needed to modify the profile in some way, etc. So we factored out the code that would be useful across test harnesses into a file called automation.py. As Firefox became more complicated, we needed to add more setup steps to the test harnesses. Automation.py became a dumping ground for anything that needed to be shared across harnesses (or even stuff that wasn't shared across harnesses, but in theory might need to be in the future). So far, there wasn't really a huge problem. We were using inheritance to share code, and sure maybe it could have been organized better, but this was more the fault of rushed developers than anything inherently wrong with the design model.
Then Mozilla announced they would be building an Android app. We scrambled to figure out how we could get our test suites running on a mobile device at production scale. We wrote a new Android specific class which inherited from the main Firefox one. This worked alright, but there was a lot of shoe-horning and finagling to get it working properly. At the end, we were sharing a fair amount of code with the desktop version, but we were also overriding and ignoring a lot of code from the desktop version. A year or so later, Mozilla announced that it would be working on B2G, an entire operating system! We went through the same process, again creating a new subclass and trying our darndest to not duplicate any code. The end result was a monstrosity of overrides, subtle changing of state, no separation of concerns, different command line options meaning different things on different platforms, the list goes on. Want to add something to the desktop Firefox version of the test harness? Good luck, chances are you'd break both Fennec and B2G in the process. Want to try and follow the execution path to understand how the harness works? Ha!
At this point you are probably thinking that this isn't the fault of the DRY principle. This is simply a case of not architecting it properly. And I completely agree! But this brings me to a point I'd like to make. Projects that have IFFY requirements are *insanely* difficult to implement properly in a way that adheres to DRY. Fennec and B2G were both massive and unpredictable requirement changes that came out of nowhere. In both cases, we were extremely behind on getting tests running in continuous integration. We had developers and product managers yelling at us to get *something* running as quickly as possible. The longer we took, the bigger the backlog of even newer requirement changes became. We didn't have time to sit down and think about the future, to implement everything perfectly. It was a mad dash to the finish line. The problem is exacerbated when you have many people all working on the same set of issues. Now you've thrown people problems into the mix and it's all but impossible to design anything coherent.
Had we simply started anew for both Fennec and B2G instead of trying to share code with the desktop version of the harness, we would have been much better off.
To re-iterate my opening paragraph, I'm not arguing that DRY is bad, or that code duplication is good. At this point I simply hope to have convinced you that there exist scenarios where striving for DRY can lead you into trouble. Next, I'll try to convince you that there exist scenarios where a WET approach can be beneficial.
Ask anyone on releng what their most important design considerations are when they approach a problem. My guess is that somewhere on that list you'll see something about "configurability" or "being explicit". This basically means that it needs to be as easy as possible to adapt to a changing requirement. Adaptability is a key skill for a release engineer; they've been dealing with changing requirements since the dawn of computer science. The reality is that most release engineers have already learned the lesson I'm just starting to understand now (a lesson I am only beginning to see because I happen to work pretty closely with a large number of really awesome release engineers).
Hopefully if I'm wrong about anything in this next part, someone from releng will correct me. Releng was in a similar situation as our team, except instead of test harnesses, it was buildbotcustom. Buildbotcustom was where most of the Mozilla-specific buildbot code lived. That is, it was the code responsible for preparing a slave with all of the build systems, harnesses, tests, environment and libraries needed to execute a test or build job. Similar to our test harnesses, changes in requirements quickly made buildbotcustom very difficult to update or maintain (Note: I don't know whether it was DRY or WET, but that's not really important for this case study).
To solve the problem, Aki created a tool called mozharness. At the end of the day, mozharness is just a glorified execution context for running a Python script. You pass in some configuration and run a script that uses said configuration. In addition to that, mozharness itself provides lots of "libraries" (yes, shared code) for the scripts to use. But mozharness is genius for a few reasons. First, logging is built into its core. Second, it is insanely configurable. But the third is the concept of actions. An action is just a function; a script is just a series of actions. Actions are meant to be as atomic and independent as possible. Actions can live as a library in mozharness core, or be defined by the script author themselves.
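To illustrate the idea, here is a minimal sketch of that shape. This is not mozharness's real API, just the concept: a script is an ordered list of small functions driven by a configuration dict, and all the action names and config keys below are made up.

    class Script(object):
        def __init__(self, config, actions):
            self.config = config    # all platform differences live in here
            self.actions = actions  # ordered list of action callables

        def run(self):
            for action in self.actions:
                # Logging every step is built into the runner itself.
                print("##### Running action: %s" % action.__name__)
                action(self.config)

    # Hypothetical actions; real ones would do actual work.
    def clobber(config):
        print("removing old work dir %s" % config["work_dir"])

    def download_build(config):
        print("fetching build from %s" % config["build_url"])

    def run_tests(config):
        print("running tests with extra args: %s" % config.get("extra_args", []))

    # A hypothetical desktop config; a B2G emulator script would pass its own dict.
    desktop_config = {
        "work_dir": "build/",
        "build_url": "https://example.com/firefox-latest.tar.bz2",
        "extra_args": [],
    }

    Script(desktop_config, [clobber, download_build, run_tests]).run()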
What is so special about actions? They allowed us to quickly create a large number of scripts that all did similar, but not quite identical, things. Instead of worrying about whether the scripts share the same code or not, we just banged out a new one in ten minutes. Instead of having one complicated script trying to abstract the shared code into one place, we have one script per platform. As you may imagine, many of these scripts look very similar; there is quite a bit of code duplication going on. At first we had intended to remove the duplicated code, since we assumed it would be a pain to maintain. Instead it turned out to be a blessing in disguise.
Like buildbotcustom and the test harnesses themselves, mozharness scripts also suffer from IFFY requirements. The difference is that now, when someone says something like "We need to pass in --whizzlebang to B2G emulator reftests and only B2G emulator reftests", it's easy to make that change. Before, we'd need some kind of complicated special-casing that doesn't scale, to make sure none of the other platforms were affected (not to mention B2G desktop in this case). Now? Just change a configuration variable that gets passed into the B2G emulator reftest script. Or, worst case scenario, make a quick change to the script itself. It is guaranteed that our change won't affect any of the other platforms or test harnesses, because the code we need to modify is not shared.
We are now able to respond to IFFY requirements really quickly. Instead of setting us back days or weeks, a new requirement might only set us back a few hours or even minutes. With all this extra time, we can focus on improving our infrastructure rather than always playing catch-up. It's true that once in a while we'll need to change duplicated code in more than one location, but in my experience the number of times this happens (in this particular instance at least) is exceedingly rare.
Oh, by the way, remember how I said this was a lesson releng had already learned? Take a look at these files. You'll notice that the same configuration is repeated over and over again, not only for different platforms and test harnesses, but also for different slave types! This may seem like a horrible idea, until you realize that all this duplication allows releng to be extremely flexible about which jobs get run on which branches and which types of slaves. It's a lot less work to maintain some duplication than it is to figure out a way to share the configuration while maintaining the same level of speed and flexibility when dealing with IFFY requirements.
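As a toy illustration of that trade-off (the keys and values here are made up, not releng's actual configs), two nearly identical configs kept separate mean a change for one platform cannot possibly break another:

    # Deliberate duplication: each platform owns its entire config.
    b2g_emulator_reftest_config = {
        "suite": "reftest",
        "total_chunks": 20,
        "extra_args": ["--whizzlebang"],  # the hypothetical flag from earlier
    }

    desktop_reftest_config = {
        "suite": "reftest",
        "total_chunks": 4,
        "extra_args": [],  # untouched by the emulator-only change
    }

Sharing these two dicts through a common base with overrides would save a few lines, at the cost of re-introducing exactly the coupling the scripts were split apart to avoid.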
Hopefully by now I've convinced you that code duplication is not necessarily a bad thing, and that in some cases it isn't wise to blindly follow the DRY principle. If there's one takeaway from this post, it's to not take design principles for granted. Yes, code duplication is almost always bad, but that's not the same thing as always bad. Just be aware of the distinction, and use the case studies to try to avoid making the same mistakes.
So my project has IFFY requirements, should I just duplicate code whenever possible?
No.
Okay.. How do I know if I should use a DRY or WET approach then?
I don't think there is a surefire way, but I have a sort of litmus test. Any time you are wondering whether to consolidate some code, ask the question: "If I duplicate this code in multiple places and the requirements change, will I need to update the code in every location, or just one?" If you answered the former, then DRY is what you want. If you answered the latter, then a WET approach just might be better. This is a difficult question to answer, and the answer is not black and white; usually it falls somewhere in between. But at least the question gets you thinking about the answer in the first place, which is already a big step forward. Another thing to take into consideration is how much time you have to architect your solution properly.
But if you could somehow have a solution that is both DRY and flexible to incoming requirement changes, wouldn't that be better?
Yes! Inheritance is one of the most obvious and common ways to share code, so I was a bit surprised at how horribly it failed us. It turns out that DRY is just one principle. And while we succeeded at not duplicating code, we failed at several other principles (like the Open/closed principle or the Single responsibility principle). I plan on doing a part 2 blog post that explores these other principles and possible implementations further. Stay tuned!
http://ahal.ca/blog/2014/part-1-sharing-code-not-always-good-thing/
|
Sylvestre Ledru: MozillaReleases account on Twitter |
Lately, we on the release team at Mozilla have started using the MozillaReleases account on Twitter again.
We are publishing news about the releases, new features, interesting bugs, etc.
If you have suggestions (more technical, more bug reports, etc), don't hesitate to share them in the comments.
http://sylvestre.ledru.info/blog/2014/03/21/mozillareleases-account-on-twitter
|
Dave Huseby: Python: More Precise Exception Messages |
http://blog.linuxprogrammer.org/Python%3A%20More%20Precise%20Exception%20Messages.html
|
Robert Nyman: Geek Meet April 2014 with Sam Dutton |
All seats have been taken. Please write a comment to be put on a waiting list, there are always a number of cancellations, so there’s still a chance.
With people working in developer relations leading busy lives, it can sometimes be hard to find a good time for a meetup, to sync schedules and more. That's why I'm glad we finally managed to find a date for the speaker of this upcoming Geek Meet!
I’m really happy to present Sam Dutton (Twitter, Google+)! He is a Developer Advocate at Google, an expert on WebRTC, an Australian expat living in London and an all-around great guy. He has also written a number of articles for HTML5 Rocks.
Sam will give two presentations during the evening:
This Geek Meet will be sponsored by Valtech, and will take place Monday April 7th at 18:00 in their office at Kungliga Myntet, Hantverkargatan 5 in Stockholm. They will also provide beer/wine and pizza to every attendee, all free of charge.
Please sign up with a comment below, and only sign up if you know you can attend. There are 150 seats available, and you can only sign up yourself. Please use a valid name and e-mail address, since these will be used to identify you at the event when you arrive.
Geek Meet is available in a number of channels:
The hash tag used is #geekmeetsthlm.
|
Seif Lotfy: Moving on... |
The last year has been a rough one. I started working for Deutsche Telekom (with a bunch of other FLOSS developers) on OpenStack. What started off as developing new features upstream ended up being more or less devops work (mainly Puppet and orchestration), which is not really my style.
During that time I was also working on GNOME Music. I enjoyed developing it a lot, and the students and the team are just amazing, but I was growing unhappy with the external micromanagement I experienced, which stressed me personally and made me feel unappreciated for what I think I can bring to the table. What started off as a hobby became a burden, so at GUADEC I decided to spend time away from GNOME. Strangely enough, I started feeling better. I took things a step further and switched to a Mac, and noticed what a long road we still have ahead of us for the Linux desktop. I became more convinced that the focus on delivering an experience for the desktop or mobile market is misplaced, and that we missed both trains. The only way for GNOME to continue being relevant is either to start focusing on the wearable experience, or to market the technology first and the experience second: get GNOME tech onto other platforms and wearables.
Anyhow, being unhappy can take a toll on one's mental and physical health. I became a bit of a depressed and negative person, and my health was declining. I went through this a couple of years ago when my father passed away, and found an escape in contributing to GNOME and FLOSS. So some decisions had to be made.
Somehow I can't see myself actively contributing to GNOME anymore, and maybe it's for the best. I'll be around and will help out when I can if asked directly, but I won't be running GNOME actively.
I am also taking time off from contributing to Mozilla. However, I do see myself coming back soon, since I miss it already. I really enjoy the compatibility team (where we use some GNOME technologies, including GI) and the community building team.
I will continue maintaining Zeitgeist, but there will be no active feature development. After almost six years we have all moved on, and Zeitgeist is feature complete.
I quit my job. I am on a nice vacation for a bit and will be starting my new job sooner than expected. With the help of friends and my girlfriend I am getting my self-esteem back, and all is better again.
Feeling better already :D (No depression)
|
Florian Quèze: Summer of Code Student application deadline approaching |
This is just a reminder to students interested in applying for Google Summer of Code 2014: the application deadline is "21 March 19:00 UTC" and no late applications will be accepted.
If you intend to apply to Google Summer of Code this year and haven't submitted your application yet, don't wait: apply now!
http://blog.queze.net/post/2014/03/20/Summer-of-Code-Student-application-deadline-approaching
|