George Roter: Why Mozilla (for me)? |
It’s official. I’m here at Mozilla for the indefinite future with a title of Head of Core Contributors, Participation. Basically, I’m responsible for enabling a team of volunteers and staff to grow the size and impact of our community of most-committed volunteer Mozillians.
As I considered this role, I asked myself: Why Mozilla? Of all of the places in the world that I can apply my energy and talents, why here? I wanted to share my answer (as of today):
The past 150 years have brought the greatest advances in freedom and opportunity in human history.
It has also brought (a) existential, complex global and local challenges, and (b) a centralizing of power. Centralized power cannot solve, and is often the cause of, these existential challenges.
The web is the single greatest (and maybe only) chance humanity has to address these challenges, because it can decentralize power and unleash the human ingenuity of millions of people.
But the web itself is being centralized and made less open. From locked-down content, to ring-fenced platforms, to the advertising/economics of the web, to technology stacks. The largest and most powerful organizations and governments in the world are eroding the openness of the web.
Mozilla is probably the world’s best chance to reverse this trend. We are the only organization in the world that is championing a vision of openness on the web, has the scale to achieve it, and as a mission-driven, not-for-profit doesn’t have its purpose corrupted by shareholders and profit motives.
At the same time, this is such a wildly ambitious organizational vision that only a movement of talented people working together — volunteer Mozillians and our allies — has a chance to see this vision become a reality.
What’s truly energizing about my role is that the Mozilla brand, user-base, financial resources and mythology are a platform to build a participation function that can scale to directly enabling millions to take actions aligned with their own passions and beliefs. This can be at the leading edge of what anyone has done before in organizing people globally and locally. And when we are successful, the web will be the platform we need to address humanity’s most pressing challenges.
Finally, to quote a great Canadian, Marshall McLuhan: “the medium is the message”. The pattern of working that Mozilla is pioneering is transformative (or will be, with the organizational changes that have been articulated in the vision of radical participation) — open, self-organizing and adaptive, creativity from the edges, distributed leadership and voice, each and every Mozillian accountable to each other and for the whole.
At a meta level, these are key to the broader global social justice changes I believe in. This pattern, and its impact on the millions of deep relationships we can build through participation, may be another of Mozilla’s enduring legacies.
|
Air Mozilla: Web QA Weekly Meeting |
This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
https://globau.wordpress.com/2015/09/03/happy-bmo-push-day-159/
|
Emma Irwin: Mozilla Thimble – Test Drive of Happiness |
In the last few years I’ve learned a ton about what helps people learn, where they get stuck and how to customize learning for various ages, interests and attention-spans. When ‘teaching the web’ to kids as young as eight, all the way up to university students, there’s always some level of trouble-shooting and tinkering to do with the tools I’ve tried so far (both on and offline). Mozilla’s Thimble had been one of those tools, but usually for the very early steps in learning. For more advanced lessons, I’ve tried a number of different solutions, all with some level of challenge. For example, I turned more to codepen.io to show the separation of CSS/HTML & JS, which was fun but only for super-short snippet-type learning. I also ventured offline with simple editors like Notepad++, only to run into knowledge-blockers with students – around file-systems or computer permissions for installing new software.
And so, I was super-excited to see the latest version of Thimble released this week – especially after I did some testing. Here’s why I’m going back to teaching with the (new) Thimble:
The new Thimble allows you to expand your code view, or your preview, as you need. Seems small, but it’s a huge change from the previous version. With this, the brightness toggle, and text-size customization, people will be able to settle in to what works best for their learning. And thank you, thank you – the preview screen stays at the exact scroll position across refreshes.
Many kids ask to make ‘apps’ in my classes, when often what they really mean is “something I can make and share on my phone”. So while the mobile view is obviously great from the perspective of learning to design for mobile, it also helps students understand the web as a platform for their app ideas. I imagine there’s opportunity to extend that idea well beyond this too, perhaps with FFOS app preview.
A billion times better. Students can now upload the files they need vs. ‘all code in one page’, or link to external files, which with previous versions often resulted in mixed content errors. Yes, so much awesome, including the ability to rename files AND upload entire directories, which makes it easy (I think) for people to fork and upload projects. I managed to exceed the maximum file size for upload, but at 5MB it seems pretty reasonable. Having file lists also ‘bakes in’ an opportunity to teach file systems, best practices, naming conventions etc. – which in the past was offline-only. The only thing I couldn’t figure out was how to download my project. Also next wish: version control integration.
You can also take and upload ‘selfie’ images from your computer, which will be super-popular, especially for ‘photo booth’ type projects. Youth will love it, providing it passes the privacy agreements of students and schools – but then there’s a lesson to be made with this as well.
One of the biggest challenges and frustrations of getting things working – especially with younger kids – is spelling mistakes in file names and attribute names, open tags, poorly nested tags… And so I’m thrilled to see suggestions & auto-complete as part of the new Thimble. Also, showing which line has errors (without overwhelming popups) will be a huge help. I think there is also a way to use a color-wheel to add in hex colors (also helpful for younger learners), but I didn’t have a chance to test that.
Adding a tutorial.html file adds a ‘Tutorial’ view pane. I usually write my lessons in Google Docs, print them and then give them to students who are still learning to type, and so spend a lot of time looking from one to the other. Huge win that instructors can write tutorials as part of the lesson, and that students can keep their eyes on the screen instead of bothering with a second set of instructions. The only improvement I could ask for would be the ability to assign specific tutorials to files, to create true lesson plans vs. one long file (also more value for sharing).
I’m sure there are a bunch of things I missed, but these are the wins for my classes.
Congratulations, and thank you to the Webmaker team, this is going to make things so so so much easier, and more rewarding for students and teachers.
On a separate note – I can’t help but think this would also help some of the curriculum development I’m working on – asking teams to develop content in Markdown. I see there is a Markdown extension for Brackets, and wonder if Thimble can take on a new file type ‘markdown’ to help educators submit curriculum without coding knowledge. Perhaps that’s the potential of the tutorial file (and collaboration between educators and technologists).
|
Mozilla WebDev Community: Node.js static file build steps in Python Heroku apps |
I write a lot of webapps. I like to use Python for the backend, but most
frontend tools are written in Node.js. LESS gives me nicer style sheets, Babel
lets me write next-generation JavaScript, and NPM helps manage dependencies
nicely. As a result, most of my projects are polyglots that can be difficult to
deploy.
Modern workflows have already figured this out: Run all the tools. Most
READMEs I’ve written lately tend to look like this:
$ git clone https://github.example.com/foo/bar.git
$ cd bar
$ pip install -r requirements.txt
$ npm install
$ gulp static-assets
$ python ./manage.py runserver
I like to deploy my projects using Heroku. They take care of the messy details
about deployment, but they don’t seem to support multi-language projects easily.
There are Python and Node buildpacks, but no clear way of combining the two.
GitHub is littered with attempts to fix this by building new buildpacks.
The problem is they invariably fall out of compatibility with Heroku. I could probably fix them, but then I’d have to maintain them. I use Heroku to avoid maintaining infrastructure; custom buildpacks are one step forward, but two steps back.
Enter Multi Buildpack, which runs multiple buildpacks at once.
It is simple enough that it is unlikely to fall out of compatibility. Heroku has a
fork of the project on their GitHub account, which implies that it will be
maintained in the future.
To configure the buildpack, first tell Heroku you want to use it:
$ heroku buildpacks:set https://github.com/heroku/heroku-buildpack-multi.git
Next, add a .buildpacks file to your project that lists the buildpacks to run:
https://github.com/heroku/heroku-buildpack-nodejs.git
https://github.com/heroku/heroku-buildpack-python.git
Buildpacks are executed in the order they’re listed in, allowing later
buildpacks to use the tools and scripts installed by earlier buildpacks.
There’s one problem: The Python buildpack moves files around, which makes it
incompatible with the way the Node buildpack installs commands. This means that
any asset compilation or minification done as a step of the Python buildpack
that depends on Node will fail.
The Python buildpack automatically detects a Django project and runs ./manage.py collectstatic. But the Node environment isn’t available, so this fails. No static files get built.
There is a solution: bin/post_compile! If present in your repository, this script will be run at the end of the build process. Because it runs outside of the Python buildpack, commands installed by the Node buildpack are available and will work correctly.
This trick works with any Python webapp, but let’s use a Django project as an example. I often use Django Pipeline for static asset compilation. Assets are compiled using the command ./manage.py collectstatic, which, when properly configured, will call all the Node commands.
#!/bin/bash
# Put the binaries installed by the Node buildpack on the PATH.
export PATH=/app/.heroku/node/bin:$PATH
# collectstatic can now find the Node tools it needs.
./manage.py collectstatic --noinput
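For reference, the Django settings that wire Node tools into collectstatic might look something like this (a sketch only, not from the original post; the setting names follow django-pipeline’s documentation of that era, and the lessc path assumes the Heroku Node install location used above):
# settings.py (sketch)
STATICFILES_STORAGE = 'pipeline.storage.PipelineStorage'
# Compile LESS files with the lessc binary installed by the Node buildpack.
PIPELINE_COMPILERS = ('pipeline.compilers.less.LessCompiler',)
PIPELINE_LESS_BINARY = '/app/.heroku/node/bin/lessc'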
Alternatively, you could call Node tools like Gulp or Webpack directly.
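For instance, a bin/post_compile that bypasses Django and runs Gulp directly could be as small as this (a sketch, assuming the static-assets task from the README above):
#!/bin/bash
# Make the Node buildpack's binaries (node, npm, gulp) available.
export PATH=/app/.heroku/node/bin:$PATH
# Build the static assets directly with Gulp.
gulp static-assets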
In the case of Django Pipeline, it is also useful to stop the Python buildpack from running collectstatic itself, since it will fail anyway. This is done using an environment variable:
$ heroku config:set DISABLE_COLLECTSTATIC=1
Okay, so there is a little hack here. We still had to append the Node binary folder to PATH. Pretend you didn’t see that! Or don’t, because you’ll need to do it in your script too.
To recap: this approach sticks to the standard, maintained buildpacks, so there is no custom buildpack to keep in sync with Heroku, and the bin/post_compile hook makes the Node-installed tools available to the Python app’s static file build step.
Woot!
https://blog.mozilla.org/webdev/2015/09/02/node-js-static-file-build-steps-in-python-heroku-apps/
|
Mozilla Reps Community: ReMoCamp and the Leadership Summit |
As part of the plan for Global Gatherings, the Participation team announced a new event called the “Leadership Summit”. We’re excited to have ReMoCamp integrated into this event and to bring together a bigger set of leaders. Our initial plan was to have this event happen this year, just as we have done in the past with ReMoCamp, but we decided to move it to January to have more time to prepare a fantastic event.
The summit doesn’t stand alone; it is one part of a broader initiative with MozFest and All Hands, with each gathering playing a different role in empowering communities. Learning more about your goals at Mozilla can help you understand which of these events is the best one for you.
The summit will be an opportunity for local and regional communities to work alongside Reps, and for all of us to define what leadership means in the context of Mozilla. We are fully aware that the words “leadership” and “leader” spark some discussion in the context of Mozilla, and we are actively working toward a framework for investing in leadership within our communities. We need all of our current leaders to help us define and shape a leadership culture that is true to our values and will unleash the potential of our communities.
Historically, Reps has been a platform for Mozillians to be empowered, and in many cases Reps have empowered their communities. It has evolved into a broader leadership platform for communities, and one of our more organized ones at Mozilla. We see every day how the magic of great community leadership ignites people all around the world to join our cause. We are more committed than ever to supporting and elevating our leaders, and Reps is an important program for doing so.
Because Mozilla values leadership that can come from all the edges, and because we believe that empowering individuals and communities is our secret weapon, we want to make sure that local communities and Reps are connected, so that anyone, on any edge, can be empowered. Our goal is that the leadership summit will catalyze the energy from Reps and other leaders and ensure we are all working as a powerful team.
Apart from bringing a different (and bigger!) group of leaders together, we also believe that thinking about leadership and impact in a bigger picture that includes local communities will ultimately help the Reps program evolve to serve our communities much better. We will get “outside of the Reps box” and think holistically about how to organize ourselves for more impact. And of course we need Reps mentors, but also new Reps and local leaders, to be part of these conversations. And as you know, we’ll be doing this in Singapore, a location close to where our most thriving communities are right now.
We hope to learn a lot from this first event so we can bring that experience and learning to many, if not all, volunteer leaders. And as with all we do: the leadership summit is an initiative that is open and that you can help shape. If you feel strongly about leadership, about regional or local communities, about empowering others, or if you have ideas on what we need, just drop me or the participation team a line. We are eager to hear your thoughts and understand how to make this leadership summit the catalyst we all want to see.
https://blog.mozilla.org/mozillareps/2015/09/02/remocamp-and-the-leadership-summit/
|
John O'Duinn: The USENIX Release Engineering Summit, Nov 2015 |
The USENIX Release Engineering Summit 2015 (“URES15”) is quickly approaching – this time it will be held in November in Washington DC along with LISA2015. To register to attend URES15, click here or on the logo.
Given how great the two (!) URES conferences were last year, I expect URES15 to be fun, informative and very down-to-earth practical all at the same time. One of the great things about these RelEng conferences is that you get to hear what did, and did not, work for others working the same front lines – all in a factual, constructive way. Sharing important ideas helps raise the bar for everyone, so every one of these feels priceless. It is also a great way to meet many other seasoned people in this niche part of the computer business.
If you are a Release Engineer or Site Reliability Engineer, work in Production Operations, deal with DevOps culture, or are someone who keeps wanting to find a better way to reliably ship better quality code, you should attend! You’ll be glad you did!
Note: If you have a project that you worked on which others would find informative, you can submit a proposal here. The deadline for proposals is Friday 04sep2015.
See you there!
http://oduinn.com/blog/2015/09/02/the-usenix-release-engineering-summit-nov-2015/
|
Mike Ratcliffe: Installing VPN from the Linux command line |
Whether you want to stop people snooping on what you are doing on the internet, or a restrictive government prevents you from accessing popular websites, you can use a VPN to protect your privacy and access the internet without restrictions.
Most VPN providers give simple step-by-step instructions for setting up a VPN on most operating systems, but many fail to explain how to autostart a VPN in a non-graphical environment.
The following will work with the files provided by most VPN providers with a couple of small changes:
Of course, the URL that your provider uses may be different:
$ cd /tmp
$ wget https://www.privateinternetaccess.com/openvpn/openvpn.zip
$ unzip openvpn.zip
$ cp ca.crt /etc/openvpn/
$ cp crl.pem /etc/openvpn/
$ vi /etc/openvpn/login.conf
The file should contain the following:
your-privateinternetaccess-username
your-privateinternetaccess-password
$ chmod 400 /etc/openvpn/login.conf
Your provider's .ovpn files may not be named by country, so substitute file names as appropriate.
$ vi Country.ovpn
Prefix the pem and crt paths with /etc/openvpn/
Change the auth-user-pass line to read auth-user-pass /etc/openvpn/login.conf
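For example, after those two edits the relevant lines of Country.ovpn would look something like this (a sketch; directive names can vary slightly by provider, but ca, crl-verify and auth-user-pass are typical):
ca /etc/openvpn/ca.crt
crl-verify /etc/openvpn/crl.pem
auth-user-pass /etc/openvpn/login.conf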
$ cp Country.ovpn /etc/openvpn/Country.conf
"Country" is the name of your .conf file without the extension, so in my case the file is Country.conf (see the previous steps).
$ vi /etc/default/openvpn
Add AUTOSTART="Country"
$ reboot
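Instead of rebooting, you could probably also just start the service directly (assuming a Debian-style init script that reads AUTOSTART from /etc/default/openvpn):
$ service openvpn start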
To check that the VPN is running, open lynx and go to google.com. It should take you to Google's site for the specific country.
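If lynx is not available, you can instead compare your apparent IP address before and after starting the VPN (ifconfig.me is just one of several public IP-echo services):
$ curl ifconfig.me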
That is it, you are done!
http://flailingmonkey.com/installing-vpn-from-the-linux-command-line/
|
Mozilla Addons Blog: Add-ons Update – Week of 2015/09/02 |
I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.
The unlisted queues aren’t mentioned here, but they are empty for the most part (there are actually a couple hundred add-ons awaiting review there but they are awaiting a bulk-review tool that is being worked on, since they belong to a couple of large sets of almost identical add-ons). We’re in the process of getting more help to reduce queue length and waiting times for all queues.
If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.
The compatibility blog post has been up for a while. The compatibility bump should be run soon.
Expect the blog post to come up sometime next week.
As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.
As we announced before, there’s a new add-ons community forum for all topics related to AMO or add-ons in general. The old forum is now gone, and just redirects to the new one.
The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The new add-on installation UI and signature warnings are now enabled in release versions of Firefox.
Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will now run in multiple processes, with content code running in a different process than browser code. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support this.
If you read Kev’s post on the future of add-on development, you should know there are big changes coming. We’re investing heavily in the new WebExtensions API, so we strongly recommend that you start looking into it for your add-ons. If you have requests for new APIs, please suggest them in the uservoice forum.
https://blog.mozilla.org/addons/2015/09/02/add-ons-update-70/
|
Air Mozilla: Quality Team (QA) Public Meeting |
This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...
https://air.mozilla.org/quality-team-qa-public-meeting-20150902/
|
Air Mozilla: Product Coordination Meeting |
Duration: 10 minutes. This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...
https://air.mozilla.org/product-coordination-meeting-20150902/
|
Smokey Ardisson: Escaping a “stuck” iOS Numbers iCloud sync |
Periodically[1] Numbers will get “stuck” trying to sync changes I’ve just made on my iPhone to iCloud. Numbers will claim that the spreadsheet is updating, or needs to be updated, but no changes will ever reach iCloud and the “needs update” UI will never go away. This used to be a frequent issue,[2] and over time I tried all sorts of things to resolve the problem: quitting Numbers, making another local change, making a change on iCloud, waiting a while, toggling Airplane mode on and off to “kill” networking, turning iCloud off in the Numbers settings, deleting the spreadsheet (both on the phone and also on iCloud), and other things I can no longer remember. Some combination of tinkering always seemed to fix things, but it was never anything easy—always a tangled combination of things—and The Internet™ didn’t seem to have any relevant information when I searched.
Eventually I discovered a reliable, repeatable, simple solution: turn the iPhone off and back on again,[3] and the next time I open Numbers, it will happily finish syncing my changes from the iPhone to iCloud.
Hopefully the All-Seeing Internet Search Engines™ will index this post and offer it up to others who might also be searching for a solution.
Has anyone out there already filed a Radar on the “stuck” iCloud sync process? I’ll eventually get around to filing it myself.
[1] It’s difficult to tell what might be triggering the “stuck” sync, but my best guess, based on when and where it happens to me, is a marginal network connection or network transition.
http://www.ardisson.org/afkar/2015/09/02/escaping-a-stuck-ios-numbers-icloud-sync/
|
Karl Dubost: WebCompat.com Just got Modern Cathode Ray Tube Screenshots |
When reporting bugs for Web Compatibility, it is useful to see what the person is reporting. Often an image helps right away to identify what is wrong in the page. "Something is wrong in the page" can sometimes be difficult to understand.
Last week, Mike Taylor released a feature we have wanted for a long time: uploading a screenshot of the issue.
Let's go through an example. This site clearly exhibits an issue in the banner: the size of the blue top button, and the menu, which is gone from the main viewport.
When reporting the issue about this site on webcompat.com, we want to upload a screenshot to show it.
We could upload a full screenshot or just the part which shows the issue. There is a very practical feature in Firefox Developer Tools: Screenshot Node.
Go to the inspector, select the DOM node where the issue is visible, and call the contextual menu. In the list of features, you will discover: Screenshot Node.
Once done, you will find the screenshot that has just been taken in the usual browser download folder.
We can now head to WebCompat.com to report the issue.
At the bottom of the form there is an "Attach a screenshot image" option. Once attached, our report is fully complete with the screenshot of the node.
And if you are interested in the bug, it is probably a flexbox bug using the old syntax.
Otsukare!
http://www.otsukare.info/2015/09/02/webcompat-upload-screenshot
|
Daniel Stenberg: The TLS trinity dance |
In the curl project we currently support eleven different TLS libraries. That is 8 libraries and the OpenSSL “trinity” consisting of BoringSSL, libressl and of course OpenSSL itself.
You could easily be misled into believing that supporting three libraries that all have a common base would be really easy since they have the same API. But no, it isn’t. Sure, they have the same foundation, and all three have more in common than they differ, but still they all diverge in their own little ways, and from my standpoint libressl seems to be the one that causes us the least friction going forward.
Let me also stress that I’m but a user of these projects, I don’t participate in their work and I don’t have any insights into their internal doings or greater goals.
libressl is easy-peasy, very similar to OpenSSL. The biggest obstacle might be that the version numbering is different, so an old program that adjusts to different OpenSSL features based on version numbers (like curl does) needs some tweaking. There’s a convenient LIBRESSL_VERSION_NUMBER define to detect libressl with.
With OpenSSL itself, I regularly build curl against their git master to get an early head-start when they change things and break backwards compatibility. They’ve stepped that up since Heartbleed, and while I generally agree with their ambition to make more structs opaque instead of exposing all internals, it also hurts us over and over again when they remove things we’ve been using for years. What’s “funny” is that in almost all cases their response is “well, use this way instead”, and it has turned out that there’s an equally old API that is still there that we can use instead. That this is such a common pattern also says something about their documentation situation; it’s never been possible to grasp this from just reading the docs.
BoringSSL has made great inroads in the market and is used on Android now, and more. They don’t do releases(!) and have no version numbers, so the only thing we can do is build from git, and there’s no install target in the makefile. There are no docs for it, they remove APIs from OpenSSL (curl can’t support NTLM nor OCSP stapling when built with it), and they’ve changed several data types in the API, making it really hard to build curl without warnings. Funnily, they also introduced non-namespaced typedefs prefixed with X509_* that collide with other common headers.
A while ago we noticed BoringSSL had removed the DES_set_odd_parity function, which we use in curl. We changed the configure script to look for it and changed the code to survive without it. The lack of that function then also signaled that it wasn’t OpenSSL, it was BoringSSL.
BoringSSL moved around things that caused our configure script to no longer detect it as “OpenSSL compliant” because CRYPTO_lock could no longer be found by configure. We changed it to instead search for HMAC_Init and we were fine again.
Time passed and BoringSSL brought back DES_set_odd_parity, so our configure script no longer saw it as BoringSSL (the Android team fixed this problem in their git but never sent us the fix). We changed the configure script accordingly to properly use OPENSSL_IS_BORINGSSL instead to detect BoringSSL, which was the correct thing anyway, and as a bonus it can now detect and work with both new and old BoringSSL versions.
A short time after, I again tried to build curl against the OpenSSL master branch, only to realize they’ve deprecated HMAC_Init, which we had just recently switched to for detection (the configure script needs to check for a particular named function within a library to really know that it has detected said library and can use it). Sigh. We switched the “detect function” again, to HMAC_Update. Hopefully this one exists in all three and will stick around for a while…
Right now I think we can detect and use all three. It is only a matter of time until one of them will ruin that and we will adapt again.
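For illustration, telling the three apart at compile time comes down to their identifying defines, roughly like this (a generic sketch, not curl's actual detection code):
#include <openssl/opensslv.h>

#if defined(OPENSSL_IS_BORINGSSL)
/* BoringSSL: no releases and no version numbers to check */
#elif defined(LIBRESSL_VERSION_NUMBER)
/* libressl: has its own version numbering scheme */
#else
/* plain OpenSSL: check OPENSSL_VERSION_NUMBER as usual */
#endif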
http://daniel.haxx.se/blog/2015/09/02/the-tls-trinity-dance/
|
Monty Montgomery: Comments on the Alliance for Open Media, or, "Oh Man, What a Day" |
I assume folks who follow video codecs and digital media have already noticed the brand new Alliance for Open Media jointly announced by Amazon, Cisco, Google, Intel, Microsoft, Mozilla and Netflix. I expect the list of member companies to grow somewhat in the near future.
One thing that's come up several times today: People contacting Xiph to see if we're worried this detracts from the IETF's NETVC codec effort. The way the aomedia.org website reads right now, it might sound as if this is competing development. It's not; it's something quite different and complementary.
Open source codec developers need a place to collaborate on and share patent analysis in a forum protected by attorney-client privilege, something the IETF can't provide. AOMedia is to be that forum. I'm sure some development discussion will happen there, probably quite a bit in fact, but pooled IP review is the reason it exists.
It's also probably accurate to view the Alliance for Open Media (the Rebel Alliance?) as part of an industry pushback against the licensing lunacy made obvious by HEVCAdvance. Dan Rayburn at Streaming Media reports that a third HEVC licensing pool is about to surface. To date, we've not yet seen licensing terms on more than half of the known HEVC patents out there.
In any case, HEVC is becoming rather expensive, and yet increasingly uncertain licensing-wise. Licensing uncertainty gives responsible companies the tummy troubles. Some of the largest companies in the world are seriously considering starting over rather than bet on the mess...
Is this, at long last, what a tipping point feels like?
Oh, and one more thing--
As of today, just after Microsoft announced its membership in the Alliance for Open Media, they also quietly changed the internal development status of Vorbis, Opus, WebM and VP9 to indicate they intend to ship all of the above in the new Windows Edge browser. Cue spooky X-Files theme music.
|
Smokey Ardisson: Quoting John Gruber |
Their old logo was goofy. This new one is simply garbage.
The redeeming quality of the previous Google logos was the whimsy, while the serifs kept them from looking too childlike or immature.
The new logo, however, is simply puerile. Someone needs to take away the Play-Doh.
http://www.ardisson.org/afkar/2015/09/02/quoting-john-gruber/
|
About:Community: Participation Lab Notes: Short Simple Tasks Increase Engagement |
Every year, about 7 million people come to the contribute page on Mozilla.org looking for information about how to get involved with Mozilla. These visitors represent an exciting opportunity to increase the number of long-term relationships Mozilla has with people who are passionate about the open web.
Right now, however, only about 0.76% of those people (roughly 53,000 a year) ever register to contribute. And even once they're registered, how we connect them to their contribution area of interest is far from optimized.
In partnership with the Mozilla.org team, the Participation Team has set out on a medium-term project to try and maximize the potential of this page. Our aim is to make it a better tool for potential contributors, Mozilla, and the mission.
This first experiment set out to figure out how to increase engagement with tasks presented to a visitor after they said they were interested in contributing. We designed and ran an A/B test over a few weeks, presenting visitors with different types of tasks.
Keep reading to learn more about the results and the future iterations and experiments we have planned.
In this A/B test we created four new versions of the contribute sign-up page. Each page presented viewers with a number of tasks categorized either as simple/challenging (variation a) or as taking a little time/more time (variation b). For each of these variations we displayed either 2 or 6 tasks per page.
For the purpose of this first test we did not track engagement past choosing a specific task, and we did not track engagement with individual tasks, measuring only the category of the tasks selected (e.g. simple/challenging or little/more time).
The major findings from this first round of testing were that visitors to the contribute page preferred simple tasks to challenging tasks, and shorter tasks to longer tasks. We also found that presenting more options (6) increased the number of people who chose to pursue a task instead of selecting the “not ready” button. Finally, across all of the page variations, we found that the percentage of people who engaged with the pages (either by choosing a task or selecting “not ready”) was 60%, much higher than the 15% engagement rate of the original page.
Based on all of the results, we can infer that the majority of visitors to the contribute page prefer easier, low-barrier tasks, and prefer a handful of choices to just 2.
However, there was still a minority of individuals who expressed a preference for more challenging tasks — they may represent skilled contributors who are ready to become deeply engaged in complex projects. In coming iterations of this experiment we will be exploring the level of readiness and skill of these contributors, and whether individuals from this group are more likely to become core contributors than those who selected simpler tasks.
Overall, the enormous engagement rate by both groups further supports the amazing opportunity latent in this page.
Over the coming months we will undertake several rapid iterations on this test. There is a great deal still to be learned, however we believe that through experimenting with participation in venues like this we can unlock huge potential to help people engage with Mozilla in high impact ways.
Over the next few weeks the Participation team will be working to design a second experiment and we will focus on:
Other topics we will seek to explore in the longer-term include:
This project is currently owned by Lucy from the Participation Team in conjunction with the Mozilla.org Team. If you have any questions, ideas for future tests, or if you just want to chat about the future of this page please do not hesitate to get in touch!
|
Air Mozilla: Webdev Extravaganza: September 2015 |
Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.
|
The Mozilla Blog: Forging an Alliance for Royalty-Free Video |
Things are moving fast for royalty-free video codecs. A month ago, the IETF NETVC Working Group had its first meeting and two weeks ago Cisco announced Thor. Today, we’re taking the next big step in this industry-wide effort with the formation of the Alliance for Open Media. Its founding members represent some of the biggest names in online video, such as Netflix, Amazon, and YouTube, multiple browser vendors including Mozilla, Microsoft, and Google, and key technology providers like Cisco and Intel. The Alliance has come together to share technology and run the kind of patent analysis necessary to build a next-generation royalty-free video codec.
Mozilla has long championed royalty-free codecs. The Web was built on innovation without asking permission, and patent licensing regimes are incompatible with some of the Web’s most successful business models. That’s why we already support great codecs like VP8, VP9, and Opus in Firefox. But the Web doesn’t stand still and neither do we. As resolutions and framerates increase, the need for more advanced codecs with ever-better compression ratios will only grow. We started our own Daala project and formed NETVC to meet those needs, and we’ve seen explosive interest in the result. We believe that Daala, Cisco’s Thor, and Google’s VP10 combine to form an excellent basis for a truly world-class royalty-free codec.
In order to allow us to move quickly, the alliance is structured as a Joint Development Foundation project. These are an ideal complement to a larger, open standards organization like the IETF: One of the biggest challenges in developing open standards in a field like video codecs is figuring out how to review the patents. The Alliance provides a venue for us to share the legal legwork without having to worry about it being used against us down the road. That distributes the load, allows us to innovate faster and cheaper, and gives everyone more confidence that we are really producing a royalty-free codec.
The Alliance will operate under W3C patent rules and release code under an Apache 2.0 license. This means all Alliance participants are waiving royalties both for the codec implementation and for any patents on the codec itself. The initial members are just a start. We invite anyone with an interest in video, online or off, to join us.
For further information please visit www.aomedia.org or view the press release.
https://blog.mozilla.org/blog/2015/09/01/forging-an-alliance-for-royalty-free-video/
|
Air Mozilla: Martes mozilleros |
Bi-weekly meeting to talk about the state of Mozilla, the community and its projects.
|