Adam Lofting: Getting Bicho Running as a process on Heroku with a Scheduler |
I recently installed a local copy of Bicho, and ran this against some products on Bugzilla to test it out. It generates a nicely structured relational database including the things I want to count and feed into our contributor numbers.
This morning I got this running on Heroku, which means it can run periodically and update a hosted DB, which can then feed numbers into our dashboard.
This was a bit of trial and error for me, as all the work I’ve done with Python was within Google App Engine’s setup and my use of Heroku has been for Node apps, so these notes are to help me out some time in the future when I look back at this.
$ pip freeze
generates a list of the requirements from your working local environment, e.g.
BeautifulSoup==3.2.1
MySQL-python==1.2.5
feedparser==5.1.3
python-dateutil==2.2
six==1.6.1
storm==0.20
wsgiref==0.1.2
Copy this into a requirements.txt file in the root of your project, but remove the line Bicho==0.9 (otherwise Heroku tries to install Bicho via pip, which fails).
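For future reference, that filtering step can be scripted too. This is just a sketch; the helper name is mine, not anything from Bicho or Heroku:

```python
# Hypothetical helper (not part of Bicho or Heroku): filter the output
# of `pip freeze` so requirements.txt omits packages that pip cannot
# install from PyPI, such as the locally installed Bicho.
def strip_unbuildable(freeze_output, blocked=("Bicho",)):
    kept = []
    for line in freeze_output.splitlines():
        name = line.split("==")[0]
        if name not in blocked:
            kept.append(line)
    return "\n".join(kept)
```

Feed it the captured `pip freeze` output and write the result to requirements.txt.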
Heroku’s notes on specifying dependencies.
You can now push this to Heroku.
Then, I ran:
$ heroku run python setup.py
But I’m actually not sure if that was required.
Then you can run Bicho remotely via heroku run commands:
$ heroku run python bin/bicho --db-user-out=yourdbusername --db-password-out=yourdbuserpassword --db-database-out=yourdbdatabase --db-hostname-out=yourdbhostname -d 5 -b bg --backend-user 'abugzilla@exampleuser.com' --backend-password 'bugzillapasswordexample' -u 'https://bugzillaurl.com?etc'
As a general precaution for anything like this, don’t use a user account that has any special privileges. I create duplicate logins that have the same level of access available to any member of the public.
Once you’ve got a command that works here, cancel the running script as it might have thousands of issues left to process.
Then set up a scheduler: https://devcenter.heroku.com/articles/scheduler
$ heroku addons:add scheduler:standard
$ heroku addons:open scheduler
Copy your working command into the scheduler, just without the ‘heroku run’ part:
python bin/bicho --db-user-out=yourdbusername --db-password-out=yourdbuserpassword --db-database-out=yourdbdatabase --db-hostname-out=yourdbhostname -d 5 -b bg --backend-user 'abugzilla@exampleuser.com' --backend-password 'bugzillapasswordexample' -u 'https://bugzillaurl.com?etc'
If you set this to run every 10 mins, the process will cycle and get killed periodically, but the logs usefully show you how the import is progressing.
–
I’m generally happy with this as a solution for counting contributors in Webmaker’s issue tracking history, but would need to work on some speed issues if this was of interest across Mozilla projects.
Currently, this is importing about 400 issues an hour, which would make processing the 1,000,000+ bugs in bugzilla.mozilla.org problematic. But that’s not a problem to solve right now. And not necessarily the way you’d want to do that either.
http://feedproxy.google.com/~r/adamlofting/blog/~3/tCFTQVu3SZs/
|
Asa Dotzler: Flame – Firefox OS Reference Phone |
While attending the Mozilla Summit 2013 and talking with our community about the exciting future of Firefox OS, the one concern I heard voiced most often was the difficulty of participating in Firefox OS. It was simply out of reach to most people I talked to.
No doubt, Firefox OS is an open source project. The code’s been there since before day 1. But access to source code does not a successful open source project make. To many I’ve spoken with, Firefox OS felt far less participatory than Firefox the browser. The primary reasons for this, IMO, are the lack of widely available Firefox OS hardware and regular Firefox OS testing binaries.
With Firefox the browser, anyone anywhere in the world can download and get updates for a Mozilla-hosted binary of Firefox on a daily basis. They can download the active development “Nightly” build to see the changes that landed in the browser since yesterday. They can also download and get updates for the more stable “Aurora” and “Beta” channels. And because Firefox runs on Mac, Windows, and Linux, most people had no problem trying it out on their existing computers.
Firefox the browser is easy to try out. Firefox OS is not.
With Firefox OS, only Mozilla employees or employees of Mozilla partners can download development builds and get updates. This is because there are a few pieces down in the phone’s software stack that are not Mozilla’s code and for which Mozilla doesn’t have license to distribute. And even if our community could download development builds of Firefox OS, the hardware to put them on has been very limited. If you’re not in a region with a Mozilla partner shipping Firefox OS phones, your options were few.
Mozilla tried to hack around the problems a couple of times over the last year, with limited success.
Flame is Mozilla’s investment in solving those problems for real.
Mozilla partnered with a company called Thundersoft to design, build, and update a phone that contains all of the hardware we’re targeting in the next year or so. Thundersoft has already made and delivered 2,000 Flame phones to Mozilla and we are rolling those out now to the core Firefox OS teams and community. We’ve got the retail site up which is selling an additional 5,000 Flame phones, and we’re hard at work on making nightly builds available to these devices.
The Flame phone won’t fix everything, but it should go a long way towards empowering our community to participate in Firefox OS and to generate the kinds of community impact on scale that we’ve seen in our Firefox browser community for years.
http://asadotzler.com/2014/05/28/flame-firefox-os-reference-phone/
|
Armen Zambrano: How to create local buildbot slaves |
virtualenv ~/venvs/buildbot-slave
NOTE: You can figure out what to install by looking in here: http://hg.mozilla.org/build/puppet/file/ad32888ce123/modules/buildslave/manifests/install/version.pp#l19
source ~/venvs/buildbot-slave/bin/activate
pip install zope.interface==3.6.1
pip install buildbot-slave==0.8.4-pre-moz2 --find-links http://pypi.pub.build.mozilla.org/pub
pip install Twisted==10.2.0
pip install simplejson==2.1.3
buildslave create-slave /builds/build_slave localhost:9000 bld-linux64-ix-060 pass
buildslave create-slave /builds/test_slave localhost:9001 tst-linux64-ec2-001 pass
source ~/venvs/buildbot-slave/bin/activate
buildslave start /builds/build_slave
buildslave start /builds/test_slave
|
Marco Bonardo: Bookmarks backups respin |
Part of the performance improvements we planned for Places, the history, bookmarking and tagging subsystem of Firefox, involved changes to the way we generate bookmarks backups.
As you may know, Firefox stores a backup of your bookmarks into the profile folder almost every day, as a JSON file. The process, so far, had various issues:
The first step was to reorganize the code and APIs, Raymond Lee and Andres Hernandez took care of most of this part. Most of the existing code was converted to use the new async tools (like Task, Promises and Sqlite.jsm) and old synchronous APIs were deprecated with loud console warnings.
The second step was dedicated to some user-facing improvements. We must ensure the user can revert to the best possible status, but it was basically impossible to distinguish backups created before a corruption from the ones created after it, so we added the size and bookmarks count to each backup. They are now visible in the Library / Restore menu.
Once we had a complete async API exposed and all of the internal consumers were converted to it, we could start rewriting the internals. We replaced the expensive Places result with a direct async SQL query reading the whole bookmarks tree at once. Raymond started working on this, I completed and landed it and, a few days ago, Asaf Romano further improved this new API. It is much faster than before (initial measurements have shown a 10x improvement) and it's also off the main-thread.
Along the way I also removed the backup-on-shutdown code, in favor of an idle-only behavior. Before this change we were trying to back up during an idle of 15 minutes; if we could not find a long enough interval we were enforcing a backup on shutdown, which in some cases delayed the browser shutdown by seconds. Currently we look for an 8-minute idle; if after 3 days we cannot find a long enough interval, we cut the idle requirement to 4 minutes (note: we started from a larger time and just recently tweaked it based on telemetry data).
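As a rough sketch of that scheduling policy (the real implementation is JavaScript inside Firefox; the names and structure here are mine, only the constants come from the description above):

```python
# Sketch of the idle-backup policy described above, not Firefox's code.
IDLE_SECONDS = 8 * 60          # normally wait for an 8-minute idle
REDUCED_IDLE_SECONDS = 4 * 60  # fall back to a 4-minute idle...
MAX_WAIT_DAYS = 3              # ...once no backup has happened for 3 days

def required_idle_seconds(days_since_last_backup):
    if days_since_last_backup >= MAX_WAIT_DAYS:
        return REDUCED_IDLE_SECONDS
    return IDLE_SECONDS
```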
At that point we had an async backups system, a little bit more user-friendly and doing less I/O. But we still had some issues to resolve.
First, we added an md5 hash to each backup, representing its contents, so we can avoid replacing a still valid backup, thus providing more meaningful backups to the user and reducing I/O considerably.
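The hash check can be illustrated with a small sketch (again, the real code is Firefox JavaScript; this helper and its names are mine): compute a digest of the serialized bookmarks and skip the write when an identical backup already exists.

```python
import hashlib

# Illustrative sketch of the md5 dedup check described above.
def new_backup_digest(serialized, existing_digests):
    digest = hashlib.md5(serialized).hexdigest()
    if digest in existing_digests:
        return None  # a byte-identical backup already exists: no I/O needed
    return digest
```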
Then the only remaining piece of work was to reduce the footprint of backups in the profile folder, both for space and I/O reasons. Luckily we have an awesome community! Althaf Hameez, a community member, volunteered to help us complete this work. Bug 818587 landed recently, providing lz4-like compression for the backups: automatic backups are compressed and have a .jsonlz4 extension, while manual backups are still plain text, to allow sharing them easily with third-party software or previous versions.
Finally, I want to note that we are now re-using most of the changed code for bookmarks.html files as well. While these are no longer our main export format, we still support them for default bookmarks import and bookmarks exchange with third-party services or other browsers. So we obtained the same nice perf improvements for them.
Apart from some minor regressions, which are currently being worked on by Althaf himself, who kindly accepted to help us further, we are at the end of this long trip. If you want to read some more gory details about the path that brought us here, you can sneak into the dependency tree of bug 818399. If you find any bugs related to bookmark backups, please file a bug in Toolkit / Places.
http://blog.bonardo.net/2014/05/28/bookmarks-backups-redesign
|
Pete Moore: Weekly review 2014-05-28 |
Have been working on:
Bug 1013885 – inband1.r202-4.console.scl3.mozilla.net is DOWN :PING CRITICAL - Packet loss = 100%
Bug 1004570 – Jacuzzi for Update Verify, so we can use a persistent cache across jobs
Bug 994905 – tst-linux64-ec2-dminor problem tracking
Bug 978928 – Reconfigs should be automatic, and scheduled via a cron job
Bug 947202 – (bld-lion-r5-086) bld-lion-r5-086 problem tracking
Bug 937732 – Tracker bug: HG local disk migration
Bug 913870 – Intermittent panda “Dying due to failing verification”
Bug 910745 – Third party repositories listed in b2g-manifest should always reference a tag/revision
Bug 905742 – Provide B2G Emulator builds for Darwin x86
Bug 847640 – db-based mapper on web cluster
Bug 825889 – (bld-lion-r5-087) bld-lion-r5-087 problem tracking
Bug 803087 – (bld-centos6-hp-019) bld-centos6-hp-019 problem tracking
Bug 771560 – (tegra-073) tegra-073 problem tracking
Bug 751962 – (tegra-028) [staging] tegra-028 problem tracking
|
Zack Weinberg: a small dispatch from the coalface |
category | count | % |
---|---|---|
total | 5 838 383 | 100.000 |
ok | 2 212 565 | 37.897 |
ok (redirected) | 1 999 341 | 34.245 |
network or protocol error | 798 231 | 13.672 |
timeout | 412 759 | 7.070 |
hostname not found | 166 623 | 2.854 |
page not found (404/410) | 110 241 | 1.888 |
forbidden (403) | 75 054 | 1.286 |
service unavailable (503) | 18 648 | 0.319 |
server error (500) | 15 150 | 0.259 |
bad request (400) | 14 397 | 0.247 |
authentication required (401) | 9 199 | 0.158 |
redirection loop | 2 972 | 0.051 |
proxy error (502/504/52x) | 1 845 | 0.032 |
other HTTP response | 1 010 | 0.017 |
crawler failure | 329 | 0.006 |
syntactically invalid URL | 19 | 0.000 |
Sorry about the non-tabular figures.
https://www.owlfolio.org/research/a-small-dispatch-from-the-coalface/
|
Adam Lofting: Are we on track with our 2014 contributor goals? |
I presented a version of this on the Mozilla Foundation staff call yesterday, and thought it was worth a write-up for those who weren’t on the call and those in MoCo working on related things.
One of the cross-Mozilla goals this year is to “10X” the number of active contributors, with a longer-term goal of growing to a million Mozillians.
When the 10X goal was set we weren’t really sure what X was, for valid reasons; defining contribution is as much art as it is science, and the work plans for this year include building the tools to measure this in a systematic way. The goals justify the tools, and vice versa. Chicken and egg.
2,000 contributors were invited to the summit, so the target was set at 20k active contributors shared between MoCo and MoFo. MoFo have been working to a target of 10k contributors but in practice this isn’t going to be a clean 50/50 split and there will be overlap in contributors across teams, projects and MoFo/MoCo. For example, 10k MoCo contributors + 10k MoFo contributors could = 19k Mozilla contributors.
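A toy illustration of why the team counts don’t simply add up (the names here are made up):

```python
# Contributors active on both sides are counted once in the combined
# total, so two team counts can sum to less than their arithmetic total.
moco = {"alice", "bob", "carol"}
mofo = {"carol", "dan"}
combined = moco | mofo  # set union, not a sum
```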
When I joined in January, each of the MoFo teams did some (slightly tedious) manual counting and estimated their contributor numbers for 2013, and we added these up to a theoretical 5,600 contributors. This was our baseline number. Useful to an order of magnitude, but not precise.
This 5,600 baseline suggests that 10k contributors is quite far off 10X contributors (at 10X we’d have been aiming for 50k+), but 10k is still going to be a challenging goal.
From the data that’s emerging, 10k active contributors to MoFo feels like a sane but stretching target.
With the recent forming of the Badge Alliance, some MoFo goals are now Badge Alliance goals, and the same goes for counting people contributing to parts of the Open Badge ecosystem. As a result, our theoretical 5,600 contributor number got smaller. It’s now 4,200.
So 4,200 is where we assumed we started this year, but we haven’t proved this yet. And realizing this measurement has been our priority metrics project this year.
We’ve been automating ways to count these ‘theoretical’ contributors, and feeding them into our dashboard.
But to date, as we’ve looked at the dashboard and the provable number was 1,000 or 2,000 or so, we would then say “but the real number is actually closer to 5,000”. This means the dashboard hasn’t been very useful yet, as the theoretical number always trumped the provable but incomplete number.
This will change in the next few weeks.
We’re now counting nearly all of those theoretical pots of contribution ‘live’.
And the dashboard is at 2,800.
Once we add the Webmaker mentors who complete their training this year, and anything else that goes into the ad-hoc contribution logger, we’re basically at our real comparison point to that theoretical number, and we can drop the ‘theoretical’ bit.
If there’s a thousand mentors and another four hundred contributors added to the ad-hoc logger, our theoretical estimate will be remarkably close to reality. Except that it’s six months behind where we thought it would be.
We’re getting close to that 4,200, but we expected (albeit pretty loosely) to be there in January.
This either means that:
(A) the growth shown on the graph to-date is an artifact of missing historical data, and we’re actually trending pretty flat.
(B) our 2013 estimates were too high and we started this year with fewer contributors than we thought, but we’ve been growing to date.
As we don’t have time-stamped historical data for some of these things, we’re not going to know which it was for sure. But either way, we now need to increase the rate at which we on-board new contributors to hit 10k by the end of the year.
There are plans in place for growing contribution numbers, but this is going to be active work.
Whether that’s converting new webmaker users who join us through Maker Party, or reducing barriers to contributing code, or actively going out and asking people if they want to contribute, growing that contributor number is going to be a combination of good processes and marketing.
I’ll be making this MoFo total number smaller by X% when we integrate the data into a single location and de-dupe people across these activities. But we don’t know what X% is yet. That’s just something to be aware of.
In relation to the points on there not being a clear MoCo/MoFo split in where people contribute, we’re much more directly connecting up the systems and processes now. We’ll have more to share on this in the coming weeks.
http://feedproxy.google.com/~r/adamlofting/blog/~3/lgHY5osdpTw/
|
Alex Vincent: Compacting XUL interfaces? |
For my day job, I work at a startup, basically as an expert in Mozilla technologies. I love what I do, too. But whenever I do user-interface work, I frequently run into a simple problem: screen real estate.
Case in point, my latest work with XML entities and entity references on the Verbosio XML editor project. (I can’t really talk about details of my work-related code here, but I can talk about the pet project.) The demo panel for this, which doubles as a laboratory for my experiments, has four major sections. The upper left corner holds a CodeMirror instance for one DTD document. The upper right corner holds another CodeMirror instance, for a second DTD. The middle horizontal section holds a third CodeMirror instance, for the XML document that might load the DTD documents. Each of these has some ordinary XUL buttons on the right edge and a menulist, to help me control the individual sections. In the bottom third, I have a XUL deck which can toggle between an iframe and a XUL tree showing the document object model of the XML I’ve parsed. To the right of this XUL tree, I plan on adding a bit more, for showing the entities defined in the doctype or the entity references on a node.
All of this UI lives inside a tabbox. As you can see, it’s already a little crowded, though fortunately, it’s almost complete in terms of features I need.
I can’t do a fair comparison against the Mozilla Firefox user-interface; the main windows don’t have textboxes for source code or trees for DOM views. So their controls’ relative sizes don’t come close to mine: they’re much flatter.
The built-in developer tools, though, do have an elegance to them, and are a fair comparison. The right side panel, showing variables and events, can collapse away (and the animation’s pretty nice, too). The left side panel has a listbox (I think) of scripts to choose from, and when you select one (either in call stack or in sources), the equivalent source code appears in the center. Plus, they have some really tiny icon buttons in the UI, much smaller than the standard XUL buttons I use. The devtools UI gives you basically what you need and otherwise tries to get out of your way.
Dear lazyweb of Mozilla UI specialists: Can you point me to a developer.mozilla.org document with some guidelines for efficiently using the screen real estate? My user-interface works, but I need some tips & tricks for making the most of it. It could be simple stuff like shrinking buttons, or it could be creating new XBL bindings to masterfully present common ideas together. I’m not willing to go to HTML5's Canvas for this. But my experience has largely been in components and JavaScript, not in crafting UI’s…
Or maybe it’s time for me to re-read Jenifer Tidwell’s excellent book from 2006, “Designing Interfaces”. (I have the first edition.)
|
Daniel Stenberg: Crashed and recovered in no time |
Working from home, even writing software from home, my computer setup is pretty crucial for a productive work day.
Yesterday morning after I had sat down with my coffee and started to work on my latest patch iteration I noticed that some disk operations seemed to be very slow. I looked around and then suddenly an ‘ls’ of a directory returned an error!
I checked the system logs and saw them filling up with error messages identifying problems with a hard drive. Very quickly I identified the drive as the bigger one (I have one SSD and one much larger HDD). Luckily, that’s the one I mostly store documents, pictures and videos on, and I back up that thing every night. This disk is not very old and I’ve never experienced this sort of disk crash before, not even with disks that I’ve used for many more years than this one…
I ripped the thing out, booted up again and I could still work since my source code and OS are on the SSD. I ordered a new one at once. Phew.
Tuesday morning I noticed that for some unexplainable reason I had my /var partition on the dead drive (and not backed up). That turned out to be a bit inconvenient since now my Debian Linux had no idea which packages I had installed and apt-get and dpkg were all crippled to death.
I did some googling and, as my laptop is also a Debian sid install, I managed to restore it pretty swiftly by copying over data from there. At least the /var contents are now mostly back to where they were before.
On Tuesday midday, some 26 hours after I ripped out the disk, my doorbell bing-bonged and the delivery guy handed me a box with a new and shiny 3 TB drive. A couple of hours ago I inserted it, partitioned it, read back a couple of hundred gigabytes of backup, put the backup job back in cron and … yeah, I think I’m basically back to where I was before it went south.
All in all: saved by the backup. Not many tears. Phew this time.
http://daniel.haxx.se/blog/2014/05/28/crashed-and-recovered-in-no-time/
|
Jess Klein: Dear Massimo |
|
Planet Mozilla Interns: Mihnea Dobrescu-Balaur: Caching For the Win |
It seems like every now and then, while working on something, I get reminded that no matter how much you optimize any given process, most of the time, caching will give better results - even with no (other) optimizations.
When you think about it, it's pretty obvious - not doing any work at all is better than doing "some" work. Nevertheless, we still look out for web framework performance, NoSQL access times and whatnot. The Disqus guys have a great blog post on how they used Varnish to scale their systems to huge numbers while using a "slow" Python-based stack.
I got bit by this recently while working on a school assignment. We had to solve the Expedia ranking problem, working with some CSV data totalling about 15 million entries. To save some time by not having to parse CSV all the time, I decided to take advantage of today's technologies and use MongoDB to store the entries. Its document-oriented, schemaless approach made sense for the data, since there were a lot of missing values.
Starting to work on the problem, I had to deal with a fairly lengthy feedback loop because, even with Mongo being webscale and all, processing took some time (I was using a laptop). To improve this, I started tweaking the interactions with the database, reduced the accessed data to the smallest subset possible and so on. I did not get very far - it was still too slow. Then I zoomed out a bit and realised that I’d been doing it wrong from the start. I’m sure it’s no surprise by now: I added a caching layer (in the form of memoization) and that yielded great results.
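A minimal memoization sketch in the spirit of that fix (the function names are illustrative, not from the assignment): cache each result so repeated lookups never hit the slow data store.

```python
# Memoize: remember every computed result keyed by its argument.
def memoize(fn):
    cache = {}
    def wrapper(arg):
        if arg not in cache:
            cache[arg] = fn(arg)
        return cache[arg]
    return wrapper

calls = []

@memoize
def slow_feature(key):
    calls.append(key)  # stand-in for a slow MongoDB query
    return key * 2
```

Calling `slow_feature(21)` twice runs the underlying computation only once; the second call is a dictionary lookup.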
To sum up, when tackling a problem, don't disregard a simple solution for a shiny one. It might turn out to be just fool's gold.
|
Wladimir Palant: Proxies breaking up SSL connections? Yes, all the time... |
Some months ago I was wondering why some Firefox installations appear to not support strong encryption. After analyzing the SSL handshakes on one of the filter download servers used by Adblock Plus, I am now mostly confident that the reason is proxy servers essentially conducting Man-in-the-Middle (MitM) attacks. Normally, a proxy server can only forward SSL data to its destination; it can neither modify nor read the data, due to encryption. MitM proxies, however, pose as the destination server, which allows them to manipulate the data in any way they like. For that they have to encrypt the communication with a certificate that is valid for the destination server; usually this happens by installing a new root certificate on the client’s computer.
I used ssldump to record 3294 SSL handshakes. Not a terribly large sample, yet it already contained lots of entries where the client’s SSL support didn’t match the user agent from the web server logs:
If that sample is representative in any way, it would mean that roughly 0.5% of internet users are behind a proxy server that will intercept their encrypted data. Why is that a problem?
For reference, how did I recognize the browsers in the ssldump output? This turned out to be pretty simple, each browser has its very distinct list of supported ciphers that it sends to the server. Conveniently, that’s the very first packet that ssldump will record for a connection. For Firefox 29 it looks like this:
ClientHello
Version 3.3
cipher suites
Unknown value 0xc02b
Unknown value 0xc02f
Unknown value 0xc00a
Unknown value 0xc009
Unknown value 0xc013
Unknown value 0xc014
Unknown value 0xc012
Unknown value 0xc007
Unknown value 0xc011
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_DSS_WITH_AES_128_CBC_SHA
Unknown value 0x45
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_DSS_WITH_AES_256_CBC_SHA
Unknown value 0x88
TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
Unknown value 0x41
TLS_RSA_WITH_AES_256_CBC_SHA
Unknown value 0x84
TLS_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_RC4_128_SHA
TLS_RSA_WITH_RC4_128_MD5
Chrome 35 sends a different list:
ClientHello
Version 3.3
cipher suites
Unknown value 0xcc14
Unknown value 0xcc13
Unknown value 0xc02b
Unknown value 0xc02f
Unknown value 0x9e
Unknown value 0xc00a
Unknown value 0xc009
Unknown value 0xc013
Unknown value 0xc014
Unknown value 0xc007
Unknown value 0xc011
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_DSS_WITH_AES_128_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
Unknown value 0x9c
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_RC4_128_SHA
TLS_RSA_WITH_RC4_128_MD5
Depending on the Firefox or Chrome version you might get a different list, as SSL support in the browsers has evolved. In fact, even the Chromium-based Opera and YandexBrowser tweak the SSL support and send a distinctly different list. Note “Version 3.3” above: this means TLS 1.2 support (“Version 3.0” is SSLv3 and “Version 3.1” stands for TLS 1.0). For comparison, here is the output for one of the proxies:
SSLv2 compatible client hello
Version 3.1
cipher suites
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_DSS_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
SSL2_CK_3DES
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_DSS_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_IDEA_CBC_SHA
SSL2_CK_IDEA
SSL2_CK_RC2
TLS_RSA_WITH_RC4_128_SHA
TLS_RSA_WITH_RC4_128_MD5
SSL2_CK_RC4
TLS_DHE_RSA_WITH_DES_CBC_SHA
TLS_DHE_DSS_WITH_DES_CBC_SHA
TLS_RSA_WITH_DES_CBC_SHA
SSL2_CK_DES
TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
SSL2_CK_RC2_EXPORT40
TLS_RSA_EXPORT_WITH_RC4_40_MD5
SSL2_CK_RC4_EXPORT40
Unknown value 0xff
This particular proxy is still compatible with SSLv2, which is no longer considered secure. In fact, Mozilla disabled SSLv2 support starting with Firefox 8, which was released in November 2011. Other browser vendors did it years ago as well.
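The identification technique described above boils down to a lookup from the ordered cipher-suite list to a known client. A sketch, using abbreviated fingerprints (the real ones are the full ordered lists from ssldump, not these three-entry samples):

```python
# Sketch: fingerprint clients by the order of their ClientHello cipher
# suites. Truncated example fingerprints, for illustration only.
FINGERPRINTS = {
    ("0xc02b", "0xc02f", "0xc00a"): "Firefox 29",
    ("0xcc14", "0xcc13", "0xc02b"): "Chrome 35",
}

def identify_client(cipher_suites):
    return FINGERPRINTS.get(tuple(cipher_suites),
                            "unknown (possibly a MitM proxy)")
```

An unrecognized list, like the SSLv2-compatible one above, falls through to the “unknown” bucket, which is exactly how the mismatches with the logged user agents show up.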
https://palant.de/2014/05/27/proxies-breaking-up-ssl-connection-yes-all-the-time
|
Staś Małolepszy: Pseudolocales for Firefox OS |
With bug 900182 fixed, it is now possible to enable pseudolocales in developer builds of Firefox OS. Pseudolocales are based on US English and are automatically generated according to strategies defined in build/l10n.js.
Presently, two pseudolocales can be enabled in custom Gaia builds:
|
Jennie Rose Halperin: Resource list for libraries and coding |
I’ve been working on this for a little while and wanted to invite all librarians, tumblrians, archivists, and others to contribute!
https://librarianistas.etherpad.mozilla.org/librarians-and-coding (pw: library-code) feel free to hack on it! Comment away if you have any questions.
http://jennierosehalperin.me/resource-list-for-libraries-and-coding/
|
Jared Wein: New in Firefox Nightly: Experimenting with context menus |
Starting today, users of Firefox Nightly will see a new look to the classic context menu.
Context menus on desktop browsers have changed very little since Firefox 1.0 was introduced. Meanwhile, new devices have brought new concepts to context menus. The context menu on Firefox for Android is much more graphical, showing recognizable symbols at a glance.
Switching frequently used menuitems to their iconic forms can improve the usability of the menu, as it makes the menuitems easier to find at a glance as well as easier to click on. One way to visualize the difference is by performing what is known as a “squint test”. The image on the left is the old Firefox context menu, and the image on the right is the new Firefox context menu.
Squint test of old (left) vs. new (right) context menu (Gaussian blur=3)
Looking at the squint test above, not only is it easier to see the actions of the buttons at the top, but we can also see that the new menu feels a bit leaner.
We don’t have plans to switch all menuitems over to their iconic forms, mainly because many menuitems lack a well-understood graphical metaphor. We’ll keep experimenting with our context menus, hopefully adding the ability to customize them just like the rest of Firefox.
Known issues: The context menus found in today’s Firefox Nightly are still missing a couple of finishing touches that we are going to follow up on:
http://msujaws.wordpress.com/2014/05/27/experimenting-with-context-menus/
|
Sylvestre Ledru: Changes Firefox 30 beta6 to beta7 |
Extension | Occurrences |
js | 10 |
jsm | 7 |
java | 7 |
cpp | 7 |
html | 6 |
h | 4 |
xml | 3 |
ini | 2 |
sh | 1 |
json | 1 |
css | 1 |
Module | Occurrences |
dom | 11 |
toolkit | 10 |
mobile | 10 |
layout | 5 |
browser | 4 |
js | 3 |
content | 2 |
services | 1 |
hal | 1 |
build | 1 |
List of changesets:
Ben Turner | Bug 1003766, StopSyncLoopRunnable::Cancel should call base class Cancel. r=mrbkap, a=lsblakk. - 7d980d9af355 |
Boris Zbarsky | Bug 976920 - Mostly back out Bug 932322 for now; only define the unforgeable properties on the window object itself. r=jst, a=lsblakk - 1c26f6798184 |
Drew Willcoxon | Bug 998303 - browser/base/content/test/general/browser_urlbar_search_healthreport.js attempts to connect to www.google.com. r=mak, a=test-only - 2b437d292f56 |
Nathan Froyd | Bug 1010322 - Change toolkit/mozapps/extensions/test/browser/ tests to point at actual http servers. r=jmaher, a=test-only - 8438a548150d |
Ben Turner | Bug 999274 - Wait for the last runnable before calling ShutdownScriptLoader. r=sicking, a=abillings - 50428e91f0bc |
Daniel Holbert | Bug 1000185 - Part 1: Perform synchronous SMIL sample after registering with refresh driver, not before, for consistency. r=birtles, a=abillings - 542f83ec6345 |
Daniel Holbert | Bug 1000185 - Part 2: Add a bool to keep track of whether nsSMILAnimationController instances are registered with a refresh driver. r=birtles, a=abillings - cb78c3777143 |
Marco Bonardo | Bug 992901 - Not all bookmarks are saved in the backup JSON or HTML. r=mano a=sylvestre - 3fb029a11c05 |
Jim Chen | Bug 993261 - Remove legacy code for redirecting key events to URL bar. r=lucasr, a=sledru - 869aefb78e22 |
Alexandre Lissy | Bug 1000337 - Make NotificationStorage cache origin-aware. r=mhenretty, a=abillings - d520b0344613 |
Dão Gottwald | Bug 987859 - TabsToolbar margin needs to be dropped when entering fullscreen mode rather than when the sizemode attribute changes, which is too late. r=gijs, a=sledru - 2925e9a0a33d |
Benjamin Smedberg | Bug 959356 - Fix the spelling and type of the isWow64 measurement. r=rnewman, a=sledru - ff1925aa6f85 |
Marco Bonardo | Bug 997030 - Don't encodeURI twice in bookmarks.html. r=mano a=sylvestre - 05baa07365d9 |
Lukas Blakk | updating EARLY_BETA_OR_EARLIER a=release-mgmt - 77795a696555 |
Matteo Ferretti | Bug 980714 - Remove blinking caret in panel text. r=gozala, a=lsblakk - 4363817b56ca |
Randell Jesup | Bug 921622 - AudioStream rework. r=padenot, a=lsblakk - eaa2b716ce89 |
Ryan VanderMeulen | Backed out changeset eaa2b716ce89 (Bug 921622) for mochitest-1 asserts. - b7913b826440 |
Blair McBride | Bug 1012526 - UITour.jsm only registers with UITelemetry when it's lazily imported on-demand. r=mconley a=lsblakk - b19932497b46 |
Randell Jesup | Bug 921622 - AudioStream rework. r=padenot, a=lsblakk - 3090db8c413f |
Mats Palmgren | Bug 1007065 - Don't apply the special -moz-hidden-unscrollable clipping on nsTextControlFrame since it always has an anonymous scroll frame that deals with overflow. r=roc, a=lsblakk - 36df173cb6a2 |
Margaret Leibovic | Bug 1009473 - Remove padding around list item images. r=lucasr, a=lsblakk - 841a1b085b5b |
Margaret Leibovic | Bug 1009473 - Use lighter gray for item descriptions. r=lucasr, a=lsblakk - 82c33d14844a |
Asaf Romano | Bug 1003839 - Live bookmark is still created despite cancelling the subscription. r=mak, a=lsblakk - 9b9c4281ccb2 |
Nicolas B. Pierron | Bug 1013922 - Avoid flattenning strings after each concatenation. r=jorendorff, a=lsblakk - af6bb6bacb0e |
Myk Melez | Bug 991394 - Actually rename profiles.ini sections when removing profile. r=mfinkle, a=lsblakk - eac674ed7cfe |
Ted Mielczarek | Bug 1011859 - Order gamepad axes properly. r=jimm, a=sledru - 0bcc74404878 |
Benjamin Bouvier | Bug 1010747 - Part 1: Cleanups and factor out float32 specialization for unary instructions returning int32. r=jandem, a=sledru - 586ed41fa2d1 |
Benjamin Bouvier | Bug 1010747 - Part 2: Implement Ceil (floating-point) -> int32 in Ion. r=sunfish, r=mjrosenb, a=sledru - 80950d72bd71 |
Benjamin Bouvier | Bug 1010747 - Part 3: Factor out floating-point conversion to int32 and bailout code. r=sunfish, a=sledru - 0db12290df12 |
Ryan VanderMeulen | Backed out changesets 0db12290df12, 80950d72bd71, and 586ed41fa2d1 (Bug 1010747) for landing without approval. - a89aa1e3e367 |
Benjamin Bouvier | Bug 1010747 - Don't inline Ceil when input is a FP value and output is an Int32. r=jandem, a=sledru - d3f2e54cf39c |
Previous changelogs:
Original post blogged on b2evolution.
http://sylvestre.ledru.info/blog/2014/05/27/changes-firefox-30-beta6-to-beta7
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
http://globau.wordpress.com/2014/05/27/happy-bmo-push-day-95/
|
Mitchell Baker: Panel on Internet Governance Mechanisms |
Internet Governance has become a much more active topic of discussion recently, as I described a bit in this previous post on NetMundial. As part of the Internet Governance activities of the last 8 or 9 months, I have participated as a member of the Panel on Global Internet Cooperation and Governance Mechanisms. This panel was convened in late 2013 by ICANN and the World Economic Forum (WEF), chaired by President Toomas Ilves of Estonia, vice-chaired by Vint Cerf, and infused by Fadi Chehadé, the president of ICANN, and supported by the Annenberg Retreat at Sunnylands. The Panel has just completed its work, releasing its final report on how to evolve the Internet Governance ecosystem. The press release is here. There are also comments by some of the Panelists, which came out of interviews done with the Panelists at the various panel meetings (the questions have been edited out of the final versions). In some cases, including mine, the comments cover a range of topics broader than the Panel.
In this post I’m going to set out a couple of thoughts about the process. It was a first for me to participate in a panel like this chaired by a sitting Head of State. This reflects the very high degree of commitment to an open internet by President Ilves, which is exciting to see in action. The report adopts the principles set out at NetMundial whole-heartedly. This was interesting: it is an example of people looking at a ton of their own work, effort, and pride of ownership, and happily setting it aside to participate in something bigger. The principles the Panel had put together were very similar to those of the NetMundial document. Not exactly the same, but Panel members clearly saw our core outlook and goals reflected in the NetMundial principles. And so we adopted them as our own, and view this as a huge success. I am proud of how we did this. It’s so easy to grow attached to one’s own work for its own sake. It’s easy to convince oneself that one’s own words or style or approach or work is subtly better, and should be maintained and promoted. It’s *so* easy to create fragmentation when unity is key.
I also am drawn to this paragraph in the preamble: “The Panel’s report is based on rough consensus. The views represented in this report do not necessarily reflect the views of the conveners or of all individual Panel members.” I know the latter sentence is not so uncommon. It’s not that rare to have people decide that the overall result is good and they are willing to attach their names to it, even if some part is imperfect or weak. So there’s not much new there. What struck me is the phrase “based on rough consensus.” A number of Panel members are very deeply involved in core Internet operations and protocols: the Panel included the head of ICANN, the head of the Internet Society (home of the IETF), and of course Vint Cerf. A number of the rest of us have been almost that deep in Internet technology for a good while. So the idea of “rough consensus and running code” runs deep in the mindset of many Panelists. I can’t speak for others of course, but I saw many of the traits that make rough consensus work in action here. We started with a bunch of diverse views on the panel, combined with a determination to move forward. We identified key areas of fundamental agreement.
– Multi-stakeholderism.
– Distributed governance, not a single centralized authority.
– Solutions and organizations that develop organically by people working on the problem.
– Building knowledge and capacity so that more people can participate knowledgeably.
“Rough consensus and running code” isn’t a panacea of course. Like almost every approach, there are plenty of ways in which it can be corrupted and derailed. When it does work however, it’s extremely powerful. It gets people aligned, moving in the same direction with the same general principles to a shared goal, and empowers people to go make things happen.
The final report itself sets out suggestions for building on the work of NetMundial. Some people undoubtedly want the final report to go much further in proposing solutions and processes to follow. Others were very focused on this document as a second stage building on NetMundial, encouraging more to come. I personally am of two minds. On the one hand I’d love to be able to point to concrete processes or distributed governance mechanisms and say “see what we’ve done and how much more we can do.” On the other hand, one big part of the goal is to describe the multistakeholder, distributed governance approach to citizens and governments and policy makers who are relatively new to Internet Governance. The report’s Introduction makes this explicit. Given this, pointing to new solutions might not be the right approach. Pointing the direction and figuring out new solutions together might be a much more long-lasting approach.
In any case, I am eager to see multi-stakeholder Internet Governance strengthened. And deeply interested in building a governance system where citizens and civil society organizations are valued participants and leaders.
https://blog.lizardwrangler.com/2014/05/26/panel-on-internet-governance-mechanisms/
|
Joshua Cranmer: Why email is hard, part 6: today's email security |
Email security is a rather wide-ranging topic, and one that I've wanted to cover for some time, well before several recent events that have made it come up in the wider public knowledge. There is no way I can hope to cover it in a single post (I think it would outpace even the length of my internationalization discussion), and there are definitely parts for which I am underqualified, as I am by no means an expert in cryptography. Instead, I will be discussing this over the course of several posts of which this is but the first; to ease up on the amount of background explanation, I will assume passing familiarity with cryptographic concepts like public keys, hash functions, as well as knowing what SSL and SSH are (though not necessarily how they work). If you don't have that knowledge, ask Wikipedia.
Before discussing how email security works, it is first necessary to ask what email security actually means. Unfortunately, the layman's interpretation is likely going to differ from the actual precise definition. Security is often treated by laymen as a boolean: something is either secure or insecure. The most prevalent model of security to people is SSL connections: these allow the establishment of a communication channel whose contents are secret to outside observers while also guaranteeing to the client the authenticity of the server. The server often then gets authenticity of the client via a more normal authentication scheme (i.e., the client sends a username and password). Thus there is, at the end, a channel that has both secrecy and authenticity [1]: channels with both of these are considered secure and channels without these are considered insecure [2].
In email, the situation becomes more difficult. Whereas an SSL connection is between a client and a server, the architecture of email is such that email providers must be considered as distinct entities from end users. In addition, messages can be sent from one person to multiple parties. Thus secure email is a more complex undertaking than just porting relevant details of SSL. There are two major cryptographic implementations of secure email [3]: S/MIME and PGP. In terms of implementation, they are basically the same [4], although PGP has an extra mode which wraps general ASCII (known as "ASCII-armor"), which I have been led to believe is less recommended these days. Since I know the S/MIME specifications better, I'll refer specifically to how S/MIME works.
S/MIME defines two main MIME types: multipart/signed, which contains the message text as a subpart followed by data indicating the cryptographic signature, and application/pkcs7-mime, which contains an encrypted MIME part. The important things to note about this delineation are that only the body data is encrypted [5], that it's theoretically possible to encrypt only part of a message's body, and that the signing and encryption constitute different steps. These factors combine to make for a potentially infuriating UI setup.
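The multipart/signed layout described above can be sketched with Python's standard email package. This builds only the MIME structure; the signature subpart holds a placeholder byte string rather than a real DER-encoded PKCS#7 blob, and no actual cryptography is performed:

```python
# Structural sketch of an S/MIME multipart/signed message using the
# stdlib email package. The signature bytes are a placeholder, NOT a
# real PKCS#7 signature.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# First subpart: the message body that gets signed.
body = MIMEText("Hello, signed world.")

# Second subpart: the detached signature (placeholder bytes here).
sig = MIMEApplication(
    b"placeholder-signature-bytes",
    _subtype="pkcs7-signature",
    name="smime.p7s",
)

# The multipart/signed container ties the two together; protocol and
# micalg are Content-Type parameters a verifier inspects.
signed = MIMEMultipart(
    "signed",
    protocol="application/pkcs7-signature",
    micalg="sha-256",
)
signed.attach(body)
signed.attach(sig)

print(signed.get_content_type())  # multipart/signed
```

A real client would compute the signature over the canonical form of the first subpart and drop the resulting CMS blob in place of the placeholder.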
How does S/MIME tackle the challenges of encrypting email? First, rather than encrypting using recipients' public keys, the message is encrypted with a symmetric key. This symmetric key is then encrypted with each of the recipients' keys and then attached to the message. Second, by only signing or encrypting the body of the message, the transit headers are kept intact for the mail system to retain its ability to route, process, and deliver the message. The body is supposed to be prepared in the "safest" form before transit to avoid intermediate routers munging the contents. Finally, to actually ascertain what the recipients' public keys are, clients typically passively pull the information from signed emails. LDAP, unsurprisingly, contains an entry for a user's public key certificate, which could be useful in large enterprise deployments. There is also work ongoing right now to publish keys via DNS and DANE.
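The hybrid scheme in the first step can be illustrated with a stdlib-only toy. Everything here is invented for illustration: the hash-based XOR stream cipher, the recipient names, and the key handling. Real S/MIME encrypts the body with a cipher like AES and wraps the session key with each recipient's RSA or ECDH key; only the shape of the scheme is the same.

```python
# Toy sketch (NOT real cryptography) of S/MIME's hybrid encryption:
# the body is encrypted once with a random session key, and that key
# is wrapped separately for each recipient.
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Derive a keystream by hashing key||counter, then XOR with data.
    # XOR makes this its own inverse, so the same call decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def encrypt_for(recipients: dict, body: bytes):
    session_key = secrets.token_bytes(32)        # one symmetric key
    ciphertext = xor_stream(session_key, body)   # body encrypted once
    wrapped = {                                  # key wrapped per recipient
        name: xor_stream(recipient_key, session_key)
        for name, recipient_key in recipients.items()
    }
    return ciphertext, wrapped

def decrypt_as(name: str, my_key: bytes, ciphertext: bytes, wrapped: dict):
    session_key = xor_stream(my_key, wrapped[name])  # unwrap my copy
    return xor_stream(session_key, ciphertext)       # then decrypt body

recipients = {"alice": secrets.token_bytes(32), "bob": secrets.token_bytes(32)}
ct, wrapped = encrypt_for(recipients, b"meet at noon")
print(decrypt_as("bob", recipients["bob"], ct, wrapped))  # b'meet at noon'
```

The point of the shape is efficiency: the (possibly large) body is encrypted once, and only the small session key is processed per recipient.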
I mentioned before that S/MIME's use can present some interesting UI design decisions. I ended up actually testing some common email clients on how they handled S/MIME messages: Thunderbird, Apple Mail, Outlook [6], and Evolution. In my attempts to create a surreptitious signed part to confuse the UI, Outlook decided that the message had no body at all, and Thunderbird decided to ignore all indication of the existence of said part. Apple Mail managed to claim the message was signed in one of these scenarios, and Evolution took the cake by always agreeing that the message was signed [7]. It didn't even bother questioning the signature if the certificate's identity disagreed with the easily-spoofable From address. I was actually surprised by how well these clients did in my tests—I expected far more confusion among them, particularly since the will to maintain S/MIME has clearly been relatively low, judging by poor support for "new" features such as triple-wrapping or header protection.
Another fault of S/MIME's design is that it rests on the mistaken belief that composing a signing step and an encryption step is equivalent in strength to a simultaneous sign-and-encrypt. Another page describes this in far better detail than I have room to; note that this flaw is fixed via triple-wrapping (which has relatively poor support). This adds yet more UI burden: how to adequately describe all the minutiae of these differing security guarantees. Considering that users already have a hard time even understanding that just because a message says it's from example@isp.invalid doesn't actually mean it's from example@isp.invalid, trying to develop UI that both adequately expresses the security issues and is understandable to end-users is an extreme challenge.
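One consequence of composing the two steps is often called "surreptitious forwarding". A toy model makes the problem concrete; here an HMAC stands in for a signature, a plain dict stands in for the encryption envelope, and all names and secrets are invented for illustration:

```python
# Toy demonstration (NOT real cryptography) of the sign-then-encrypt
# flaw: a valid signature says nothing about the intended recipient,
# because only the (removable) encryption layer names the recipient.
import hashlib
import hmac

def sign(secret: bytes, body: bytes) -> dict:
    # Stand-in for signing: an HMAC keyed by the signer's secret.
    return {"body": body, "signer": "alice",
            "sig": hmac.new(secret, body, hashlib.sha256).hexdigest()}

def encrypt(recipient: str, signed_msg: dict) -> dict:
    # Stand-in for encryption: an envelope naming the recipient.
    # Nothing inside the signed part says who it was meant for.
    return {"to": recipient, "payload": signed_msg}

alice_secret = b"alice-signing-secret"
msg = encrypt("bob", sign(alice_secret, b"I agree to the deal"))

# Bob decrypts, then re-wraps Alice's *still valid* signed part for
# Charlie. Charlie verifies Alice's signature and wrongly concludes
# that Alice sent this message to him.
forwarded = encrypt("charlie", msg["payload"])
expected = hmac.new(alice_secret, forwarded["payload"]["body"],
                    hashlib.sha256).hexdigest()
print(forwarded["to"], forwarded["payload"]["sig"] == expected)  # charlie True
```

Triple-wrapping (sign, encrypt, sign again) counters this by binding an outer signature around the envelope, at the cost of the UI complexity discussed above.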
What we have in S/MIME (and PGP) is a system that allows for strong guarantees, if certain conditions are met, yet is also vulnerable to breaches of security if the message handling subsystems are poorly designed. Hopefully this is a sufficient guide to the technical impacts of secure email in the email world. My next post will discuss the most critical component of secure email: the trust model. After that, I will discuss why secure email has seen poor uptake and other relevant concerns on the future of email security.
[1] This is a bit of a lie: a channel that does secrecy and authentication at different times isn't as secure as one that does them at the same time.
[2] It is worth noting that authenticity is, in many respects, necessary to achieve secrecy.
[3] This, too, is a bit of a lie. More on this in a subsequent post.
[4] I'm very aware that S/MIME and PGP use radically different trust models. Trust models will be covered later.
[5] S/MIME 3.0 did add a provision stating that if the signed/encrypted part is a message/rfc822 part, the headers of that part should override the outer message's headers. However, I am not aware of a major email client that actually handles these kind of messages gracefully.
[6] Actually, I tested Windows Live Mail instead of Outlook, but given the presence of an official MIME-to-Microsoft's-internal-message-format document which seems to agree with what Windows Live Mail was doing, I figure their output would be identical.
[7] On a more careful examination after the fact, it appears that Evolution may have tried to indicate signedness on a part-by-part basis, but the UI was muddled enough that ordinary users would be easily confused.
http://quetzalcoatal.blogspot.com/2014/05/why-email-is-hard-part-6-todays-email.html
|