
Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/


You can add any RSS source (including a LiveJournal journal) to your friends feed on the syndication page.

Source: http://planet.mozilla.org/.
This journal is generated from the open RSS feed at http://planet.mozilla.org/rss20.xml and is updated whenever that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact information page.


Adam Lofting: Getting Bicho Running as a process on Heroku with a Scheduler

Thursday, May 29, 2014, 15:49

By Félicien Victor Joseph Rops (Belgium, Namur, 1833-1898) [Public domain], via Wikimedia Commons

“Ou la lecture du grimoire”

For our almost complete MoFo Interim Dashboard, I’m planning to use an issue tracker parsing tool called Bicho to work out how many people are involved in the Webmaker project in Bugzilla. Bicho is part of a suite of tools called Metrics Grimoire, which I’ll explore in more detail in the near future. When combined with vizGrimoire, you can generate interesting things like this, which are very closely related to (but not exactly solving the same challenge as) our own contribution tracking efforts.

I recently installed a local copy of Bicho, and ran this against some products on Bugzilla to test it out. It generates a nicely structured relational database including the things I want to count and feed into our contributor numbers.

This morning I got this running on Heroku, which means it can run periodically and update a hosted DB, which can then feed numbers into our dashboard.

This was a bit of trial and error for me, as all the work I’ve done with Python was within Google App Engine’s setup and my use of Heroku has been for Node apps, so these notes are to help me out some time in the future when I look back at this.

Getting this working on Heroku

$ pip freeze

generates a list of the requirements from your working local env, e.g.

BeautifulSoup==3.2.1
MySQL-python==1.2.5
feedparser==5.1.3
python-dateutil==2.2
six==1.6.1
storm==0.20
wsgiref==0.1.2

Copy this into a requirements.txt file in the root of your project.

But remove the line Bicho==0.9 (otherwise pip tries to install Bicho itself from PyPI, which fails).
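The filtering step can be scripted so you don’t have to edit requirements.txt by hand each time. A minimal sketch; the function name is mine, not part of Bicho, pip, or Heroku:

```python
# Hypothetical helper: filter `pip freeze` output so requirements.txt
# doesn't pin Bicho itself, which pip cannot install from PyPI.
def strip_bicho(freeze_output):
    """Return the freeze output without any Bicho== line."""
    lines = freeze_output.splitlines()
    return "\n".join(line for line in lines if not line.startswith("Bicho=="))
```

You could pipe `pip freeze` through this before writing requirements.txt.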

Heroku’s notes on specifying dependencies.

You can now push this to Heroku.

Then, I ran:

$ heroku run python setup.py

But I’m actually not sure if that was required.

Then you can run Bicho remotely via heroku run commands:

$ heroku run python bin/bicho --db-user-out=yourdbusername --db-password-out=yourdbuserpassword --db-database-out=yourdbdatabase --db-hostname-out=yourdbhostname -d 5 -b bg --backend-user 'abugzilla@exampleuser.com' --backend-password 'bugzillapasswordexample' -u 'https://bugzillaurl.com?etc'

As a general precaution for anything like this, don’t use a user account that has any special privileges. I create duplicate logins that have the same level of access available to any member of the public.

Once you’ve got a command that works here, cancel the running script as it might have thousands of issues left to process.

Then set up a scheduler: https://devcenter.heroku.com/articles/scheduler

$ heroku addons:add scheduler:standard
$ heroku addons:open scheduler

Copy your working command into the scheduler, just without the ‘heroku run’ part:

python bin/bicho --db-user-out=yourdbusername --db-password-out=yourdbuserpassword --db-database-out=yourdbdatabase --db-hostname-out=yourdbhostname -d 5 -b bg --backend-user 'abugzilla@exampleuser.com' --backend-password 'bugzillapasswordexample' -u 'https://bugzillaurl.com?etc'

If you set this to run every 10 minutes, the process will cycle and get killed periodically, but the logs usefully show you how the import is progressing.

I’m generally happy with this as a solution for counting contributors in Webmaker’s issue tracking history, but would need to work on some speed issues if this was of interest across Mozilla projects.

Currently, this is importing about 400 issues an hour, which would be problematic for processing the 1,000,000+ bugs in bugzilla.mozilla.org. But that’s not a problem to solve right now. And not necessarily the way you’d want to do it either.
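Back-of-the-envelope arithmetic makes the concern concrete (my extrapolation from the 400/hour figure, not a measurement from the post):

```python
# Extrapolating the observed import rate to all of bugzilla.mozilla.org.
issues_per_hour = 400
total_bugs = 1_000_000

hours = total_bugs / issues_per_hour  # 2500 hours of importing
days = hours / 24                     # roughly 104 days
```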

http://feedproxy.google.com/~r/adamlofting/blog/~3/tCFTQVu3SZs/


Asa Dotzler: Flame – Firefox OS Reference Phone

Thursday, May 29, 2014, 01:00

Attending the Mozilla Summit 2013 and talking with our community about the exciting future of Firefox OS, the one concern I heard voiced most often was the difficulty of participating in Firefox OS. It was simply out of reach to most people I talked to.

No doubt, Firefox OS is an open source project. The code’s been there since before day 1. But access to source code does not a successful open source project make. To many I’ve spoken with, Firefox OS has felt far less participatory than Firefox the browser. The primary reasons for this, IMO, are the lack of widely available Firefox OS hardware and of regular Firefox OS testing binaries.

With Firefox the browser, anyone anywhere in the world can download and get updates for a Mozilla-hosted binary of Firefox on a daily basis. They can download the active development “Nightly” build to see the changes that landed in the browser since yesterday. They can also download and get updates for the more stable “Aurora” and “Beta” channels. And because Firefox runs on Mac, Windows, and Linux, most people had no problem trying it out on their existing computers.

Firefox the browser is easy to try out. Firefox OS is not.

With Firefox OS, only Mozilla employees or employees of Mozilla partners can download development builds and get updates. This is because there are a few pieces down in the phone’s software stack that are not Mozilla’s code and that Mozilla doesn’t have license to distribute. And even if our community could download development builds of Firefox OS, the hardware to put them on has been very limited. If you’re not in a region with a Mozilla partner shipping Firefox OS phones, your options are very limited.

Mozilla tried to hack around the problems a couple of times over the last year, with limited success.

Flame is Mozilla’s investment in solving those problems for real.

Mozilla partnered with a company called Thundersoft to design, build, and update a phone that contains all of the hardware we’re targeting in the next year or so. Thundersoft has already made and delivered 2,000 Flame phones to Mozilla, and we are rolling those out now to the core Firefox OS teams and community. We’ve got the retail site up, which is selling an additional 5,000 Flame phones, and we’re hard at work on making nightly builds available for these devices.

The Flame phone won’t fix everything, but it should go a long way towards empowering our community to participate in Firefox OS and to generate the kinds of community impact on scale that we’ve seen in our Firefox browser community for years.

http://asadotzler.com/2014/05/28/flame-firefox-os-reference-phone/


Armen Zambrano: How to create local buildbot slaves

Wednesday, May 28, 2014, 23:05

For the longest time I have wished for *some* documentation on how to set up a buildbot slave outside of the Release Engineering setup, without needing to go through the Puppet manifests.

In a previous post, I documented how to set up a production buildbot master.
In this post, I'm only covering the slave side of the setup.

Install buildslave

virtualenv ~/venvs/buildbot-slave
source ~/venvs/buildbot-slave/bin/activate
pip install zope.interface==3.6.1
pip install buildbot-slave==0.8.4-pre-moz2 --find-links http://pypi.pub.build.mozilla.org/pub
pip install Twisted==10.2.0
pip install simplejson==2.1.3
NOTE: You can figure out what to install by looking in here: http://hg.mozilla.org/build/puppet/file/ad32888ce123/modules/buildslave/manifests/install/version.pp#l19

Create the slaves

NOTE: I already have build and test masters on my localhost with ports 9000 and 9001 respectively.
buildslave create-slave /builds/build_slave localhost:9000 bld-linux64-ix-060 pass
buildslave create-slave /builds/test_slave localhost:9001 tst-linux64-ec2-001 pass

Start the slaves

On a normal day, you can do this to start your slaves up:
 source ~/venvs/buildbot-slave/bin/activate
 buildslave start /builds/build_slave
 buildslave start /builds/test_slave


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/6KVJH4eDwFI/how-to-create-local-buildbot-slaves.html


Marco Bonardo: Bookmarks backups respin

Wednesday, May 28, 2014, 21:11

Part of the performance improvements we planned for Places, the history, bookmarking and tagging subsystem of Firefox, involved changes to the way we generate bookmarks backups.

As you may know, Firefox stores a backup of your bookmarks into the profile folder almost every day, as a JSON file. The process, so far, had various issues:

  1. It was completely synchronous; I/O on the main thread is evil.
  2. It was doing a lot more I/O than needed.
  3. It was using expensive live-updating Places results instead of a static snapshot.
  4. Backups took up quite some space in the profile folder.
  5. We were creating useless duplicate backups.
  6. We were slowing down Firefox shutdown.
  7. The code was old and ugly.

The first step was to reorganize the code and APIs, Raymond Lee and Andres Hernandez took care of most of this part. Most of the existing code was converted to use the new async tools (like Task, Promises and Sqlite.jsm) and old synchronous APIs were deprecated with loud console warnings.

The second step was dedicated to some user-facing improvements. We must ensure the user can revert to the best possible status, but it was basically impossible to distinguish backups created before a corruption from the ones created after it, so we added the size and bookmarks count to each backup. They are now visible in the Library / Restore menu.

Once we had a complete async API exposed and all of the internal consumers were converted to it, we could start rewriting the internals. We replaced the expensive Places result with a direct async SQL query reading the whole bookmarks tree at once. Raymond started working on this, I completed and landed it and, a few days ago, Asaf Romano further improved this new API. It is much faster than before (initial measurements have shown a 10x improvement) and it's also off the main-thread.

Along the process I also removed the backups-on-shutdown code, in favor of an idle-only behavior. Before this change we were trying to backup on an idle of 15 minutes, if we could not find a long enough interval we were enforcing a backup on shutdown. This means, in some cases, we were delaying the browser shutdown by seconds. Currently we look for an 8 minutes idle, if after 3 days we could not find a long enough interval, we cut the idle interval to 4 minutes (note: we started from a larger time and just recently tweaked it based on telemetry data).

At that point we had an async backups system, a little bit more user-friendly and doing less I/O. But we still had some issues to resolve.

First, we added an md5 hash to each backup, representing its contents, so we can avoid replacing a still valid backup, thus providing more meaningful backups to the user and reducing I/O considerably.
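The dedupe idea can be sketched outside Firefox too. A hypothetical stand-alone sketch in Python (the real implementation lives in Places’ JavaScript; these function names are mine):

```python
import hashlib
import json

def backup_hash(bookmarks_tree):
    # Serialize deterministically so identical trees always hash identically.
    data = json.dumps(bookmarks_tree, sort_keys=True).encode("utf-8")
    return hashlib.md5(data).hexdigest()

def needs_new_backup(bookmarks_tree, last_backup_hash):
    # Skip the write entirely when nothing changed since the last backup.
    return backup_hash(bookmarks_tree) != last_backup_hash
```

Comparing one small hash replaces rewriting (and later re-reading) a whole JSON file when nothing changed.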

Then the only remaining piece of work was to reduce the footprint of backups in the profile folder, both for space and I/O reasons. Luckily we have an awesome community! Althaf Hameez, a community member, volunteered to help us completing this work. Bug 818587 landed recently, providing lz4-like compression to the backups: automatic backups are compressed and have .jsonlz4 extension, while manual backups are still plain-text, to allow sharing them easily with third party software or previous versions.

Finally, I want to note that we are now re-using most of the changed code for bookmarks.html files as well. While these are no longer our main export format, we still support them for default bookmarks import and for bookmarks exchange with third party services or other browsers. So we obtained the same nice perf improvements for them.

Apart from some minor regressions, that are currently being worked on by Althaf himself, who kindly accepted to help us further, we are at the end of this long trip. If you want to read some more gory details about the path that brought us here, you can sneak into the dependency tree of bug 818399. If you find any bugs related to bookmark backups, please file a bug in Toolkit / Places.

http://blog.bonardo.net/2014/05/28/bookmarks-backups-redesign


Pete Moore: Weekly review 2014-05-28

Wednesday, May 28, 2014, 18:31

Have been working on:

Bug 1013961 – Get catlee’s cool fabric actions in production for e.g. managing pulse queues on buildbot masters

Bug 1013885 – inband1.r202-4.console.scl3.mozilla.net is DOWN :PING CRITICAL - Packet loss = 100%

Bug 1011958 – ICS build bustage in out/host/linux-x86/obj/EXECUTABLES/triangleCM_intermediates/triangleCM

Bug 1010173 – test root internal variable on devices (SUTAgentAndroid.sTestRoot) should not be set as an error message

Bug 1009880 – (too-many-builders) linux64 test master reconfigs are extremely slow and masters stop accepting new jobs mid-reconfig

Bug 1004570 – Jacuzzi for Update Verify, so we can use a persistent cache across jobs

Bug 994905 – tst-linux64-ec2-dminor problem tracking

Bug 978928 – Reconfigs should be automatic, and scheduled via a cron job

Bug 947202 – (bld-lion-r5-086) bld-lion-r5-086 problem tracking

Bug 937732 – Tracker bug: HG local disk migration

Bug 913870 – Intermittent panda “Dying due to failing verification”

Bug 910745 – Third party repositories listed in b2g-manifest should always reference a tag/revision

Bug 905742 – Provide B2G Emulator builds for Darwin x86

Bug 847640 – db-based mapper on web cluster

Bug 825889 – (bld-lion-r5-087) bld-lion-r5-087 problem tracking

Bug 807230 – Intermittent DMError: Automation Error: Timeout in command {ls,ps,mkdr,rmdir,activity} | DMError: Automation Error: Timeout in command isdir /mnt/sdcard/tests/logs (or DMError: Automation Error: Timeout in command isdir /mnt/sdcard/tests/reftest)

Bug 803087 – (bld-centos6-hp-019) bld-centos6-hp-019 problem tracking

Bug 771560 – (tegra-073) tegra-073 problem tracking

Bug 751962 – (tegra-028) [staging] tegra-028 problem tracking

Bug 746071 – (tegra-048) tegra-048 problem tracking

http://petemoore.tumblr.com/post/87103050318


Zack Weinberg: a small dispatch from the coalface

Wednesday, May 28, 2014, 17:34
category                        count      %
total                           5 838 383  100.000
ok                              2 212 565  37.897
ok (redirected)                 1 999 341  34.245
network or protocol error       798 231    13.672
timeout                         412 759    7.070
hostname not found              166 623    2.854
page not found (404/410)        110 241    1.888
forbidden (403)                 75 054     1.286
service unavailable (503)       18 648     0.319
server error (500)              15 150     0.259
bad request (400)               14 397     0.247
authentication required (401)   9 199      0.158
redirection loop                2 972      0.051
proxy error (502/504/52x)       1 845      0.032
other HTTP response             1 010      0.017
crawler failure                 329        0.006
syntactically invalid URL       19         0.000

Sorry about the non-tabular figures.
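For what it’s worth, the percentage column is just count / total × 100; a quick check of a couple of rows (my arithmetic, not from the post):

```python
# Verify the percentage column against the raw counts.
total = 5_838_383

def pct(count):
    return round(count / total * 100, 3)
```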

https://www.owlfolio.org/research/a-small-dispatch-from-the-coalface/


Adam Lofting: Are we on track with our 2014 contributor goals?

Wednesday, May 28, 2014, 13:48

I presented a version of this on the Mozilla Foundation staff call yesterday, and thought it’s worth a write-up for those who weren’t on the call and those in MoCo working on related things.

Some Context:

One of the cross Mozilla goals this year is to “10X” the number of active contributors, with a longer term goal of growing to a million Mozillians.

When the 10X goal was set we weren’t really sure what X was, for valid reasons; defining contribution is as much art as it is science, and the work plans for this year include building the tools to measure this in a systematic way. The goals justify the tools, and vice versa. Chicken and egg.

2,000 contributors were invited to the summit, so the target was set at 20k active contributors shared between MoCo and MoFo. MoFo have been working to a target of 10k contributors but in practice this isn’t going to be a clean 50/50 split and there will be overlap in contributors across teams, projects and MoFo/MoCo. For example, 10k MoCo contributors + 10k MoFo contributors could = 19k Mozilla contributors.

When I joined in January, each of the MoFo teams did some (slightly tedious) manual counting and estimated their contributor numbers for 2013, and we added these up to a theoretical 5,600 contributors. This was our baseline number. Useful to an order of magnitude, but not precise.

This 5,600 number suggests that 10k contributors was quite far off 10X contributors based on these January estimates, but 10k is still going to be a challenging goal. At 10X we’d have been aiming for 50k+ contributors.

From the data that’s emerging, 10k active contributors to MoFo feels like a sane but stretching target.

With the recent forming of the Badge Alliance, some MoFo goals are now Badge Alliance goals, and the same goes for counting people contributing to parts of the Open Badge ecosystem. As a result, our theoretical 5,600 contributor number got smaller. It’s now 4,200.

So 4,200 is where we assumed we started this year, but we haven’t proved this yet. And realizing this measurement has been our priority metrics project this year.

How are we doing so far?

We’ve been automating ways to count these ‘theoretical’ contributors, and feeding them into our dashboard.

But to date, as we’ve looked at the dashboard and the provable number was 1,000 or 2,000 or so, we would then say “but the real number is actually closer to 5,000”. Which means the dashboard hasn’t been very useful yet, as the theoretical number always trumped the provable but incomplete number.

This will change in the next few weeks.

We’re now counting, nearly live, all of those theoretical pots of contribution.

And the dashboard is at 2,800.

Once we add the Webmaker mentors who complete their training this year, and anything else that goes into the ad-hoc contribution logger, we’re basically at our real comparison point to that theoretical number, and we can drop the ‘theoretical’ bit.

If there’s a thousand mentors and another four hundred contributors added to the ad-hoc logger, our theoretical estimate will be remarkably close to reality. Except that it’s six months behind where we thought it would be.

We’re getting close to that 4,200, but we expected (albeit pretty loosely) to be there in January.

This either means that:

(A) the growth shown on the graph to-date is an artifact of missing historical data, and we’re actually trending pretty flat.

(B) our 2013 estimates were too high and we started this year with fewer contributors than we thought, but we’ve been growing to date.

As we don’t have time-stamped historical data for some of these things, we’re not going to know which for sure. But either way, we now need to increase the rate at which we on-board new contributors to hit 10k by the end of the year.

There are plans in place for growing contribution numbers, but this is going to be active work.

Whether that’s converting new Webmaker users who join us through Maker Party, reducing barriers to contributing code, or actively going out and asking people if they want to contribute, growing that contributor number is going to be a combination of good processes and marketing.

Also to note

I’ll be making this MoFo total number smaller by X% when we integrate the data into a single location and de-dupe people across these activities. But we don’t know what X% is yet. That’s just something to be aware of.

In relation to the point that there isn’t a clean MoCo/MoFo split in where people contribute, we’re now connecting up the systems and processes much more directly. We’ll have more to share on this in the coming weeks.

Tracking the status of the dashboard

http://feedproxy.google.com/~r/adamlofting/blog/~3/lgHY5osdpTw/


Alex Vincent: Compacting XUL interfaces?

Wednesday, May 28, 2014, 12:43

For my day job, I work at a startup, basically as an expert in Mozilla technologies.  I love what I do, too.  But whenever I do user-interface work, I frequently run into a simple problem:  screen real estate.

Case in point, my latest work with XML entities and entity references on the Verbosio XML editor project.  (I can’t really talk about details of my work-related code here, but I can talk about the pet project.)  The demo panel for this, which doubles as a laboratory for my experiments, has four major sections.  The upper left corner holds a CodeMirror instance for one DTD document.  The upper right corner holds another CodeMirror instance, for a second DTD.  The middle horizontal section holds a third CodeMirror instance, for the XML document that might load the DTD documents.  Each of these has some ordinary XUL buttons on the right edge and a menulist, to help me control the individual sections.  In the bottom third, I have a XUL deck which can toggle between an iframe and a XUL tree showing the document object model of the XML I’ve parsed.  To the right of this XUL tree, I plan on adding a bit more, for showing the entities defined in the doctype or the entity references on a node.

[Screenshot: EntityParsePreview]

All of this UI lives inside a tabbox.  As you can see, it’s already a little crowded, though fortunately, it’s almost complete in terms of features I need.

I can’t do a fair comparison against the Mozilla Firefox user-interface; the main windows don’t have textboxes for source code or trees for DOM views. So their controls’ relative sizes don’t come close to mine:  they’re much flatter.

The built-in developer tools, though, do have an elegance to them, and are a fair comparison.  The right side panel, showing variables and events, can collapse away (and the animation’s pretty nice, too).  The left side panel has a listbox (I think) of scripts to choose from, and when you select one (either in call stack or in sources), the equivalent source code appears in the center.  Plus, they have some really tiny icon buttons in the UI, much smaller than the standard XUL buttons I use.  The devtools UI gives you basically what you need and otherwise tries to get out of your way.

Dear lazyweb of Mozilla UI specialists:  Can you point me to a developer.mozilla.org document with some guidelines for efficiently using the screen real estate?  My user-interface works, but I need some tips & tricks for making the most of it.  It could be simple stuff like shrinking buttons, or it could be creating new XBL bindings to masterfully present common ideas together.  I’m not willing to go to HTML5's Canvas for this.  But my experience has largely been in components and JavaScript, not in crafting UI’s…

Or maybe it’s time for me to re-read Jenifer Tidwell’s excellent book from 2006, “Designing Interfaces”.   (I have the first edition.)

https://alexvincent.us/blog/?p=822


Daniel Stenberg: Crashed and recovered in no time

Wednesday, May 28, 2014, 02:25

Working from home, even writing software from home, my computer setup is pretty crucial for a productive work day.

Yesterday morning after I had sat down with my coffee and started to work on my latest patch iteration I noticed that some disk operations seemed to be very slow. I looked around and then suddenly an ‘ls’ of a directory returned an error!

I checked the system logs and saw them filling up with error messages identifying problems with a hard drive. Very quickly I identified the drive as the bigger one (I have one SSD and one much larger HDD). Luckily, that’s the one I mostly store documents, pictures and videos on, and I back that thing up every night. This disk is not very old and I’ve never experienced this sort of disk crash before, not even with disks I’ve used for many years longer than this one…

I ripped the thing out, booted up again and could still work, since my source code and OS are on the SSD. I ordered a new drive at once. Phew.

Tuesday morning I noticed that for some unexplainable reason I had my /var partition on the dead drive (and not backed up). That turned out to be a bit inconvenient since now my Debian Linux had no idea which packages I had installed and apt-get and dpkg were all crippled to death.

I did some googling and, as my laptop is also a Debian sid install, I managed to restore it pretty swiftly by copying over data from there. At least the /var contents are now mostly back to where they were before.

On Tuesday midday, some 26 hours after I ripped out the disk, my doorbell bing-bonged and the delivery guy handed me a box with a new and shiny 3 TB drive. A couple of hours ago I inserted it, partitioned it, read back a couple of hundred gigabytes of backup, put the backup job back in cron again and … yeah, I think I’m basically back to where I was before it all went south.

All in all: saved by the backup. Not many tears. Phew this time.

http://daniel.haxx.se/blog/2014/05/28/crashed-and-recovered-in-no-time/


Jess Klein: Dear Massimo

Wednesday, May 28, 2014, 00:50
Dear Massimo,

I met you when I was 23 and working as a Curatorial Assistant at the Museum of Arts & Design. You and Lella came to work on the exhibition design for a show on the jewelry of Seaman Schepps. At the time, I had never worked with a real designer and I was in complete awe of your process. You were a great listener, and methodically took notes and drew sketches while we described to you our dreams for the exhibition. You had a mechanical pencil. You drew thick and distinctive lines, unquestionably bold.

Your notes were written in capitals, like an architect’s. You were stubborn, and your stubbornness pushed all of us around you out of our comfort zones. I remember sitting with you and Lella in the back of a yellow cab when we discussed the materials we would be using to create the pedestals to display jewelry. You said "lead". Lella said "MAAAAASSSSIMO, don't be ridiculous." You then described to us your vision of having the intricate jewelry stand out against the simplicity of the raw material. Your vision was measured, austere and had perfect symmetry. As someone who had to work with the contractors to acquire the lead and to actually handle it to build the cases, I was of course completely terrified. Isn't lead toxic? Like, could this perfect design decision of Massimo's actually kill me in the long run? We worked together and learned that wax detoxifies lead. We went to your studio (the first real designer's studio I had ever seen) and tested it out. You had an I-told-you-so look on your face, like a devilish child. And so, we made the cases:


You loved a good fight. One of my favorite things was listening to you, Lella, Yoshi Waterhouse and Beatriz Cifuentes completely disagree about design direction. Lella was your partner and you had a beautiful banter that inspired me to be adventurous, and own my questioning nature. 

At the time I wasn't sure what I wanted to do with my life. I was working in museums but found myself sketching in the galleries and taking side jobs just to draw. You saw me at lunch one day and asked me why I was wasting my talent and then you didn't stay around to hear an answer. Lella, of course, followed up with me and told me to try things out, to take in as much art and design as I could and that things will figure themselves out. You very much had the ask for forgiveness, not permission attitude, which I still to this day, admire.

I ended up working at the Rubin Museum of Art after that museum, and found myself constantly questioning and designing and thinking about my conversations with you. Eventually I built up enough courage to quit and went to design school. I learned why I should admire you even more, read your writing on design, watched you speak in Helvetica and looked at your wealth of work over time. But really, that's just the evidence. You showed me how to be a designer because you embodied it. You had a viewpoint, and a goal of helping people better understand the world they inhabit, through design. I am stylistically a different kind of designer, but you taught me to own my stubbornness and to have an opinion.

I will always admire you. We are lucky enough to see all the design interventions that you have left the world on a daily basis.  Thank you for being bold and stubborn. 

I already miss you.

Jess

Note: I learned that Massimo Vignelli passed away today. His son, Luca asked for all those for whom Vignelli was either an influence or an inspiration to write him a letter. This is mine.

http://jessicaklein.blogspot.com/2014/05/dear-massimo.html


Planet Mozilla Interns: Mihnea Dobrescu-Balaur: Caching For the Win

Tuesday, May 27, 2014, 21:14

It seems like every now and then, while working on something, I get reminded that no matter how much you optimize any given process, most of the time, caching will give better results - even with no (other) optimizations.

When you think about it, it's pretty obvious - not doing any work at all is better than doing "some" work. Nevertheless, we still look out for web framework performance, NoSQL access times and whatnot. The Disqus guys have a great blog post on how they used Varnish to scale their systems to huge numbers while using a "slow" Python-based stack.

I got bit by this recently while working on a school assignment. We had to solve the Expedia ranking problem, working with some CSV data totalling about 15 million entries. To save some time by not having to parse CSV all the time, I decided to take advantage of today's technologies and use MongoDB to store the entries. Its document-oriented, schemaless approach made sense for the data, since there were a lot of missing values.

Starting to work on the problem, I had to deal with a fairly lengthy feedback loop because, even with Mongo being webscale and all, processing took some time (I was using a laptop). To improve this, I started tweaking the interactions with the database, reduced the accessed data to the smallest subset possible and so on. I did not get very far - it was still too slow. Then I zoomed out a bit and realised that I had been doing it wrong from the start. I'm sure it's no surprise by now: I added a caching layer (in the form of memoization) and that yielded great results.
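The memoization layer described above can be sketched with Python's functools.lru_cache. The lookup function and the ids below are hypothetical stand-ins for the real MongoDB queries, just to show why repeated rows make caching pay off:

```python
from functools import lru_cache

calls = 0

def expensive_lookup(dest_id):
    # Stand-in for a slow MongoDB aggregation over the CSV-derived data.
    global calls
    calls += 1
    return dest_id * 2

@lru_cache(maxsize=None)
def destination_stats(dest_id):
    # The cache turns repeated lookups for the same id into dict hits.
    return expensive_lookup(dest_id)

for dest_id in [1, 2, 1, 1, 2]:  # repeated ids, as in the ranking rows
    destination_stats(dest_id)

print(calls)  # 2 - one real lookup per distinct id instead of five
```

With repetition as heavy as in per-row feature extraction, this kind of one-decorator change can shrink the feedback loop far more than query tuning.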

To sum up, when tackling a problem, don't disregard a simple solution for a shiny one. It might turn out to be just fool's gold.

http://www.mihneadb.net/post/caching-for-the-win


Wladimir Palant: Proxies breaking up SSL connections? Yes, all the time...

Tuesday, May 27, 2014, 19:45

Some months ago I was wondering why some Firefox installations appear to not support strong encryption. After analyzing the SSL handshakes on one of the filter download servers used by Adblock Plus, I am now mostly confident that the reason is proxy servers essentially conducting Man-in-the-Middle (MitM) attacks. Normally, a proxy server can only forward SSL data to its destination; due to encryption, it can neither modify nor read the data. MitM proxies however pose as the destination server, which allows them to manipulate the data in any way they like. For that they have to encrypt the communication with a certificate that is valid for the destination server; usually this happens by installing a new root certificate on the client’s computer.

I used ssldump to record 3294 SSL handshakes. Not a terribly large sample, yet it already contained lots of entries where the client’s SSL support didn’t match the user agent from the web server logs:

  • Seven corporate proxies seem to be breaking up SSL connections of the company employees.
  • A program to provide schools with Internet access apparently routes all traffic through a MitM proxy.
  • A university proxy snoops in the SSL traffic.
  • A government agency has to check all outgoing data, even when encrypted.
  • An ISP seems to route all customer traffic through a MitM proxy (not sure how they pulled it off but it’s China so everything is possible).
  • Lots of people use Google AppEngine as a MitM proxy, maybe without even knowing that.
  • Six users seem to be running MitM proxies on their computer, either intentionally or due to a malware infection.
  • Three requests associated with Android devices are strange: either the SSL support of these devices is very outdated despite Android 4.2/4.3 (even SSLv3 used in one case) or a local MitM proxy is running there.

If that sample is representative in any way, it would mean that roughly 0.5% of the internet users are behind a proxy server that will intercept their encrypted data. Why is that a problem?

  • 0.5% might not sound like much but if you apply that to the general Internet population then we are talking about millions of people. That’s millions of people who might be wrongly assuming that the lock icon next to the website address is protecting them.
  • These proxies create a single point of failure: an attacker doesn’t need to tediously infect computers one by one if he can simply control the proxy server and intercept all data there.
  • While all browsers updated their SSL support a while ago to mitigate issues like the BEAST attack, proxy servers didn’t. Only one of the proxy servers I found used TLS 1.1, the rest of them used TLS 1.0 and one (a corporate proxy protecting a large network) even SSLv3.
  • As a webmaster, you cannot just look at the browser versions of your visitors to decide what your webserver needs to support. For example, Security Labs recommends disabling SSLv3 support but it turns out that you might lock out more users than just those using Internet Explorer 6.

For reference, how did I recognize the browsers in the ssldump output? This turned out to be pretty simple, each browser has its very distinct list of supported ciphers that it sends to the server. Conveniently, that’s the very first packet that ssldump will record for a connection. For Firefox 29 it looks like this:

     ClientHello
        Version 3.3 
        cipher suites
        Unknown value 0xc02b
        Unknown value 0xc02f
        Unknown value 0xc00a
        Unknown value 0xc009
        Unknown value 0xc013
        Unknown value 0xc014
        Unknown value 0xc012
        Unknown value 0xc007
        Unknown value 0xc011
        TLS_DHE_RSA_WITH_AES_128_CBC_SHA
        TLS_DHE_DSS_WITH_AES_128_CBC_SHA
        Unknown value 0x45
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA
        TLS_DHE_DSS_WITH_AES_256_CBC_SHA
        Unknown value 0x88
        TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
        TLS_RSA_WITH_AES_128_CBC_SHA
        Unknown value 0x41
        TLS_RSA_WITH_AES_256_CBC_SHA
        Unknown value 0x84
        TLS_RSA_WITH_3DES_EDE_CBC_SHA
        TLS_RSA_WITH_RC4_128_SHA
        TLS_RSA_WITH_RC4_128_MD5

Chrome 35 sends a different list:

      ClientHello
        Version 3.3 
        cipher suites
        Unknown value 0xcc14
        Unknown value 0xcc13
        Unknown value 0xc02b
        Unknown value 0xc02f
        Unknown value 0x9e
        Unknown value 0xc00a
        Unknown value 0xc009
        Unknown value 0xc013
        Unknown value 0xc014
        Unknown value 0xc007
        Unknown value 0xc011
        TLS_DHE_RSA_WITH_AES_128_CBC_SHA
        TLS_DHE_DSS_WITH_AES_128_CBC_SHA
        TLS_DHE_RSA_WITH_AES_256_CBC_SHA
        Unknown value 0x9c
        TLS_RSA_WITH_AES_128_CBC_SHA
        TLS_RSA_WITH_AES_256_CBC_SHA
        TLS_RSA_WITH_3DES_EDE_CBC_SHA
        TLS_RSA_WITH_RC4_128_SHA
        TLS_RSA_WITH_RC4_128_MD5

Depending on the Firefox or Chrome version you might get a different list; the SSL support in the browsers has evolved. In fact, even the Chromium-based Opera and YandexBrowser tweak the SSL support and send a distinctly different list. Note “Version 3.3” above: this means TLS 1.2 support (“Version 3.0” is SSLv3 and “Version 3.1” stands for TLS 1.0). For comparison, here is the output for one of the proxies:

SSLv2 compatible client hello
  Version 3.1 
  cipher suites
  TLS_DHE_RSA_WITH_AES_256_CBC_SHA  
  TLS_DHE_DSS_WITH_AES_256_CBC_SHA  
  TLS_RSA_WITH_AES_256_CBC_SHA  
  TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA  
  TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA  
  TLS_RSA_WITH_3DES_EDE_CBC_SHA  
  SSL2_CK_3DES  
  TLS_DHE_RSA_WITH_AES_128_CBC_SHA  
  TLS_DHE_DSS_WITH_AES_128_CBC_SHA  
  TLS_RSA_WITH_AES_128_CBC_SHA  
  TLS_RSA_WITH_IDEA_CBC_SHA  
  SSL2_CK_IDEA  
  SSL2_CK_RC2  
  TLS_RSA_WITH_RC4_128_SHA  
  TLS_RSA_WITH_RC4_128_MD5  
  SSL2_CK_RC4  
  TLS_DHE_RSA_WITH_DES_CBC_SHA  
  TLS_DHE_DSS_WITH_DES_CBC_SHA  
  TLS_RSA_WITH_DES_CBC_SHA  
  SSL2_CK_DES  
  TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA  
  TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA  
  TLS_RSA_EXPORT_WITH_DES40_CBC_SHA  
  TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5  
  SSL2_CK_RC2_EXPORT40  
  TLS_RSA_EXPORT_WITH_RC4_40_MD5  
  SSL2_CK_RC4_EXPORT40  
  Unknown value 0xff

This particular proxy is still compatible with SSLv2, which is no longer considered secure. In fact, Mozilla disabled SSLv2 support starting with Firefox 8, which was released in November 2011. Other browser vendors did it years ago as well.
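The fingerprinting approach described above amounts to a lookup table keyed on the ordered cipher-suite list. A minimal sketch, using abbreviated prefixes of the Firefox 29 and Chrome 35 lists shown earlier (the labels and prefix length are illustrative, not part of the original analysis):

```python
# Map the leading cipher suites of a ClientHello (as printed by
# ssldump) to a client label; anything unrecognized is suspect.
FINGERPRINTS = {
    ("0xc02b", "0xc02f", "0xc00a", "0xc009"): "Firefox 29",
    ("0xcc14", "0xcc13", "0xc02b", "0xc02f"): "Chrome 35",
}

def identify(cipher_suites, prefix_len=4):
    key = tuple(cipher_suites[:prefix_len])
    return FINGERPRINTS.get(key, "unknown (possibly a MitM proxy)")

print(identify(["0xc02b", "0xc02f", "0xc00a", "0xc009", "0xc013"]))
# Firefox 29
```

A mismatch between this fingerprint and the User-Agent in the web server logs is exactly the signal that an intermediary rewrote the handshake.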

https://palant.de/2014/05/27/proxies-breaking-up-ssl-connection-yes-all-the-time


Staś Malolepszy: Pseudolocales for Firefox OS

Tuesday, May 27, 2014, 19:41

With bug 900182 fixed, it is now possible to enable pseudolocales in developer builds of Firefox OS. Pseudolocales are based on US English and are automatically generated according to strategies defined in build/l10n.js.
Presently, two pseudolocales can be enabled in custom Gaia builds:
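As a rough illustration of the kind of strategy build/l10n.js defines, one common pseudolocale transform is "accented English": ASCII letters are swapped for accented look-alikes so untranslated strings stay readable but stand out. The mapping below is a guessed sketch, not Gaia's actual table:

```python
# Accented-English pseudolocale sketch: vowels become accented
# look-alikes; real generators also preserve placeholders and markup.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöûÀÉÎÖÛ")

def pseudolocalize(s):
    return s.translate(ACCENTED)

print(pseudolocalize("Settings"))  # Séttîngs
```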

http://informationisart.com/27


Jennie Rose Halperin: Resource list for libraries and coding

Tuesday, May 27, 2014, 17:59

I’ve been working on this for a little while and wanted to invite all librarians, tumblrians, archivists, and others to contribute!

https://librarianistas.etherpad.mozilla.org/librarians-and-coding (pw: library-code)  feel free to hack on it! Comment away if you have any questions.


http://jennierosehalperin.me/resource-list-for-libraries-and-coding/


Jared Wein: New in Firefox Nightly: Experimenting with context menus

Tuesday, May 27, 2014, 17:00

Starting today, users of Firefox Nightly will see a new look to the classic context menu.

New context menu

Context menus on desktop browsers have changed very little since Firefox 1.0 was introduced. Meanwhile, new devices have brought new concepts to context menus. The context menu on Firefox for Android is much more graphical, showing recognizable symbols at a glance.

Context menu in Firefox for Android

Switching frequently used menuitems to their iconic forms can improve the usability of the menu, as it makes the menuitems easier to find at a glance and easier to click on. One way to visualize the difference is by performing what is known as a “squint test”. The image on the left is the old Firefox context menu, and the image on the right is the new Firefox context menu.

Squint test of old (left) vs. new (right) context menu (Gaussian blur=3)


Looking at the squint test above, not only is it easier to see the actions of the buttons at the top, but we can also see that the new menu feels a bit leaner.

We don’t have plans to switch all menuitems over to their iconic forms, mainly because many menuitems lack a well-understood graphical metaphor. We’ll keep experimenting with our context menus, hopefully adding the ability to customize them just like the rest of Firefox.

Known issues: The context menus found in today’s Firefox Nightly are still missing a couple of finishing touches that we are going to follow up on:

  • The icons being used are not the right size and are lacking HiDPI versions
  • The bookmark star is not shown as filled-in when the page being right-clicked on is already bookmarked
  • OSX is missing the inverted icons, currently showing grey icons on a blue-hovered background


Tagged: firefox, mozilla, planet-mozilla, usability

http://msujaws.wordpress.com/2014/05/27/experimenting-with-context-menus/


Sylvestre Ledru: Changes Firefox 30 beta6 to beta7

Tuesday, May 27, 2014, 16:43
  • 31 changesets
  • 49 files changed
  • 435 insertions
  • 305 deletions

Extension   Occurrences
js          10
jsm         7
java        7
cpp         7
html        6
h           4
xml         3
ini         2
sh          1
json        1
css         1

Module      Occurrences
dom         11
toolkit     10
mobile      10
layout      5
browser     4
js          3
content     2
services    1
hal         1
build       1

List of changesets:

Ben Turner - Bug 1003766, StopSyncLoopRunnable::Cancel should call base class Cancel. r=mrbkap, a=lsblakk. - 7d980d9af355
Boris Zbarsky - Bug 976920 - Mostly back out Bug 932322 for now; only define the unforgeable properties on the window object itself. r=jst, a=lsblakk - 1c26f6798184
Drew Willcoxon - Bug 998303 - browser/base/content/test/general/browser_urlbar_search_healthreport.js attempts to connect to www.google.com. r=mak, a=test-only - 2b437d292f56
Nathan Froyd - Bug 1010322 - Change toolkit/mozapps/extensions/test/browser/ tests to point at actual http servers. r=jmaher, a=test-only - 8438a548150d
Ben Turner - Bug 999274 - Wait for the last runnable before calling ShutdownScriptLoader. r=sicking, a=abillings - 50428e91f0bc
Daniel Holbert - Bug 1000185 - Part 1: Perform synchronous SMIL sample after registering with refresh driver, not before, for consistency. r=birtles, a=abillings - 542f83ec6345
Daniel Holbert - Bug 1000185 - Part 2: Add a bool to keep track of whether nsSMILAnimationController instances are registered with a refresh driver. r=birtles, a=abillings - cb78c3777143
Marco Bonardo - Bug 992901 - Not all bookmarks are saved in the backup JSON or HTML. r=mano a=sylvestre - 3fb029a11c05
Jim Chen - Bug 993261 - Remove legacy code for redirecting key events to URL bar. r=lucasr, a=sledru - 869aefb78e22
Alexandre Lissy - Bug 1000337 - Make NotificationStorage cache origin-aware. r=mhenretty, a=abillings - d520b0344613
Dão Gottwald - Bug 987859 - TabsToolbar margin needs to be dropped when entering fullscreen mode rather than when the sizemode attribute changes, which is too late. r=gijs, a=sledru - 2925e9a0a33d
Benjamin Smedberg - Bug 959356 - Fix the spelling and type of the isWow64 measurement. r=rnewman, a=sledru - ff1925aa6f85
Marco Bonardo - Bug 997030 - Don't encodeURI twice in bookmarks.html. r=mano a=sylvestre - 05baa07365d9
Lukas Blakk - Updating EARLY_BETA_OR_EARLIER a=release-mgmt - 77795a696555
Matteo Ferretti - Bug 980714 - Remove blinking caret in panel text. r=gozala, a=lsblakk - 4363817b56ca
Randell Jesup - Bug 921622 - AudioStream rework. r=padenot, a=lsblakk - eaa2b716ce89
Ryan VanderMeulen - Backed out changeset eaa2b716ce89 (Bug 921622) for mochitest-1 asserts. - b7913b826440
Blair McBride - Bug 1012526 - UITour.jsm only registers with UITelemetry when it's lazily imported on-demand. r=mconley a=lsblakk - b19932497b46
Randell Jesup - Bug 921622 - AudioStream rework. r=padenot, a=lsblakk - 3090db8c413f
Mats Palmgren - Bug 1007065 - Don't apply the special -moz-hidden-unscrollable clipping on nsTextControlFrame since it always has an anonymous scroll frame that deals with overflow. r=roc, a=lsblakk - 36df173cb6a2
Margaret Leibovic - Bug 1009473 - Remove padding around list item images. r=lucasr, a=lsblakk - 841a1b085b5b
Margaret Leibovic - Bug 1009473 - Use lighter gray for item descriptions. r=lucasr, a=lsblakk - 82c33d14844a
Asaf Romano - Bug 1003839 - Live bookmark is still created despite cancelling the subscription. r=mak, a=lsblakk - 9b9c4281ccb2
Nicolas B. Pierron - Bug 1013922 - Avoid flattening strings after each concatenation. r=jorendorff, a=lsblakk - af6bb6bacb0e
Myk Melez - Bug 991394 - Actually rename profiles.ini sections when removing profile. r=mfinkle, a=lsblakk - eac674ed7cfe
Ted Mielczarek - Bug 1011859 - Order gamepad axes properly. r=jimm, a=sledru - 0bcc74404878
Benjamin Bouvier - Bug 1010747 - Part 1: Cleanups and factor out float32 specialization for unary instructions returning int32. r=jandem, a=sledru - 586ed41fa2d1
Benjamin Bouvier - Bug 1010747 - Part 2: Implement Ceil (floating-point) -> int32 in Ion. r=sunfish, r=mjrosenb, a=sledru - 80950d72bd71
Benjamin Bouvier - Bug 1010747 - Part 3: Factor out floating-point conversion to int32 and bailout code. r=sunfish, a=sledru - 0db12290df12
Ryan VanderMeulen - Backed out changesets 0db12290df12, 80950d72bd71, and 586ed41fa2d1 (Bug 1010747) for landing without approval. - a89aa1e3e367
Benjamin Bouvier - Bug 1010747 - Don't inline Ceil when input is a FP value and output is an Int32. r=jandem, a=sledru - d3f2e54cf39c

Previous changelogs:

http://sylvestre.ledru.info/blog/2014/05/27/changes-firefox-30-beta6-to-beta7


Byron Jones: happy bmo push day!

Tuesday, May 27, 2014, 10:26

the following changes have been pushed to bugzilla.mozilla.org:

  • [1013953] Update the researchers.html.tmpl page to link to mhoye automated sanitized database dumps
  • [993223] Notify Review Board when a bug is made confidential
  • [1003950] automatically disable accounts based on the number of comments tagged as “abusive”
  • [1014374] concatenate and slightly minify css files
  • [1013760] Add “secure mail” metadata to email headers
  • [1009216] Add link to a wiki page describing common whiteboard tags
  • [1015290] Fix typo on Reps Mentorship Form
  • [1003386] Create new “Mozilla Foundation Operations” product
  • [1013788] it’s possible to get bugzilla to redirect to any url by setting the content-type of an attachment after uploading it

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

http://globau.wordpress.com/2014/05/27/happy-bmo-push-day-95/


Mitchell Baker: Panel on Internet Governance Mechanisms

Tuesday, May 27, 2014, 09:39

Internet Governance has become a much more active topic of discussion recently, as I described a bit in this previous post on NetMundial.  As part of the Internet Governance activities of the last 8 or 9 months, I have participated as a member of the Panel on Global Internet Cooperation and Governance Mechanisms.  This panel was convened in late 2013 by ICANN and the World Economic Forum (WEF), chaired by President Toomas Ilves of Estonia, vice-chaired by Vint Cerf, and infused by Fadi Chehadé, the president of ICANN, and supported by the Annenberg Retreat at Sunnylands.  The Panel has just completed its work, releasing its final report on how to evolve the Internet Governance ecosystem.  The press release is here.  There are also comments by some of the Panelists, which came out of interviews done with the Panelists at the various panel meetings (the questions have been edited out of the final versions).  In some cases, including mine, the comments cover a range of topics broader than the Panel.

In this post I’m going to set out a couple thoughts about the process.  It was a first for me to participate in a panel like this chaired by a sitting Head of State.  This reflects the very high degree of commitment to an open internet by President Ilves, which is exciting to see in action.  The report adopts the principles set out in NetMundial whole-heartedly.  This was interesting: it is an example of people evaluating a ton of our own work and effort and pride of ownership and happily setting it aside to participate in something bigger.  The principles the Panel had put together were very similar to those of the NetMundial document.  Not exactly the same, but Panel members clearly saw our core outlook and goals reflected in the NetMundial principles.  And so we adopted them as our own, and view this as a huge success.  I am proud of how we did this.  It’s so easy to grow attached to one’s own work for its own sake.  It’s easy to convince oneself that one’s own words or style or approach or work is subtly better, and should be maintained and promoted.  It’s *so* easy to create fragmentation when unity is key.

I also am drawn to this paragraph in the preamble: “The Panel’s report is based on rough consensus.  The views represented in this report do not necessarily reflect the views of the conveners or of all individual Panel members.”  I know the latter sentence is not so uncommon.  It’s not that rare to have people decide that the overall result is good and they are willing to attach their names to it, even if some part is imperfect or weak.  So there’s not much new there.  What struck me is the phrase “based on rough consensus.”  A number of Panel members are very deeply involved in core Internet operations and protocols: the Panel included the head of ICANN, the head of the Internet Society (home of IETF), and of course Vint Cerf.  A number of the rest of us have been almost that deep in Internet technology for a good while.  So the idea of “rough consensus and running code” is deep into the mindset of many Panelists.  I can’t speak for others of course but I saw a bunch of the traits that are important in a rough consensus in action here.  We started with a bunch of diverse views on the panel, combined with a determination to move forward.  We identified key areas of fundamental agreement.
– Multi-stakeholderism.
– Distributed governance, not a single centralized authority.
– Solutions and organizations that develop organically by people working on the problem.
– Building knowledge and capacity so that more people can participate knowledgeably.

“Rough consensus and running code” isn’t a panacea of course.  Like almost every approach, there are plenty of ways in which it can be corrupted and derailed.  When it does work however, it’s extremely powerful.  It gets people aligned, moving in the same direction with the same general principles to a shared goal, and empowers people to go make things happen.

The final report itself sets out suggestions for building on the work of NetMundial.  Some people undoubtedly want the final report to go much further in proposing solutions, and processes to follow.  Others were very focused on this document as a second stage building on NetMundial, and encourage more to come.  I personally am of two minds.  On the one hand I’d love to be able to point to concrete processes or distributed governance mechanisms and say “see what we’ve done and how much more we can do.”  On the other hand, one big part of the goal is to describe the multistakeholder, distributed governance approach to citizens and governments and policy makers who are relatively new to Internet Governance.  There is very explicit mention of this in the report’s Introduction.  Given this, pointing to new solutions might not be the right approach.  Pointing the direction and figuring out new solutions together might be a much more long-lasting approach.

In  any case, I am eager to see multi-stakeholder Internet Governance strengthened.  And deeply interested in building a governance system where citizens and civil society organizations are valued participants and leaders.

https://blog.lizardwrangler.com/2014/05/26/panel-on-internet-governance-mechanisms/


Joshua Cranmer: Why email is hard, part 6: today's email security

Tuesday, May 27, 2014, 04:32
This post is part 6 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. This part discusses how email security works in practice.

Email security is a rather wide-ranging topic, and one that I've wanted to cover for some time, well before several recent events that have made it come up in the wider public knowledge. There is no way I can hope to cover it in a single post (I think it would outpace even the length of my internationalization discussion), and there are definitely parts for which I am underqualified, as I am by no means an expert in cryptography. Instead, I will be discussing this over the course of several posts of which this is but the first; to ease up on the amount of background explanation, I will assume passing familiarity with cryptographic concepts like public keys, hash functions, as well as knowing what SSL and SSH are (though not necessarily how they work). If you don't have that knowledge, ask Wikipedia.

Before discussing how email security works, it is first necessary to ask what email security actually means. Unfortunately, the layman's interpretation is likely going to differ from the actual precise definition. Security is often treated by laymen as a boolean interpretation: something is either secure or insecure. The most prevalent model of security to people is SSL connections: these allow the establishment of a communication channel whose contents are secret to outside observers while also guaranteeing to the client the authenticity of the server. The server often then gets authenticity of the client via a more normal authentication scheme (i.e., the client sends a username and password). Thus there is, at the end, a channel that has both secrecy and authenticity [1]: channels with both of these are considered secure and channels without these are considered insecure [2].

In email, the situation becomes more difficult. Whereas an SSL connection is between a client and a server, the architecture of email is such that email providers must be considered as distinct entities from end users. In addition, messages can be sent from one person to multiple parties. Thus secure email is a more complex undertaking than just porting relevant details of SSL. There are two major cryptographic implementations of secure email [3]: S/MIME and PGP. In terms of implementation, they are basically the same [4], although PGP has an extra mode which wraps general ASCII (known as "ASCII-armor"), which I have been led to believe is less recommended these days. Since I know the S/MIME specifications better, I'll refer specifically to how S/MIME works.

S/MIME defines two main MIME types: multipart/signed, which contains the message text as a subpart followed by data indicating the cryptographic signature, and application/pkcs7-mime, which contains an encrypted MIME part. The important things to note about this delineation are that only the body data is encrypted [5], that it's theoretically possible to encrypt only part of a message's body, and that the signing and encryption constitute different steps. These factors combine to make for a potentially infuriating UI setup.
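The multipart/signed layout described above can be sketched with Python's stdlib email package. The signature bytes here are a placeholder rather than a real PKCS#7 blob, so this shows only the MIME structure, not actual signing:

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

body = MIMEText("The signed message text.")

# Placeholder bytes standing in for the DER-encoded PKCS#7 signature.
sig = MIMEApplication(b"...signature...", _subtype="pkcs7-signature",
                      name="smime.p7s")

msg = MIMEMultipart("signed", micalg="sha-256",
                    protocol="application/pkcs7-signature")
msg.attach(body)  # first subpart: the text the signature covers
msg.attach(sig)   # second subpart: the detached signature data

print(msg.get_content_type())
# multipart/signed
```

Note how the covered text travels as an ordinary readable subpart; only the second subpart is opaque, which is why a client that mishandles the delineation can be tricked about what was actually signed.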

How does S/MIME tackle the challenges of encrypting email? First, rather than encrypting using recipients' public keys, the message is encrypted with a symmetric key. This symmetric key is then encrypted with each of the recipients' keys and then attached to the message. Second, by only signing or encrypting the body of the message, the transit headers are kept intact for the mail system to retain its ability to route, process, and deliver the message. The body is supposed to be prepared in the "safest" form before transit to avoid intermediate routers munging the contents. Finally, to actually ascertain what the recipients' public keys are, clients typically passively pull the information from signed emails. LDAP, unsurprisingly, contains an entry for a user's public key certificate, which could be useful in large enterprise deployments. There is also work ongoing right now to publish keys via DNS and DANE.

I mentioned before that S/MIME's use can present some interesting UI design decisions. I ended up actually testing some common email clients on how they handled S/MIME messages: Thunderbird, Apple Mail, Outlook [6], and Evolution. In my attempts to create a surreptitious signed part to confuse the UI, Outlook decided that the message had no body at all, and Thunderbird decided to ignore all indication of the existence of said part. Apple Mail managed to claim the message was signed in one of these scenarios, and Evolution took the cake by always agreeing that the message was signed [7]. It didn't even bother questioning the signature if the certificate's identity disagreed with the easily-spoofable From address. I was actually surprised by how well people did in my tests—I expected far more confusion among clients, particularly since the will to maintain S/MIME has clearly been relatively low, judging by poor support for "new" features such as triple-wrapping or header protection.

Another fault of S/MIME's design is that it rests on the mistaken belief that composing a signing step and an encryption step is equivalent in strength to a simultaneous sign-and-encrypt. Another page describes this in far better detail than I have room to; note that this flaw is fixed via triple-wrapping (which has relatively poor support). This adds yet more UI burden: how to adequately describe in UI all the various minutiae in differing security guarantees. Considering that users already have a hard time even understanding that just because a message says it's from example@isp.invalid doesn't actually mean it's from example@isp.invalid, trying to develop UI that both adequately expresses the security issues and is understandable to end-users is an extreme challenge.

What we have in S/MIME (and PGP) is a system that allows for strong guarantees, if certain conditions are met, yet is also vulnerable to breaches of security if the message handling subsystems are poorly designed. Hopefully this is a sufficient guide to the technical impacts of secure email in the email world. My next post will discuss the most critical component of secure email: the trust model. After that, I will discuss why secure email has seen poor uptake and other relevant concerns on the future of email security.

[1] This is a bit of a lie: a channel that does secrecy and authentication at different times isn't as secure as one that does them at the same time.
[2] It is worth noting that authenticity is, in many respects, necessary to achieve secrecy.
[3] This, too, is a bit of a lie. More on this in a subsequent post.
[4] I'm very aware that S/MIME and PGP use radically different trust models. Trust models will be covered later.
[5] S/MIME 3.0 did add a provision stating that if the signed/encrypted part is a message/rfc822 part, the headers of that part should override the outer message's headers. However, I am not aware of a major email client that actually handles these kind of messages gracefully.
[6] Actually, I tested Windows Live Mail instead of Outlook, but given the presence of an official MIME-to-Microsoft's-internal-message-format document which seems to agree with what Windows Live Mail was doing, I figure their output would be identical.
[7] On a more careful examination after the fact, it appears that Evolution may have tried to indicate signedness on a part-by-part basis, but the UI was sufficiently confusing that ordinary users are going to be easily confused.

http://quetzalcoatal.blogspot.com/2014/05/why-email-is-hard-part-6-todays-email.html


