Adam Okoye: Figuring things out |
So far I’ve really been enjoying the challenge of figuring out how to fix the bugs/create the features that I have been tasked with. I’ve been finding that it takes about a day for me to really grasp what to do and then once I’m able to figure that out things feel a bit smoother. My mentor (Will Kahn-Greene) and I have a schedule in terms of the time it should take for me to do short, medium, and large bugs. So far I’ve more or less been able to stick to that schedule.
The bug that I’m nearly finished with right now involves writing code that will determine which variation of the thank you page people will be sent to after giving feedback on input.mozilla.org. I had a bit of trouble the first day, but after talking to Will, looking at documentation, and searching Safari Books, it ended up being pretty straightforward. What was nice about this bug is that I was able to tame the discouragement that I was feeling in the beginning and, lo and behold, it looks like things worked out. I’m also quickly learning not to overthink things – simple solutions do exist.
|
Gervase Markham: Avoid Mystery Process |
Although the discussions around adding any particular new committer must be confidential, the rules and procedures themselves need not be secret. In fact, it’s best to publish them, so people realize that the committers are not some mysterious Star Chamber, closed off to mere mortals, but that anyone can join simply by posting good patches and knowing how to handle herself in the community. In the Subversion project, we put this information right in the developer guidelines document, since the people most likely to be interested in how commit access is granted are those thinking of contributing code to the project.
— Karl Fogel, Producing Open Source Software
http://feedproxy.google.com/~r/HackingForChrist/~3/lowo6-eieWU/
|
Andrew Halberstadt: The New Mercurial Workflow - Part 2 |
This is a continuation of my previous post called The New Mercurial Workflow. It assumes that you have at least read and experimented with it a bit. If you haven't, stop right now, read it, get set up and try playing around with bookmarks and mozreview a bit.
I've had several requests for examples of more advanced usage with this workflow. The previous post covered the basics, but it skipped many important concepts for the sake of brevity. Well that and the fact that I'm still figuring out all of this myself. Rather than a step by step tutorial, each section is its own independent concept which you can either use or choose to ignore. After all, there is more than one way to skin a cat (apparently), and I make no claims that my way is the best.
Probably the biggest thing I left out of the last post is how to push to try. The easiest way is to simply edit the commit message of the top most commit in your bookmark:
$ hg update my_feature
$ hg commit --amend -m "try: -b o -p linux -u mochitest-1 -t none"
$ hg push -f try
However, this method isn't ideal: it clobbers the commit message of your actual work, and you have to remember to amend again to strip the try syntax back out.
A better approach is to push an empty changeset with try syntax on top of your bookmark. The bad news is that there is no good way to do this without using mq. The good news is that there is an extension that will make this a lot easier (though you'll still need mq installed in your hgrc).
I'd recommend sfink's trychooser extension because it lets you choose syntax via a curses ui,
or manually (note the original extension of the same name by pbiggar is different). After cloning
and installing it, push to try:
$ hg update my_feature
$ hg trychooser
This opens a curses ui from which you can build your syntax (note it may be slightly out of date). Alternatively, just specify the syntax manually:
$ hg update my_feature
$ hg trychooser -m "try: -b o -p linux -u mochitest-1 -t none"
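For reference, the relevant section of my ~/.hgrc looks something like this sketch (the clone path is an assumption; point it at wherever you cloned the extension):

[extensions]
mq =
trychooser = ~/extensions/trychooser/trychooser.py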
There is a bug on file to move this extension into the standard version-control-tools repository.
The mozreview folks are also working on the ability to autoland changesets pushed to review on try, which should greatly simplify the common use case.
In the last post, I showed you an example of addressing review comments by making an additional commit and then squashing it later. But what if you push multiple commits to review and you intend to land them all separately, without squashing them at the end? Here is the setup:
$ hg update my_feature
... add foo ...
$ hg commit -m "Bug 1234567 - Part 1: add the foo api"
... add bar ...
$ hg commit -m "Bug 1234567 - Part 2: add the bar api"
$ hg push -r my_feature review
Now you add a reviewer for each of the two commits and publish. Except the reviewer gives an r- to the first commit and r+ to the second. Pushing a third commit to the review will make it difficult to squash later. It is possible with rebasing, but there is a better way.
Mercurial has a new(ish) feature called Changeset Evolution. I won't go into detail here, but
you know how with git you can mutate history and then force push with -f
and people say don't do
that since it could leave someone else in an unrecoverable state? This is because when you mutate
history in git, the old changeset is lost forever. With changeset evolution, the old changesets are
not thrown out, instead they are marked obsolete. It is then possible to push mutated history and
remote repositories can use these obsolescence markers to "do the right thing" without putting
someone else into an unrecoverable state. The mozreview repository is set up to use obsolescence
markers, which means mutating history and pushing to review is perfectly acceptable.
The first step is to clone and install the evolve extension (update to the stable branch). Going back to the original example, we need to amend the first commit of our review push while leaving the second one intact. First, let's update to the commit we'll be amending:
$ hg update "my_feature^"
... fix review comments ...
$ hg commit --amend
$ hg push -r my_feature review
Remember in the last section I said --amend would cause you to lose your old commit? In this case evolve has actually modified the behaviour of --amend to mark the old changeset obsolete instead.
The review repository can then use this information to see that you have amended an existing commit
and update the review request accordingly. The end result is the review request will still only
contain two commits, but a second entry on the push slider will show up, allowing the reviewer to
see the original diff, the full diff and the interdiff with just the review fixes.
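One aside of my own (not from the original workflow, so double check against the evolve docs): if the changeset you amend has other changesets on top of it, as it does here, those descendants get marked as orphans, and evolve can rebase them onto the amended version for you:

$ hg evolve --all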
Amending is just one way to mutate history with evolve. You can also prune (delete), uncommit and fold (squash). If you are interested in how evolve works, or want more details on what it can do, I'd highly recommend this tutorial.
One thing that took me a little while to understand, was that bookmarks are not the same as git branches. Yes, they are very similar and can be used in much the same way. But a bookmark is just a label that automatically updates itself when activated. Unlike a git branch, there is no concept of ownership between a bookmark and a commit. This can lead to some confusion when rebasing bookmarks on a multi-headed repository like the unified firefox repo we set up in the previous post. Let's see what might happen:
$ hg pull -u inbound
$ hg rebase -b my_feature -d inbound
$ hg pull -u fx-team
$ hg rebase -b my_feature -d fx-team
abort: can't rebase immutable changeset ad2042b4c668
What happened here? The important thing to understand is that the -b argument to rebase doesn't stand for bookmark, it stands for base. You are telling hg to take every changeset from my_feature all the way back to the common ancestor with fx-team and rebase them all on top of fx-team. In this case, that includes all the public changesets that have landed on inbound but haven't yet landed on fx-team. And you can't rebase public changesets (rightfully so). Luckily, it's still possible to rebase bookmarks automatically using revsets:
$ hg rebase -r "reverse(only(my_feature) and draft())" -d fx-team
This same revset can be used to log a bookmark and only that bookmark (log -f is useful, but includes all ancestors of the bookmark, so it's not always obvious where the bookmark starts):
$ hg log -r "reverse(only(my_feature) and draft())"
The revset is somewhat long, so it helps to add an alias to your ~/.hgrc:
[revsetalias]
b($1) = reverse(only($1) and draft())
Now you can use it like so:
$ hg log -r "b(my_feature)"
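The alias works anywhere a revset is accepted, so the earlier rebase can be written as:

$ hg rebase -r "b(my_feature)" -d fx-team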
This revset works for most simple cases, but it isn't perfect: among other things, it's clumsier to write -r "b(my_feature)" than it is to write -r my_feature. These shortcomings were annoying enough to me that I wrote an extension called logbook.
Essentially, if you pass a bookmark to -r in a supported command, logbook will replace the bookmark's revision with a revset containing all changesets "in" the bookmark. So far log, rebase, prune and fold are wrapped by logbook. Logbook will also detect if bookmarks are based on top of one another, and only use commits that actually belong to the bookmark you want to see.
For example, the following does what you'd expect:
$ hg rebase -r bookmark_2 -d bookmark_1
$ hg rebase -r bookmark_3 -d bookmark_2
$ hg log -r bookmark_1
$ hg log -r bookmark_2
$ hg log -r bookmark_3
Because logbook only considers draft changesets, the following won't print anything:
$ hg update central
$ hg bookmark new_bookmark
$ hg log -r new_bookmark
If you actually want to treat the bookmark as a label to a revision instead, it's still possible by escaping the bookmark with a period:
$ hg log -r .my_feature
Logbook likely has some bugs to work out, so let me know if you run into problems using it. There are also likely other commands that could make use of the revset. I'll add support for them either as I stumble across them or on request.
Finally I'd like to briefly mention hg shelve. It is more or less identical to git stash and is an official extension. To install it, add the following to ~/.hgrc:
[extensions]
shelve=
I mostly use it for debug changes that I don't want to commit, but want to test both with and without a particular change. My basic usage is:
... add debug statements ...
... test ...
$ hg shelve
$ hg update
$ hg unshelve
... test ...
$ hg revert -a
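If you juggle more than one set of debug changes, shelve can also store and restore them by name (a quick sketch; see hg help shelve for all the options):

$ hg shelve --name debug-logging
$ hg unshelve debug-logging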
That more or less wraps up what I've learnt since the first post and I can't remember any other pain points I had to work around. This workflow is still based on a lot of new tools that are still under heavy development, but all things considered I think it has gone remarkably smoothly. The setup involves installing a lot of extensions, but this should hopefully get better over time as they move into core mercurial or version-control-tools. Have you run into any other pain points using this workflow? If so, have you solved them?
|
Michelle Thorne: Diving into PADI’s learning model |
For the last few years, Joi Ito has been blogging about learning to dive with PADI. It wasn’t until I became certified as a diver myself that I really understood how much we can learn from PADI’s educational model.
Here’s a summary of how PADI works, including ideas that we could apply to Webmaker.
With Webmaker at the moment, we’re testing how to empower and train local learning centers to teach the web on an ongoing basis. This is why I’m quite interested in how other certification and learning & engagement models work.
The Professional Association of Diving Instructors (PADI) has been around since the late 1960s. It has trained over 130,000 diving instructors, who issue millions of learning certifications to divers around the world. Many instructors run their own local businesses, whose main service is to rent out gear and run tours for certified divers, or to certify people learning how to dive.
Through its certification service, PADI became the diving community’s de facto standard-bearer and educational hub. Nearly all diving equipment, training and best practices align with PADI.
No doubt, PADI is a moneymaking machine. Every rung of their engagement ladder comes with a hefty price tag. Diving is not an access-for-all sport. For example, part of the PADI training is about learning how to make informed consumer choices about the dive equipment, which they will later sell to you.
Nevertheless, I do think there is lots to learn from their economic and engagement model.
PADI uses blended learning to certify its divers.
They mix a multi-hour online theoretical part (regrettably, it’s just memorization) with several in-person skills trainings in the pool and open water. Divers pay a fee ($200-500) to access the learning materials and to work with an instructor. They also send you a physical kit with stickers, pamphlets and a logbook you can use on future dives.
Dive instructors teach new divers in very small groups (mine was 1:1 to maximum of 1:3). It’s very hands-on and tailored to the learner’s pace. Nevertheless, it has a pretty tight script. The instructor has a checklist of things to teach in order to certify the learner, and you work through those quite methodically. The online theory complements the lessons in the water, although for my course they could’ve cut about 3 hours of video nerding out on dive equipment.
There is room for instructor discretion and lots of local adaptation. For example, you are taught to understand local dive practices and conditions, like currents and visibility, which inform how you adapt the PADI international diving standard to your local dives. This gives the instructor some agency and adaptability.
PADI makes its point of view very clear. Their best practices are so explicit, and so oft-repeated, that as a learner you really internalize their perspective. In the water, you immediately flag any deviation from The PADI Way.
Mainly, these mantras are for your own safety: breathe deeply and regularly, always dive with a buddy, etc. But by distilling their best practices so simply and embedding them deeply and regularly in the training, as a learner you become an advocate for these practices as well.
The buddy system is particularly interesting. It automatically builds in peer learning, as well as responsibility for yourself and your buddy. You’re taught to rely on each other, not the dive instructor. You solve each other’s problems, and this helps you become empowered in the water.
Furthermore, PADI makes its learning pathways very explicit and achievable. After doing one of the entry-level certifications, Open Water Diver, I feel intrigued to take on the next level and to try out some of the specializations, like cave diving and night diving.
Throughout the course, you see glimpses of what is possible with further training. You can see more advanced skills and environments becoming unlocked as you gather more experience. The PADI system revolves around tiers of certifications unlocking gear and new kinds of dives, which they do a good job of making visible and appealing.
What’s even more impressive is that the combination of the buddy/peer learning model and the clear pathways makes becoming an instructor seem achievable and aspirational, even when you’ve just started learning.
As a beginner diver, I already felt excited by the possibility of teaching others to dive. Becoming a PADI instructor seems cool and rewarding. And it feels very accessible within the educational offering: you share skills with your buddy; with time and experience, you can teach more skills and people.
Speaking of instructors, PADI trains them in an interesting way as well. Like new divers, instructors are on a gamification path: you earn points for every diver you certify and for doing various activities in the community. With enough points, you qualify for select in-person instructor trainings or various gear promotions.
Instructors are trained in the same model that they teach: it’s blended, with emphasis on in-person training with a small group of people. You observe a skill, then do it yourself, and then teach it. PADI flies about 100 instructors-to-be to a good dive destination and teaches them in-person for a week or so. Instructors pay for the flights and the training.
At some point, you can earn enough points and training as an instructor that you can certify other instructors. This is the pinnacle of the PADI engagement ladder. We’re doing something similar with Webmaker: the top of the engagement ladder is a Webmaker Super Mentor. That’s someone who trains other mentors. It’s meta, and only appeals to a small subset of people, but it’s a very impactful group.
What’s the role of PADI staff? This is a question we often ask ourselves in the Webmaker context. Mainly, PADI staff are administrators. Some will visit local dive centers to conduct quality control or write up new training modules. They are generally responsible for coordinating instructors and modeling PADI practices.
The local dive centers and certified instructors are PADI’s distribution model.
Divers go to a local shop to buy gear, take tours and trainings. The local shop is a source of economic revenue for the instructors and for PADI. As divers level up within the PADI system, they can access more gear and dive tours from these shops.
Lastly, PADI imparts its learners with a sense of stewardship of the ocean. It empowers you in a new ecosystem and then teaches you to be an ambassador for it. You feel responsibility and care for the ocean, once you’ve experienced it in this new way.
Importantly, this empowerment relies on experiential learning. You don’t feel it just by reading about the ocean. It’s qualitatively different to have seen the coral and sea turtles and schools of fish yourself.
The theory and practice dives in the pool ready you for the stewardship. But you have to do a full dive, in the full glory of the open water, to really get it.
I think this is hugely relevant for Webmaker as well: it’s all good to read about the value of the open web. But it’s not until you’re in the midst of exploring and making in the open web that you realize how important that ecosystem is. Real experience begets responsibility.
PADI encourages several ways for you to give back and put your stewardship to use: pick up litter, do aquatic life surveys, teach others about the waters, etc.
They show you that there is a community of divers that you are now a part of. It strikes a good balance between unlocking experiences for you personally and then showing you how you can act upon them to benefit a larger effort.
As mentioned, there are many shortcomings to the PADI system. It’s always pay-to-play, its educational materials are closed and ridiculously non-remixable, and it’s not accessible in many parts of the world due to (understandable) environmental limitations. Advocacy for the ocean is a by-product of their offering, not its mission.
Still, aspects of their economic and learning model are worth considering for other social enterprises. How can instructors make revenue so they can teach full-time and as a career? How can gear be taught and sold so that divers get quality equipment they know how to use? How can experiential learning be packaged so that you know the value of what you’re getting, and build skills along the way?
I’m pretty inspired by having experienced the PADI Open Water Diving certification process. In the coming months, I’d like to test and apply some of these practices to our local learning center model, the Webmaker Clubs.
If you have more insights on how to do this, or other models worth looking at, share them here!
http://michellethorne.cc/2015/01/diving-into-padis-learning-model/
|
Benoit Girard: CallGraph Added to the Gecko Profiler |
In the profiler you’ll now find a new tab called ‘CallGraph’. This will construct a call graph from the sample data. It is the same information that you can extract from the tree view and the timeline, just formatted so that it can be scanned more easily. Keep in mind that this is a call graph built only from the sample points, not a fully instrumented call graph dump. This has a lower collection overhead, but misses anything that occurs between sample points. You’ll still want to use the Tree view to get aggregate costs. You can interact with the view using your mouse or with the W/A/S/D keys (or their equivalents on your keyboard layout).
Big thanks to Victor Porof for writing the initial widget. This visualization will be coming to the devtools profiler shortly.
https://benoitgirard.wordpress.com/2015/01/13/callgraph-added-to-the-gecko-profiler/
|
Mic Berman: Practical guide to managing people |
I have often been asked to share some common situations that new team leads face and that come up in my coaching practice. Here are a few that I hope are instructive but definitely not comprehensive. (This is an excerpt from my talk at the ‘Software Team Meetup’.) There are many wonderful and much longer guides to being an effective first-time manager; my favourite book on the topic is ‘First, Break All the Rules’.
In my humble opinion, the primary purpose of your role as manager is to keep your people engaged, productive and growing. Here are the ways I think that can be done effectively:
Effective 1:1's
Ensure your folks are challenged, engaged and highly productive
It’s an opportunity for mutual feedback
Creates deeper and clearer mutual understanding of motivation
No surprises at formal checkpoints (performance reviews)
It’s investing time for them and you
Keep regular and consistent meeting times and agenda - it’s an important indicator of your commitment to them
Agenda - powerful questions:
Version 1:
• How do you think you are doing?
• How do I think you are doing?
• How will you know? What information do you need? Then what / what’s next?
• How is the team doing?
Version 2:
• What was the highlight of your week?
• What could have gone better?
• Is there anything blocking you, or that you need from me?
• How am I doing?
• How are you doing?
• Is there anything else I haven’t asked about that I should have?
For more depth behind this model, check out Fred Kofman’s “Conscious Business” model for communication: Listen, Ask Questions, Validate, Summarize, Express, Negotiate, Commit(ments).
Setting stretch goals to achieve your organization’s Objectives & Key Results
A good number is 3-5 goals at any one time. Structure your direct reports’ goals so that they support your goals, which in turn support your company’s goals. The best goals are a balance of personal development/learning goals and outcome-based goals that line up with your team’s vision and your organization’s vision and goals.
These goals must be within your and your staff’s control. It’s a collaborative process where you provide the context and you want to give them lots of room to define how they plan to accomplish them. Goals are best when they are specific, reasonable and have a timeline associated with them.
Examples of personal development goals you might pick: improve my Python coding skills so that I get faster and the number of bugs in my code is significantly reduced; find 5 creative ways to acquire customers that result in X more customers over Y period of time; etc.
Most successful folks focus on breaking down large goals into mini-goals or milestones both because it's easier to manage and because it feels good to see your progress over time.
There are a great many resources around goal setting
• Chapter 2 of Becoming the Evidence-Based Manager
• http://topachievement.com/smart.html
• https://www.gv.com/lib/how-google-sets-goals-objectives-and-key-results-okrs
How to course correct poor performers
When people are experiencing a performance struggle e.g. they are not meeting your expectations or missing clear goals/deadlines, here's what I recommend:
Step 1:
• 1:1 verbal discussion (you and the employee) about the issue, and
• use this guide:
Put them at ease
Ask permission to give feedback
Choose a suitable moment
Provide feedback in private
Be specific and observable
• Stick to observable facts
• Avoid judgment, opinions and comments
• Avoid absolutes (e.g., never, always, constantly)
Explain the impact
• Describe specifically how the behavior promotes or hinders desired results
Work together on next steps
• Confirm – you may not have the whole story
• Understand – simple confusion is common
• Help – involve the other person in identifying options
If issues persist, continue to Step 2.
Step 2) Involve HR and/or create a Performance Improvement Plan
By this point the person is no longer surprised by the issue. In my experience people generally choose to either try and improve or acknowledge this isn’t a good fit for them and negotiate a departure.
Step 3) A performance plan sets clear expectations of the items needing resolving/repairing and makes the ultimatum clear, with a hard date set. For example: 30 days to do X and Y; if not met, then termination. Get a signature or some other form of clear accountability on the part of the employee.
Step 4) Success: a new skill is built and a new level of achievement is possible. Otherwise, negotiate their termination and the transition to leaving, e.g. will they have time to wrap up, when do you want them to go, what will fair compensation be (consider how you would want to be treated in the same situation).
Know that many folks in this same situation have turned things around and are now highly successful as a result of this practice. Sometimes the sting of direct feedback is what we need to find focus and performance.
Delegating to continuously build your next team leads
The main principle behind this approach is that you allow for failure. It will feel incredibly difficult to let go, to feel the sting of watching someone on your team potentially fail. It’s probably among the hardest things to do. This is a four-phase approach:
Phase 1: requires that you offer support and guidance
Phase 2: tactical instruction, describing the how and why
Phase 3: support and guidance, encouraging and championing their success thus far and course correcting
Phase 4: let them go - celebrate success
A word or few on adding new team members
1 - hold a 1:1 meeting and cover the following:
a - share the vision of the team,
b- relevant history of your team in the context of your organization
c- discuss a preliminary set of SMART goals for the person e.g., get up to speed on, focus on, etc.
d- set frequency for check-ins on progress and time to get to know each other e.g., weekly or bi-weekly and time of day
2 - Have a fun (whatever fun means for you) team session to welcome them. In that session you can share with each other the following:
a - strengths and what they mean to each person, e.g., I'm an activator and in this job that = ...
b - learning, e.g., I can contribute this great stuff/skills/experience to the team and would really like to learn x, y, or z (you could also add dislikes, e.g., I hate filing but will do it in a pinch)
c - logistics/style, e.g., never call me before 10am because I'm a late riser; I like direct and timely feedback
Please ask me to provide more insights for you in whatever situation you may find yourself. email me :)
http://michalberman.typepad.com/my_weblog/2015/01/practical-guide-to-managing-people.html
|
Benjamin Kerensa: Support Collab House Indiegogo Campaign |
I wanted to quickly post a simple ask to Mozillians: please share this Indiegogo campaign being run by Mozillian rockstar Vineel Pindy
who has been a contributor for many years. Vineel is raising money for Collab House, a Collaborative Community Space in India which has been used for many Mozilla India events and other open source projects.
By sharing the link to this campaign or contributing some money to the campaign, you will not only support the Mozilla India community but will further Mozilla’s Mission by enabling communities around the globe that help support our mission.
Let’s make this campaign a success and support our fellow Mozillians! If every Mozillian shared this or contributed $5, I bet we could have this funded before the deadline!
|
Kat Braybrooke: Hello, World: Let's (re)Make Networked Art |
Image by MAX CAPACITY
After much second-guessing, I’ve published a big piece on Medium today about my belief that Net(worked) Art still lives as a movement, making the continued existence of cooperative, creative practices of [and by] the web more essential than ever.
This post was based on the visionary work of the crazy web/trans/media folk I’ve been able to dream and build with lately around the world, especially the #ARTOFWEB community who helped bring together the Mozilla Festival's first-ever Art and Culture track to the shores of London this year, and taught me so much in the process.
It’s a 10 minute read in 6 chapters, the perfect amount of time for a coffee. It opened up some pretty interesting debates as it made the rounds this morning, so I’d love to hear your thoughts (and criticisms!). While this is an area I’m passionate about, I am cognizant that I, like everyone in my “post-post net-art” generation, have much more to learn. And in the end, that’s a part of the fun.
"As the hardwares and softwares of computers give us new capabilities… we have to learn to feel with them. If we can’t feel with them, they are only dumb metal claws. Therefore, the vistas of digital art are only as wide as our potential to grasp those possibilities with full human expressiveness.”
— Jim Andrews, “Why I Am A Net Artist”, 2011
|
Michael Kaply: I Used the CCK2 Wizard - What Now? |
One of the most common questions I get asked is what to do with the result that the CCK2 Wizard produces. This post will address that question.
After you've completed your customizations using the CCK2 Wizard, you have two choices: create an extension or use AutoConfig.
Let's start with AutoConfig (which is what I recommend). AutoConfig is the tried and true method of customizing Firefox that's been around forever. You can read an old post about it here. I'm also working on an AutoConfig eBook that I hope to have out soon.
With AutoConfig, things are quite simple (at least on Windows and Linux). The output of the CCK2 Wizard is a zip file that can be unzipped in the same directory where the Firefox executable is located. It puts all the necessary files in the right places, and you can immediately start Firefox and see your customizations. Things are not so good on Mac starting with Firefox 34: AutoConfig is broken right now due to the new Apple signing requirements. We're investigating the best way to fix that.
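On Linux, for example, deploying comes down to something like this sketch (the install path and the zip name are assumptions; use wherever your Firefox lives and whatever the Wizard named its output):

$ cd /usr/lib/firefox
$ unzip ~/Downloads/autoconfig.zip
$ ./firefox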
Your other option with the CCK2 is to generate an extension. This produces an XPI file which can simply be installed in Firefox the same way any other extension is installed - by dragging and dropping it onto the browser. If you want to deploy the extension you've created, I've documented a number of the different ways you can integrate an extension into Firefox. Each of these methods has positives and negatives - it's up to you to decide what to do for your situation.
Some people might wonder why I don't just have the CCK2 generate a new installer. In my experience, there are so many different ways that people deploy applications that it would not be worth it. In the past, I have documented how to bundle your changes with the Windows installer if you are so inclined.
Hopefully this gets most folks started with the CCK2. Please let me know if I've missed something.
|
Yunier José Sosa Vázquez: Share, Call and Play in Firefox |
2015 has arrived, a year that promises to be good for everyone and especially for Mozilla. In this new cycle we will see news around Mozilla's operating system for smartphones, as well as new versions of your favorite browser, which is now 11 years old. Among the major features due to arrive in the coming years is the multiprocess architecture, which will add stability and security to Firefox.
Hello, the feature that lets you make video calls from Firefox, has received improvements and now offers a new conversation model based on rooms. As you can see in the image, when you start a conversation you can give it a name, and it is remembered: every time you want to talk to the same person, you can keep using the same link.
Firefox Share lets you share whatever you want on the Web, quickly and easily, on your favorite social networks without leaving the site you are visiting. To add this functionality to Firefox, visit the Activation page and click on the social network you want. You can also move the Share button, found in Firefox's customization mode, to the menu or the toolbar, although when you activate a service the button should appear on its own.
On Mac OS X Snow Leopard (10.6), support for the H.264 (MP4) codec is now included in the browser, using the system's native APIs.
To improve authentication on encrypted connections, HTTP Public Key Pinning was implemented. With this system, websites can indicate which certificate authorities vouch for the validity of their certificates; in practice, a small list of acceptable certificate authorities is included in Firefox by default. The complete list of pinned public keys can be found on the Mozilla Wiki. In addition, sites must advertise to users that they support the Public Key Pinning Extension.
The PDF reader included in Firefox was updated to version 1.0.907, bringing better stability and new features when opening a document. The handling of dynamic style changes has also been improved to increase responsiveness.
Firefox Marketplace, the home of Open Web Apps, now lets you filter apps for desktop systems (Linux, Windows, Mac) so you know which ones you can install. When you visit the Marketplace, choose Desktop Apps; to install one, click Free or Install and accept the installation.
To run a Web App, look for it in your system menu (just like any other application). From the Marketplace you can also open the app by going to your Apps section and simply clicking Open.
Meanwhile, developers can now filter CSS styles and use WebSocket in Workers.
For Android we have:
Other news:
If you want to know more, you can read the release notes (in English).
You can get this version from our Downloads section, in Spanish and English, for Linux, Mac, Windows and Android. Remember that to browse through proxy servers you must set the preference network.negotiate-auth.allow-insecure-ntlm-v1 to true in about:config.
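If you would rather not dig through about:config, the same preference can be set from a user.js file in your Firefox profile (a sketch; the profile folder location varies by system):

user_pref("network.negotiate-auth.allow-insecure-ntlm-v1", true);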
http://firefoxmania.uci.cu/comparte-llama-y-juega-en-firefox/
|
Soledad Penades: Moving to the evangelism team |
As of yesterday I am on the evangelism team at Mozilla, also known as tech evangelism / dev rel / what have you. Essentially: spread the word about all the amazing stuff in Mozilla products, and help people build awesome stuff on the Web.
There’s lots of things we want to do, and I’m excited! I also have to go to the Web Components meetup, so I’ll leave you with Potch’s own announcement, as he’s moving to that team too:
Today I get to announce that I'm now a Developer for Mozilla! Yay!
— potch (@potch) January 12, 2015
http://soledadpenades.com/2015/01/13/moving-to-the-evangelism-team/
|
Gregory Szorc: Major bzexport Updates |
The bzexport Mercurial extension - an extension that enables you to easily create new Bugzilla bugs and upload patches to Bugzilla for review - just received some major updates.
First, we now have automated test coverage of bzexport! This is built on top of the version control test harness I previously blogged about. As part of the tests, we start Docker containers that run the same code that's running on bugzilla.mozilla.org, so interactions with Bugzilla are properly tested. This is much, much better than mocking HTTP requests and responses because if Bugzilla changes, our tests will detect it. Yay continuous integration.
Second, bzexport now uses Bugzilla's REST API instead of the legacy BzAPI endpoint for all but one HTTP request. This should make BMO maintainers very happy.
Third and finally, bzexport now uses shared code for obtaining Bugzilla credentials. The behavior is documented, of course, but it is not backwards compatible. If you were using some old configuration values, you will now see warnings when running bzexport. These warnings are actionable, so I shouldn't need to describe them here.
Please obtain the new code by pulling the version-control-tools repository. Or, if you have a Firefox clone, run mach mercurial-setup.
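Concretely, that is something like the following (both directory locations are assumptions; use wherever your clones live):

$ cd ~/version-control-tools && hg pull -u
$ cd ~/mozilla-central && ./mach mercurial-setup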
If you find any regressions, file a bug in the Developers Services :: Mercurial: bzexport component and have it depend on bug 1033394.
Thanks go out to Steve Fink, Ed Morley, and Ted Mielczarek for looking at the code.
http://gregoryszorc.com/blog/2015/01/13/major-bzexport-updates
|
Armen Zambrano: Name wanted for a retrigger-based bisection tool |
http://feedproxy.google.com/~r/armenzg_mozilla/~3/dpfKaP9L-kI/name-wanted-for-retrigger-based.html
|
Yunier José Sosa Vázquez: Firefox OS in 2015, More Devices and Surprises |
A happy and prosperous New Year 2015 to all Firefox lovers and to everyone who visits Firefoxmanía, on behalf of the Mozilla community in Cuba. 2014 was a great year for Firefox OS: the launches in different countries show the interest of carriers and manufacturers in an alternative to what is already on the market. 2015 has no intention of falling behind, and will bring the materialization of proposals made last year.
In 2014, Mozilla and Panasonic committed to developing a Smart TV running Firefox OS. Nothing more was said on the subject for the rest of the year, until a prototype of the interface was shown at an internal Mozilla event. At this week's CES 2015, Panasonic confirmed that 4K smart TVs running Firefox OS will be available this year, so we will soon be able to get a TV with this operating system.
In 2014 we also learned of an independent company, Matchstick, which proposed a device that plugs into your TV and, running Firefox OS, shares screens over WiFi from other devices to the TV. The proposal went public on Kickstarter, where it got funded, and in early 2015 the first Matchstick devices should ship worldwide.
Firefox OS was born as an operating system for low-end smartphones, with that specific market in mind. In the last months of 2014, however, projects began to appear that used Firefox OS in different ways, applying it to fields like electronics, open hardware and the Internet of Things, which is quite a surprise.
At the Mozilla Festival, Raspberry Pi boards were shown running a version of Firefox OS. The Raspberry Pi is open hardware used to read sensors, control motors or LEDs, and teach basic programming. Much remains to be done in 2015 to make this port fully viable.
There is also progress in applying Firefox OS to the Internet of Things. In an earlier post we talked about the Open Web Board, hardware developed in Japan and presented together with a development environment for programming certain operations between the circuits of a house.
In the video published at the beginning of the article, you can see Firefox OS concepts on smartwatches and washing machines.
We also know of other experiments in Europe (Gonzo, from Telenor) and Latin America (FoxXapp) along the same lines, so we will see plenty of news around this topic next year.
And of course, we have not forgotten that Firefox OS is an operating system for smartphones. It is almost certain that we will see new phones at the Mobile World Congress in Barcelona in March 2015. There we will surely see version 2 of Firefox OS completely finished.
We really liked the camera app, because it offers a preview of the photos and videos you have taken, a grid for framing shots, and a focus option.
The new version 2 interface does not fully convince us: the buttons are very large, at 3 per row; our only consolation is that the icon size and the number per row can be changed from the settings app.
You can now also browse the Web no matter what you are doing in an app; just tap the notification bar and you can browse the Internet.
We think we would value it even more if video calls could be made from the phone with Firefox Hello. And of course, Firefox OS 2 already runs on some tablets, which we hope will be available for purchase in 2015.
Last but not least, this new Firefox OS debuts Firefox Accounts, which will let us sync browsing history data and locate our device over the Internet. This will turn Mozilla into a cloud service provider.
We hope 2015 will be a good year in every sense, for Firefox OS and for Mozilla in general.
Source: Mozilla Hispano
http://firefoxmania.uci.cu/firefox-os-en-2015-mas-dispositivos-y-sorpresas/
|
David Rajchenbach Teller: Je Suis Charlie, but I Am Vigilant |
(This text was initially written for the French-speaking Mozilla community. Most members of the community haven’t had a chance to review or sign it yet.)
I am Charlie. Some of us grew up with Cabu’s children’s cartoons or Wolinski’s willies. Some of us laughed at Charb’s cover pages, others much less, and yet others had never even heard of Charlie Hebdo. Despite our differences, from the bottom of our hearts, we are with those who defend Free Speech: the right to discuss, to draw, to make people laugh or cringe.
I am a Cop. Some among us work directly with law enforcement, or ensure the online safety of individuals or organizations. Some of us make their voice heard when legal or executive powers around the world decide that security, convenience or economic interests matter more than the rights of users. All, we salute the police officers murdered or wounded these last few days as they attempted to save innocents.
I am Jew, or Muslim, or Anything else. Some among us are Jewish, or Muslim, or Christian, or anything else, and, frankly, most of us don’t care who is what. All, we are horrified that, in the 21st century, extremists may still decide to murder innocents, solely because they might be Jewish and because they had decided to go to the grocery store. All, we are appalled that, in the 21st century, extremists may still decide to attack innocents, just because they might be Muslim, through threats, physical violence or through their places of worship. All, we are shocked whenever opportunists praise murders or violence, or call for hatred of one group or another.
I am Collateral. Before we are the Mozilla Community, we are a community of individuals. Any one of us could have been at the front desk of this building, or on the path of that car, hostage or collateral kill of the assassins. Our minute of silence is for the anonymous ones, too.
I am Vigilant. Some of us believe that strong and immediate measures must be taken. Others prefer to wait until the emotion has passed before we can think of an appropriate response. All, we wait to see how the attacks of January 7th and January 9th 2015 will change our society. All, we remain vigilant, to make sure that, on top of the blood of the dead, our society does not choose to sacrifice Human Rights, Free Speech and Privacy, in the name of a securitarian ideology or opportunistic economical interests.
I am the French-speaking Mozilla Community.
Text edited by myself. List of signatures of the French version.
https://dutherenverseauborddelatable.wordpress.com/2015/01/13/je-suis-charlie-but-i-am-vigilant/
|
Mozilla Release Management Team: Firefox 2015 release schedule |
A final 2015 schedule for Firefox (Desktop, Mobile and ESR) has been defined.
Release owners can be found on the Mozilla wiki but might change during 2015.
Firefox version | Release date
---|---
Firefox 43 | 2015-12-15
Firefox 42 | 2015-11-03
Firefox 41 | 2015-09-22
Firefox 40 | 2015-08-11
Firefox 39 | 2015-06-30
Firefox 38 (ESR base) | 2015-05-19
Firefox 37 | 2015-04-07
Firefox 36 | 2015-02-24
Firefox 35 | 2015-01-13
As usual, Desktop, Mobile and ESR are going to be released on the same day.
A calendar containing only the date of the release and the merge is available under XML, iCal and HTML.
In parallel, the detailed calendar is published under various forms: XML, iCal and HTML.
http://release.mozilla.org/planning/2015/01/13/release-schedule.html
|
Daniel Stenberg: My first year at Mozilla |
On January 13th 2014 I started my first day at Mozilla, exactly one year ago today.
It still feels like it was just a very short while ago and I keep having this sense of being a beginner at the company, in the source tree and all over.
It has been one year of networking code work that, at least during some periods, has not progressed as quickly as I would have wished, and I’ve had some really hair-tearing problems and challenges that have taken sweat and tears to get through. But I am getting through, and I’m enjoying every (oh well, let’s say almost every) moment.
During the year I’ve had the chance to meetup with my team mates twice (in Paris and in Portland) and I’ve managed to attend one IETF (in London) and two special HTTP2 design meetings (in London and NYC).
openhub.net counts 47 commits by me in Firefox, and that feels like a high count. Bugzilla has tracked activity by me in 107 bug reports through the year.
I’ve barely started. I’ll spend the next year as well improving Firefox networking, hopefully with a higher turnout this year. (I don’t mean to make this sound as if Firefox networking is just me, I’m just speaking for my particular part of the networking team and effort and I let the others speak for themselves!)
Onwards and upwards!
http://daniel.haxx.se/blog/2015/01/13/my-first-year-at-mozilla/
|
Daniel Stenberg: My table tennis racket sized phone |
I upgraded my Nexus 5 to a Nexus 6 the other day. It is a biiiig phone, and just to show you how big I made a little picture showing all my Android phones so far using the correct relative sizes. It certainly isn’t very far away from a table tennis racket in size now. My Android track record so far goes like this: HTC Magic, HTC Desire HD, Nexus 4, Nexus 5 and now Nexus 6.
As shown, this latest step is probably the biggest relative size change in a single go. If the next step were as big, imagine the size that would require! (While you think about that, I’ve already done the math: the 6 is 159.3 mm tall, 15.5% taller than the 5’s 137.9 mm, so adding 15.5% to the Nexus 6 ends up at 184 mm – only 16 mm shorter than a Nexus 7 in portrait mode… I don’t think I could handle that!)
After the initial size shock, I’m enjoying the large size. It is a bit of a clunker to cram down into my left front-side jeans pocket where I’m used to carrying my device. It is still doable, but not as easy as before, and it easily gets uncomfortable when sitting down. I guess I need to sit less or change my habit somehow.
This largest phone ever ironically switched to the smallest SIM card size so my micro-SIM had to be replaced with a nano-SIM.
Not a single non-Google app got installed on my new device in the process. I strongly suspect the culprit was that “touch the back of another device to copy from” step, because it didn’t work at all – and when it failed, it did not offer to restore a copy from backup, which I later learned it does if you skip the touch-back step. I ended up manually re-installing my additional 100 or so apps…
My daughter then switched from her Nexus 4 to my (by then) clean-wiped 5. For her, we skipped that broken back-touch process and she got a nice backup from the 4 restored onto the 5. But she got another nasty surprise: basically over half of her contacts were just gone when she opened the contacts app on the 5, so we had to manually go through the contact list on the old device and re-add them into the new one. The way we did (not even do) it in the 90s…
The Android device installation (and data transfer) process is not perfect yet. Although my brother says he did his two upgrades perfectly smoothly…
http://daniel.haxx.se/blog/2015/01/13/my-table-tennis-racket-sized-phone/
|
Ian Bicking: Being A Manager Is Lonely |
Management is new for me. I have spent a lot of time focusing on the craft of programming, now I focus on the people who focus on the craft of programming.
During the fifteen years I’ve been participating in something I’ll call a developer community, I’ve seen a lot of progress. Sometimes we wax nostalgic with an assertion that no progress has been made… but progress has been made. We, as professionals, hobbyists, as passionate practitioners understand much more about how to test, design, package, distribute, collaborate around code. And just about how to talk about it all.
I am a firm believer that much of that progress is due to the internet. There were technological advancements, sure. And there have been books teaching practice. But that’s not enough. There were incredible ideas about programming in the 70s! But there wasn’t the infrastructure to help developers assimilate those ideas.
I put more weight on people learning than on people being taught. If the internet were just a good medium for information dispersal – a better kind of book – then that would be nice, but not transformational. The internet is more than that: it’s a place to discuss, and disagree, and watch others discussing. You can be provocative, and then step back and take on a more conservative opinion – a transformation most people would be too shy to commit to print. (As if a substantial portion of people have ever had the option to consider what they want to commit to print!)
I think a debate is an opportunity; seldom an opportunity to convince anyone else of what you think, but a chance to understand why you think what you do, to come to a more mature understanding, and maybe create a framework for future changes of opinion. This is why I bristle at the phrase “just choose the right tool for the job” – this phrase is an attempt to shut down the discussion about what the right tool for the job is!
This is a long digression, but I am nostalgic for how I grew into my profession. Nostalgic because now I cannot have this. I cannot discuss my job. I cannot debate the details. I cannot tell anecdotes to elucidate a point. I cannot discuss the policies I am asked to implement – the institutional instructions applied to me and through me. I can only attempt to process my experiences in isolation.
And there are good reasons for this! While this makes me sad, and though I still question if there is not another way, there are very good reasons why I cannot talk about my work. I am in a leadership position, even if only a modest and subordinate leader. There is a great deal of potential for collateral damage in what I say, especially if I talk about the things I am thinking most about. I think most about the tensions in my company, interpreting the motivations of the leadership in the company, I think about the fears I sense in my reports, the unspoken tensions about what is done, expected, aspired to. I can discuss this with the individuals involved, but they are the furthest thing from a disinterested party, and often not in a place to develop collaborative wisdom.
This is perhaps unfair. I work with very thoughtful people. Our work is grounded in a shared mission, which is a powerful thing. But it’s not enough.
Are we, as a community of managers (is there such a thing?) becoming better? Yes, some. There are management consultants and books and other material about management, and there is value in that. But it is not a discussion, it is not easy to assimilate. I don’t get to interact with a community of peers.
On the topic of learning to manage, I have listened to many episodes of Manager Tools now. I’ve learned a lot, and it’s helped me, even if they are more authoritarian than makes me comfortable. I’m writing this now after listening to a two part series: Welcome To They: Professional Subordination and Part 2.
The message in these podcasts is: it is your responsibility as a manager to support the company’s decisions. Not just to execute on them, but to support them, to communicate that support, and if you disagree then you must hide that disagreement in the service of the company. You can disagree up — though even that is fraught with danger — but you can’t disagree down. You must hold yourself apart from your team, putting a wall between you and your team. To your team you are the company, not a peer.
There is a logical consistency to the argument. There is wisdom in it. The impact of complaints filtering up is much different than the impact of complaints filtering down. In some sense as a manager you must manufacture your own consensus for decisions that you cannot affect. You are probably doing your reports a favor by positively communicating decisions, as they will be doing themselves a favor by positively engaging with those decisions. But their advice is clear: if you are asked your opinion, you must agree with the decision, maybe stoically, but you must agree, not just concede. You must speak for the company, not for yourself.
Fuck. Why would I want to sign up for this? The dictate they are giving me is literally making me sad. If it didn’t make any sense then I might feel annoyed. If I thought it represented values I did not share then I might feel angry. But I get it, and so it makes me sad.
Still, I believe in progress. I believe we can do better than we have in the past. I believe in unexplored paths, in options we aren’t ready to compare to present convention, in new ways of thinking about problems that break out of current categories. All this in management too – which is to say, new ways to form and coordinate organizations. I think those ideas are out there. But damn, I don’t know what they are, and I don’t know how to find out, because I don’t know how to talk about what we do and that’s the only place where I know how to start.
http://www.ianbicking.org/blog/2015/01/being-a-manager-is-lonely.html
|
Joshua Cranmer: Why email is hard, part 8: why email security failed |
At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.
Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others fail?
DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].
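To make that one-paragraph description concrete, here is roughly what the pieces look like, with all values illustrative. The outgoing server adds a header along these lines:

DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=sel1;
 c=relaxed/relaxed; h=from:to:subject:date;
 bh=<hash of the body>; b=<signature over bh and the listed headers>

and the recipient server looks up the public key in DNS under the selector named in s=:

$ dig +short TXT sel1._domainkey.example.com
"v=DKIM1; k=rsa; p=MIGfMA0GCSq..."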
The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriads of cryptography enthusiast mailing lists, which often like to partake in propositions of new end-to-end encryption of email paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I see it unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.
The first weakness, and one I've harped about many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to take actions out-of-band of email to work properly.
Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).
Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish and another one to find keys. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.
Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: people who lose their private keys or key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.
Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser known is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 in a single folder—would still prefer to not have to transfer all their mail even on desktops.
The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.
When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.
This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!
[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.
http://quetzalcoatal.blogspot.com/2015/01/why-email-is-hard-part-8-why-email.html
|