Eitan Isaacson: Am I Vision Impaired? Who Wants to Know? |
There has been discussion recently about whether websites should have the ability to detect whether a visitor is using a screen reader. This was sparked by the most recent WebAIM survey, which highlights that a clear majority of users would indeed be comfortable divulging that information to sites.
This is not a new topic; there is a spec in the works that attempts to balance privacy, functionality, and user experience. It is also a dilemma we face as implementers, and we have discussed it extensively in bug reports. Even my esteemed colleague Marco put down his thoughts on the topic.
I have mostly felt confusion about this question. Not about the privacy or usability concerns, but really about the semantics. I think the question “do you feel comfortable disclosing your assistive technology to the web” could be phrased in a dozen ways, each time exposing bias and assumptions about the web and computing.
The prevailing assumption is that the World Wide Web is a geo-spatial reality loosely based on the physical world. Just like a geographical site, a site on the Web resides in a specific locality. The user is a “visitor” to the site. The “site” metaphor runs very deep. When I was first shown the Web, in 1994, I remember visiting the Louvre, seeing the Mona Lisa, and signing a guest book. In this world, the browser is a vehicle that takes you to distant and exotic locations. Their names suggested it: Internet Explorer, Netscape Navigator, Safari, Galeon, and the imperialistic Konqueror.
This paradigm runs deep, even though we use the Web in a very different way today, and a new mental model of the Web is prevailing.
When you check your mail on Gmail, or catch up on Twitter, you are using an application. Your browser is just a shell. In your mind, you are not virtually traveling to Silicon Valley to visit a site. You feel ownership over those applications. It is “my” Twitter feed; that is “my” inbox. You will not sign a guest book. Just look at the outcry every time Facebook redesigns its timeline, or Google does a visual refresh of its apps. Users get irate because they see this as an encroachment on their space. They were happy, and then some ambitious redesign is forcing them to get reacquainted with something they thought was theirs. That is why market-speak invented the “cloud”, which obscures the geography of websites and reinforces the perception that the user should stop worrying and love the data centers behind their daily life.
Depending on how you see the web at any given moment, you may view the question of assistive technology detection differently.
If you are applying for a loan online, you are virtually traveling to a loan office or bank. Whether you have a disability or not is none of their business, and if they take note of it while considering your application for a loan that would be a big problem (and probably illegal). In other words, you are traveling to a site. Just like you would put on a pair of pants or skirt before leaving the house, you expect your browser to be a trusty vehicle that will protect you from the dangers and exposure in the Wide World of the Web.
On the other hand, you may use Microsoft’s Office 365 every day for your job or studies. It really is just an office suite not unlike the one you used to install on your computer. In your mind, you are not traveling to Redmond to use it. It is just there, and they don’t want you to think about it any further. The local software you run has the capability to optimize itself for its environment and provide a better experience for screen reader users, and there is no reason why you would not expect that from your new “cloud office”.
The question of AT detection is really more about perceived privacy than actual privacy. If you have had a smartphone in the last five years, you probably got frustrated with the mobile version of some website and downloaded the native version from the app store. Guess what? You just waived your privacy and disclosed any kind of AT usage to the app and, in turn, to the website you frequent. This whole “the Web is the platform” thing? It is a two-way street. There is no such thing as an exclusively local app anymore; they are all web-enabled. When you install and run a “native” app, you can go back to that original mental model of the web and consider your actions as visiting a site. You may as well sign their guest book while you’re at it.
In fact, “local” apps today on iOS or Android may politely ask you to use your camera or access your address book, but profile your physical impairments? They don’t need special permission for that. If you installed it, they already know.
In that sense, the proposed IndieUI spec offers more privacy than is currently afforded on “native” platforms by explicitly asking the user whether to disclose that information.
I have no simple answers. Besides being an implementer, I don’t have enough of a stake in this. But I would like to emphasize a cliche that I hear over and over, and have finally embraced: “the Web is the platform”. The web is no longer an excursion and the browser is not a vehicle. If we truly aspire to make the web a first class platform, we need to provide the tools and capabilities that have been taken for granted on legacy platforms. But this time, we can do it better.
http://blog.monotonous.org/2014/03/17/am-i-vision-impaired-who-wants-to-know/
|
Benjamin Kerensa: Sponsor Debconf14 |
DebConf14 is just around the corner, and although we are making progress on securing sponsorships, there is still a long way to go to reach our goal. I’m writing this blog post to drum up some more sponsors. So if you are a decision maker at your company, or know one, and are interested in supporting DebConf14, please check out the DebConf14 Sponsor Brochure and then reach out to us at sponsors@debconf.org. It goes without saying that we would love to fill some of the top sponsorship tiers.
I hope to see you in August in Portland, OR for Debconf14!
DebConf is the annual Debian developers meeting. An event filled with discussions, workshops and coding parties – all of them highly technical in nature. DebConf14, the 15th Debian Conference, will be held in Portland, Oregon, USA, from August 23rd to 31st, 2014 at Portland State University. For more detailed logistical information about attending, including what to bring, and directions, please visit the DebConf14 wiki.
This year’s schedule of events will be exciting, productive and fun. As in previous years (final report 2013 [PDF]), DebConf14 features speakers from around the world. Past Debian Conferences have been extremely beneficial for developing key components of the Debian system, infrastructure and community. Next year will surely continue that tradition.
http://feedproxy.google.com/~r/BenjaminKerensaDotComMozilla/~3/Po5y13tCbc8/sponsor-debconf14
|
Carla Casilli: A foundational badge system design |
The last two years or so have found me investigating and contemplating many different types of badge systems. Along the way I’ve been wrestling with considerations of badge types, potential taxonomies, and conceptual approaches; feeling my way around a variety of assessment types including summative, formative and transformative. Working in badge system design rewards a person with a continuing sense of adventure and opportunity.
A badge system structure for many
After much thought and many contemplative examinations, I’ve developed an archetypal badge system structure that I’m happy to recommend to the open badges community. Here are the many reasons why I think you’ll want to implement it.
Introducing the 3 Part Badge System
This badge structure is the one that I developed for the Mozilla badge system that we are in the process of building. I’m calling it the 3 Part Badge System (3PBS). It’s composed of three interlocking parts and those three parts create a flexible structure that ensures feedback loops and allows the system to grow and evolve. Or breathe. And by breathe, I mean it allows the system to flex and bow as badges are added to it.
While some community member organizations have expressed a desire for a strict, locked-down, top-down badge system to—in their words—guarantee rigor (and you already know my thoughts on this), this system supports that request but is also designed to include active participation and badge creation from the bottom up. I’d say it’s the best of both worlds but then I’d be leaving out the middle-out capacity of this system. So in reality, it’s the best of all possible worlds.
This approach is a vote for interculturism—or the intermingling and appreciation of cultures—in badge systems. Its strength arises from the continuous periodic review of all of the badges, in particular the team / product badges as well as the individual / community badges.
Don’t tell me, show me
It’s easier to talk about this system with some visuals so I’ve designed some to help explain it. Here is the basic 3 part structure: Part 1) company / organization badges; Part 2) team / product badges; and Part 3) individual / community badges. This approach avoids a monocultural hegemony.
Part 1: Company / organization badges
Many companies and organizations have specific needs and concerns about branding. This system addresses those concerns directly. In this proposed system, an advisory group defines, creates, and governs the highest level of badges—the company / organization badges—providing control over the all-important corporate or academic brand. While not all systems have such strict brand requirements, this approach allows for conceptual levels of badges to be created while interacting in organic and meaningful ways with other types of badges. The advisory group creates and vets these badges based on the organization’s foundational principles and values. The company / organization badges transmit the most important values of an institution, and they address organizational concerns regarding brand value and perceived rigor.
Part 2: Team / product badges
Few organizations exist that do not have some middle layer accomplishing the bulk of the work of the organization; the 3PBS proposal recognizes the team / product groups as necessary and important partners. In addition to acknowledging the contributions of this collection of folks, 3PBS endows them with the ability to respond to their public through badges. Different teams and product groups can clearly and unequivocally communicate their closely held qualities and values through the creation and issuance of their own badges. These badges are created entirely independently of the Part 1 company / organization badges. In a bit we’ll discuss how the team / product badges play a role in influencing other aspects of the 3PBS.
Part 3: Individual / community badges
So your organization is composed only of management and teams? Of course not. The 3PBS honors the folks who are on the front lines of any organization—the community—by empowering them to define their values internally as well as externally. These badges operate outside the requirements that define the company / organization badges and the team / product badges. The community badges can be created by anyone within the community and do not hew to the visual requirements of the other two subsystems. What this means is that an individual or community can create any type of badge they like. In other words, it provides the ability to publicly participate—to have a voice—in the system.
How the three different parts influence one another in the 3 Part Badge System
How do these parts interact? In order to communicate how these subsystems can affect each other, I’ve created some color based graphics. You’ve already seen the first one above that describes the initial system.
But first a little basic color theory to ground our understanding of how these subsystems work together to create a dynamic and powerful system. The basic 3 part structure graphic above uses what are known as primary colors, from the Red, Yellow, Blue color model. Centuries of art are based on these three colors in this color model. The following graphics further explore the RYB color model and move us into the world of secondary colors. Secondary colors result from the mixing of two primary colors: mixing red and yellow results in orange; mixing yellow and blue results in green; mixing blue and red results in purple. Now that we’ve established how the color theory used here works, we can see how the parts represented by these colors indicate intermixing and integration of badges.
Individual / community badges influence team / product badges
The 3PBS concept relies on badge development occurring at the individual and community level. By permitting and even encouraging community and individual level badging, the system will continuously reform itself, adjusting badges upward in importance in the system. That’s not to say that any part of this system is superior to another, merely that these parts operate in different ways for different audiences. As I wrote in my last post, meaning is highly subjective and context-specific.
This graphic illustrates the team / product created and owned badges assimilating some badges from the individual / community created and owned badges. The graphic also indicates that the company / organization badges can be held separate from this influence—if so desired.
Periodic review by the team / product groups of the individual and community badges likely will reveal patterns of use and creation. These patterns are important data points worth examining closely. Through them the larger community reveals its values and aspirations. Consequently, a team or product group may choose to integrate certain individual / community badges into their own badge offerings. In this way a badge begins to operate as a recognized form of social currency, albeit a more specific or formalized currency. The result of this influencing nature? The team and product group badge subsystem refreshes itself by assimilating new areas of interest pulled directly from the larger, more comprehensive and possibly external community.
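For readers who like to see structure as data, here is a tiny sketch of the assimilation step just described. The dictionary layout and badge names are my own illustration, not part of the 3PBS proposal itself.

# Illustrative only: model the three parts as simple collections of badge names.
badge_system = {
    "company_organization": set(),   # Part 1: governed by the advisory group
    "team_product": set(),           # Part 2: created independently by teams / product groups
    "individual_community": set(),   # Part 3: created by anyone in the community
}

def adopt(badge, source, destination):
    # Periodic review: a badge from one part is integrated into another part's
    # offerings. The original badge stays where it was created.
    if badge in badge_system[source]:
        badge_system[destination].add(badge)

badge_system["individual_community"].add("helpful-reviewer")
adopt("helpful-reviewer", "individual_community", "team_product")
print(badge_system["team_product"])  # {'helpful-reviewer'}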
Team / product badges influence company / organization badges
Company and organization level badges operate in precisely the same way, although the advisory group who is responsible for this level of badge can look across both the team / product badges as well as the individual / community badges. That experience will look something like this in practice.
Periodic review of the team / product badges by the advisory group responsible for company and organization badges may reveal duplicates as well as patterns. Discussion between the advisory group and the teams responsible for those badges may indicate that a single standard badge is appropriate. Considering that teams and product group badges are created independently by those groups, apparent duplication among teams may not necessarily be a bad thing: context is all important in the development and earning of badges. That said, examination and hybridization of some badges from the team and product groups may create a stronger, more coherent set of company and organization level badges.
Individual / community badges influence company / organization badges
In addition to being able to examine and consider team and product level badges, the advisory group responsible for the company / organization badges can also find direct inspiration from individual and community created badges. Since there are few to no rules that govern the creation of the individual / community created and owned badges, insightful, dramatic, and wildly creative badges can occur at this level. Because they come through entirely unfiltered, those sorts of badges are precisely the type to encourage rethinking of the entirety of the 3PBS.
Here we see how the individual / community created and owned badges can significantly color the company / organization badges. Since the company / organization badges communicate universal values, it’s vital that those values remain valid and meaningful. Incorporating fresh thinking arising from individual and community badges can help to ensure that remains true.
Three parts, one whole
So, if we loop back to the original system, prior to the (color) interactions of one part to another, we can see how each part might ultimately influence one another. This is the big picture to share with interested parties who are curious as to how this might work.
So, that’s the 3 Part Badge System in a nutshell. Looking forward to hearing your thoughts.
—-
Much more soon. carla [at] badgealliance [dot] org
http://carlacasilli.wordpress.com/2014/03/17/a-foundational-badge-system-design/
|
Zack Weinberg: HTTP application layer integrity/authenticity guarantees |
Note: These are half-baked ideas I’ve been turning over in my head, and should not be taken all that seriously.
Best available practice for mutually authenticated Web services (that is, both the client and the server know who the other party is) goes like this: TLS provides channel confidentiality and integrity to both parties; an X.509 certificate (countersigned by some sort of CA) offers evidence that the server is who the client expects it to be; all resources are served from https:// URLs, thus the channel’s integrity guarantee can be taken to apply to the content; the client identifies itself to the server with either a username and password, or a third-party identity voucher (OAuth, OpenID, etc), which is exchanged for a session cookie. Nobody can impersonate the server without either subverting a CA or stealing the server’s private key, but all of the client’s proffered credentials are bearer tokens: anyone who can read them can impersonate the client to the server, probably for an extended period. TLS’s channel confidentiality ensures that no one in the middle can read the tokens, but there are an awful lot of ways they can leak at the endpoints. Security-conscious sites nowadays have been adding one-time passwords and/or computer-identifying secondary cookies, but the combination of session cookie and secondary cookie is still a bearer token (possibly you also have to masquerade the client’s IP address).
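To make the bearer-token problem concrete, here is a minimal sketch of the alternative shape being discussed: the client proves possession of a per-session key on every request instead of sending a credential that can simply be replayed. This is my illustration, not the proposal in the notes below; it uses an ordinary symmetric HMAC (the post is ultimately after an asymmetric primitive), and the header names are made up.

import hashlib
import hmac
import time

# Hypothetical per-session key established at login and never sent over the wire again.
SESSION_KEY = b"example-session-key-negotiated-at-login"

def sign_request(method, path, body=b""):
    # Client side: compute a MAC over the request so no bearer credential is transmitted.
    timestamp = str(int(time.time()))
    message = b"\n".join([method.encode(), path.encode(), timestamp.encode(), body])
    mac = hmac.new(SESSION_KEY, message, hashlib.sha256).hexdigest()
    return {"X-Request-Timestamp": timestamp, "X-Request-MAC": mac}

def verify_request(method, path, body, headers):
    # Server side: recompute the MAC and compare in constant time.
    message = b"\n".join([method.encode(), path.encode(),
                          headers["X-Request-Timestamp"].encode(), body])
    expected = hmac.new(SESSION_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Request-MAC"])

headers = sign_request("GET", "/inbox")
print(verify_request("GET", "/inbox", b"", headers))  # True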
Here are some design requirements for a better scheme:
The cryptographic primitives we need for this look something like:
And here are some preliminary notes on how the protocol might work:
I know all of this is possible except maybe the dirt-cheap asymmetric MAC, but I don’t know what cryptographers would pick for the primitives. I’m also not sure what to do to make it interoperable with OpenID etc.
https://www.owlfolio.org/htmletc/http-application-layer-integrityauthenticity-guarantees/
|
Frédéric Harper: No speakers, no conferences. |
Creative Commons: http://j.mp/1kX36Bk
No speakers, no conferences. No one can argue against this, so that is why I think speakers deserve more respect for their work. Don’t get me wrong, I have had a lot of great experiences as a speaker, but I have seen many situations that could have been avoided. In the end, it should lead to a better experience for everyone: the speakers, the organizers, and the attendees.
I wrote a post on how to be a good attendee; now it’s about being a good organizer… Keep in mind that this post focuses on conferences, but everything applies to user groups or any other events with speakers.
I planned my schedule to be sure I was able to submit my talk proposals before the deadline of your call for speakers. I don’t think it’s fair to extend that deadline because you didn’t get as many proposals as you wanted. I might have been able to do other important work if I had known that the deadline would be extended.
If you took the time to ask me to submit a talk to your conference, it’s because you want me to speak at your event, no? First, be clear on what exactly you are looking for. If you don’t like the abstracts I submitted, let me know so we can work together to make it happen.
Your call for speakers is done, it’s now time to select them. Even if we are super excited to go to your event, we can’t block our calendars ad vitam aeternam just for you! We have meetings, work to get done, other conferences, and a personal life. The sooner we know if we are accepted (or not), the better. Also, if I’m not selected, I may find other stuff or conferences to go to.
I see this too often now: conferences favour international speakers over local ones. As the conference becomes more popular or bigger, it seems more prestigious to do so. Don’t forget the people who have been there since the beginning, if they are good speakers, of course. You also have some local superstars; why not add them to the schedule?
I saw this too often with user groups: last-minute organization and promotion. Make the speaker’s time and effort worth it by making an effort on your side to maximize his or her presence. You have a much higher chance of getting a full room of attendees if you start promoting the event at least one month ahead rather than during the week before. There is nothing more annoying than speaking to an empty room.
I won’t add more meat to this point as it speaks for itself, and Remy Sharp did a great blog post on the topic. My friend Christian Heilmann also did a great post about how speaking is sponsoring your event. Even better, why not pay the speaker for his time? In the end, would you work for free?
I know, you need to find sources of revenue, but giving a speaking slot to someone who pays for it means you don’t value your audience. Isn’t it your role to make sure your attendees get the best speakers with the most interesting topics out there? Select the speakers because of their talent and/or the topics they will talk about, not the money they are willing to pay. With a policy like that, you are more likely to get better speakers, as speakers from potential sponsors need to be picked for their talent, not just because they are available.
When I’m going to your conference, it’s to speak, but also to learn from others and, most important of all, to network. You should expect every speaker to be present before and after their talk. If you pay for my travel and expenses, don’t do it only for the day I’m speaking: I’ll be there for the whole conference. I would also appreciate it if you got me in one day before, so I can recover from jet lag if it’s in a different time zone.
You may have a testing session the day before or in the morning, but there is nothing like testing right before your talk. I need time to plug my computer, test my remote, do an audio test, check my slides from the back of the room… A lot of things may have changed between the testing session, and the time of my talk. If there is any problem, we’ll have time to fix it.
If you gave me one hour for my presentation, don’t tell me, for any reason, that I now only have thirty minutes. As a professional speaker, I built my material just for your audience, and to get the most out of the time I’ll have. It’s not easy to change my content like that. Also, be sure that the previous speaker finishes on time, and don’t cut me off before I’ve finished my talk.
The conference is done; you are all tired. I know it’s a lot of work; I have organized many events, user groups, and conferences myself. I still think it’s not done yet: take the time to follow up with your speakers. Thank them for their work, share the feedback you got, and let them know if you want them back for the next edition.
Again, all these points come back to one simple rule: respect your speakers.
--
No speakers, no conferences. is a post on Out of Comfort Zone from Frédéric Harper
Related posts:
|
Florian Scholz: Are we documented yet? |
Every wiki and documentation site fights against a big problem: Content outdates pretty fast as the software or the technologies evolve. This is more than true for MDN.
When we switched to the rapid release model, the way we document on MDN changed as well. A process for having release notes every six weeks was set up by Jean-Yves Perrier (teoli), and our old tools, like the "DocTracker" that gets us dev-doc-needed bugs per release, were helpful. However, writing detailed documentation for all these changes within six weeks is no longer achievable. There are a lot more changes in six weeks today than there were in six weeks two years ago.
In the old days, "yes, we are documented yet!" meant that the dev-doc-needed bug counter was near zero for a given release.
I thought about building new tools that could answer how we are doing today. Fortunately, Kuma and KumaScript are open source and hackable, so I was able to build new tools directly into MDN.
So, how can we measure if we are documented yet, today?
The MDN documentation team switched to concentrating on content topics rather than documenting everything belonging to a release. Several Topic Drivers have started to maintain specific parts of MDN.
If you look at a content section on MDN, you can definitely identify more "health indicators" than just the dev-doc-needed bug list. To make the state of the documentation visible, we started to build documentation status pages for sections on MDN.
Let's take the JavaScript documentation status as an example to see what we currently measure.
Quality indicators:
Besides the automatically created quality indication list, there is also room for sharing what else is needed in the content section. For example, for JavaScript, we need to clean up our documentation for ECMAScript 6 and some tutorials and guides are missing, too. That information should be available in the open and shared with people interested in writing on MDN. So, if you are asking what you can do for MDN, you should find answers there!
When localizing MDN, we are speaking of 11,000 pages. This is a huge task and not all docs are worth translating. So, with the idea of content sections, we are also looking into making smaller lists of pages in need of localization.
To keep track of the localization work, a localizer needs to know if a translation is available and whether it is up to date with the English source. So, for each documentation status page there is also a localization status page.
Here is an example that lists the translation status of the JavaScript section for Japanese. Also have a look at the translation overview pages, e.g. all sections for Japanese.
With these status pages for both English documentation and the translations, a lot of work becomes visible. If you have expertise in one or more of the topic sections, please have a look and help us with documenting. If you are a localizer and would like to have a status page for your language, feel free to add one or contact us so we can help you.
In general you can find us on irc.mozilla.org in #mdn.
You can also reach me at fscholz@moco or Jean-Yves at jperrier@moco.
The first iteration of these tools is now finished. The different documentation status pages we have right now are monitoring around 40% of all MDN pages. We are looking into increasing that number.
Let's document the Open Web!
How about a nice page like arewefastyet.com or areweslimyet.com? I have built a documentation status API. Anyone want to collect data and make that happen? :-) For instance, a JSON for the JavaScript documentation status can be found here.
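As a sketch of what such a page could consume, here is a small script that fetches a documentation status JSON and prints any numeric counters it finds. The URL and the shape of the data are placeholders for the example, not the actual API.

import json
import urllib.request

# Placeholder URL; substitute the real JSON endpoint for the section you care about.
STATUS_URL = "https://example.org/mdn/doc-status/javascript.json"

def fetch_status(url=STATUS_URL):
    # Download and parse the status document.
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

def print_counters(status):
    # Print whatever numeric indicators the status document exposes,
    # e.g. pages tagged as needing review or missing tags.
    for key, value in sorted(status.items()):
        if isinstance(value, (int, float)):
            print("{}: {}".format(key, value))

if __name__ == "__main__":
    print_counters(fetch_status())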
|
William Lachance: Upcoming travels to Japan and Taiwan |
Just a quick note that I’ll shortly be travelling from the frozen land of Montreal, Canada to Japan and Taiwan over the next week, with no particular agenda other than to explore and meet people. If any Mozillians are interested in meeting up for food or drink, and discussion of FirefoxOS performance, Eideticker, entropy or anything else… feel free to contact me at wrlach@gmail.com.
Exact itinerary:
I will also be in Taipei the week of March 31st, though I expect most of my time to be occupied with discussions/activities inside the Taipei office about FirefoxOS performance matters (the Firefox performance team is having a work week there, and I’m tagging along to talk about / hack on Eideticker and other automation stuff).
|
Robert O'Callahan: Maokong |
I've been in Taiwan for a week. Last Sunday, almost immediately after checking into the hotel, I went with a number of Mozilla people to the Maokong Gondola and did a short hike around Maokong itself, to Yinhe Cave and back. I really enjoyed this trip. The gondola ride is quite long, and pretty. Maokong itself is devoted to the cultivation and vending of tea. I haven't seen a real tea plantation before, and I also got to see rice paddies up close, so this was quite interesting --- I'm fascinated by agricultural culture, which dominated so many people's lives for such a long time. Yinhe Cave is a cave behind a waterfall which has been fitted out as a Buddhist temple; quite an amazing spot. We capped off the afternoon by having dinner at a tea house in Maokong. A lot of the food was tea-themed --- deep-fried tea leaves, tea-flavoured omelette, etc. It was excellent, a really great destination for visitors to Taipei.
|
Priyanka Nag: MDN Workweek, Paris |
The Super-heroes of Mozilla
At work... in the Paris office
Time for some French wine
http://priyankaivy.blogspot.com/2014/03/mdn-workweek-paris.html
|
Ron Piovesan: Finding the market behind the numbers |
My son likes puzzles. He’s only two years old, but he loves to find the most complex puzzle in the house, take it apart, and then put it back together. Seeing a young child want to tackle a series of complex puzzle problems is an inspiration to watch as a parent.
It is also inspirational to watch as a business development professional.
I’m not much good at the types of puzzles my son likes to tackle, but I enjoy figuring out another type of puzzle, markets.
Sometimes those puzzles are technology-related; figuring out who the main vendors are in a market, how their products differentiate, and trying to boil it all down to understand who is disrupting whom.
You figure out these technology markets because you need to understand who to partner with, who to maybe buy, or who is or may become a competitor. Or maybe the right answer is to just leave that market alone.
Then there are global puzzles, which are the ones I’ve recently been spending a lot of time on. You can read reports and you can talk to experts, but it isn’t until you go out into the various markets and talk to potential customers and partners that you begin to understand a market.
Reports have numbers, markets have nuances. You need to be actively investigating a market, technology or global, to properly understand the nuances.
http://ronpiovesan.com/2014/03/15/finding-the-market-behind-the-numbers/
|
Kartikaya Gupta: The Project Premortem |
The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, [Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: "Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster."
(From Thinking, Fast and Slow, by Daniel Kahneman)
When I first read about this, it immediately struck me as a very simple but powerful way to mitigate failure. I was interested in trying it out, so I wrote a pre-mortem story for Firefox OS. The thing I wrote turned out to be more like something a journalist would write in some internet rag 5 years from now, but I found it very illuminating to go through the exercise and think of different ways in which we could fail, and to isolate the one I thought most likely.
In fact, I would really like to encourage more people to go through this exercise and have everybody post their results somewhere. I would love to read about what keeps you up at night with respect to the project, what you think our weak points are, and what we need to watch out for. By understanding each other's worries and fears, I feel that we can do a better job of accommodating them in our day-to-day work, and work together more cohesively to Get The Job Done (TM).
Please comment on this post if you are interested in trying this out. I would be very happy to coordinate stuff so that people write out their thoughts and submit them, and we post all the results together (even anonymously if so desired). That way nobody is biased by anybody else's writing.
|
William Lachance: It’s all about the entropy |
[ For more information on the Eideticker software I'm referring to, see this entry ]
So recently I’ve been exploring new and different methods of measuring things that we care about on FirefoxOS — like startup time or amount of checkerboarding. With Android, where we have a mostly clean signal, these measurements were pretty straightforward. Want to measure startup times? Just capture a video of Firefox starting, then compare the frames pixel by pixel to see how much they differ. When the pixels aren’t that different anymore, we’re “done”. Likewise, to measure checkerboarding we just calculated the areas of the screen where things were not completely drawn yet, frame-by-frame.
On FirefoxOS, where we’re using a camera to measure these things, it has not been so simple. I’ve already discussed this with respect to startup time in a previous post. One of the ideas I talk about there is “entropy” (or the amount of unique information in the frame). It turns out that this is a pretty deep concept, and is useful for even more things than I thought of at the time. Since this is probably a concept that people are going to be thinking/talking about for a while, it’s worth going into a little more detail about the math behind it.
The wikipedia article on information theoretic entropy is a pretty good introduction. You should read it. It all boils down to this formula:
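Written out, that formula is the standard Shannon entropy over the sample probabilities, which is exactly the quantity the Python code further down computes:

H(X) = -\sum_i p(x_i) \log_2 p(x_i)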
You can see this section of the wikipedia article (and the various articles that it links to) if you want to break down where that comes from, but the short answer is that given a set of random samples, the more different values there are, the higher the entropy will be. Look at it from a probabilistic point of view: suppose you take a random set of data and want to make predictions about what future data will look like. If it is highly random, it will be harder to predict what comes next. Conversely, if it is more uniform, it is easier to predict what form it will take.
Another, possibly more accessible way of thinking about the entropy of a given set of data would be “how well would it compress?”. For example, a bitmap image with nothing but black in it could compress very well as there’s essentially only 1 piece of unique information in it repeated many times — the black pixel. On the other hand, a bitmap image of completely randomly generated pixels would probably compress very badly, as almost every pixel represents several dimensions of unique information. For all the statistics terminology, etc. that’s all the above formula is trying to say.
So we have a model of entropy, now what? For Eideticker, the question is — how can we break the frame data we’re gathering down into a form that’s amenable to this kind of analysis? The approach I took (on the recommendation of this article) was to create a histogram with 256 bins (representing the number of distinct possibilities in a black & white capture) out of all the pixels in the frame, then run the formula over that. The exact function I wound up using looks like this:
import math
import numpy
from scipy import ndimage

def _get_frame_entropy((i, capture, sobelized)):
frame = capture.get_frame(i, True).astype('float')
if sobelized:
frame = ndimage.median_filter(frame, 3)
dx = ndimage.sobel(frame, 0) # horizontal derivative
dy = ndimage.sobel(frame, 1) # vertical derivative
frame = numpy.hypot(dx, dy) # magnitude
frame *= 255.0 / numpy.max(frame) # normalize (Q&D)
histogram = numpy.histogram(frame, bins=256)[0]
histogram_length = sum(histogram)
samples_probability = [float(h) / histogram_length for h in histogram]
entropy = -sum([p * math.log(p, 2) for p in samples_probability if p != 0])
return entropy
The “sobelized” bit allows us to optionally convolve the frame with a sobel filter before running the entropy calculation, which removes most of the data in the capture except for the edges. This is especially useful for FirefoxOS, where the signal has quite a bit of random noise from ambient lighting that artificially inflates the entropy values even in places where there is little actual “information”.
This type of transformation often reveals very interesting information about what’s going on in an eideticker test. For example, take this video of the user panning down in the contacts app:
If you graph the entropies of the frames of the capture using the formula above, you get a graph like this:
The Y axis represents entropy, as calculated by the code above. There is no inherently “right” value for this — it all depends on the application you’re testing and what you expect to see displayed on the screen. In general though, higher values are better as it indicates more frames of the capture are “complete”.
The region at the beginning where it is at about 5.0 represents the contacts app with a set of contacts fully displayed (at startup). The “flat” regions where the entropy is at roughly 4.25? Those are the areas where the app is “checkerboarding” (blanking out waiting for graphics or layout engine to draw contact information). Click through to the original and swipe over the graph to see what I mean.
It’s easy to see what a hypothetical ideal end state would be for this capture: a graph with a smooth entropy of about 5.0 (similar to the start state, where all contacts are fully drawn in). We can track our progress towards this goal (or our deviation from it), by watching the eideticker b2g dashboard and seeing if the summation of the entropy values for frames over the entire test increases or decreases over time. If we see it generally increase, that probably means we’re seeing less checkerboarding in the capture. If we see it decrease, that might mean we’re now seeing checkerboarding where we weren’t before.
It’s too early to say for sure, but over the past few days the trend has been positive:
(note that there were some problems in the way the tests were being run before, so results before the 12th should not be considered valid)
So one concept, at least two relevant metrics we can measure with it (startup time and checkerboarding). Are there any more? Almost certainly, let’s find them!
|
Eric Shepherd: Making MDN better by working on the weekend |
Last weekend, we had an MDN work weekend at Mozilla’s Paris space. Over the course of the three days—Friday, Saturday, and Sunday—we built code, wrote and updated documentation, and brainstormed on new and better ways to present documentation on MDN. A grand total of 34 participants (wow!) including 16 volunteers and 18 paid Mozilla staff sat down together in a big room and hacked. 11 countries were represented: France, the United States, Canada, the United Kingdom, Sweden, Italy, Spain, Poland, Germany, India, Bangladesh, and Brazil. We completed 23 projects and touched (either adding, updating, or closing) over 400 bugs.
The most important thing, to me, was the reminder that our community is a real, tangible presence in the world, rather than an ephemeral thing that flits about touching documentation from time to time. These people have real jobs, with real and important impacts on their local communities. Coming together is a chance to enhance our bond as Mozillians. That’s a great thing.
What’s also great, though, is the amazing amount of output we produced. Let’s look over some of the stuff that went on (if I leave out something you did, I apologize!).
Seven new people had the Kuma build system set up on their computers, with a virtual machine running a Kuma server up and running. Those seven people included me. In fact, I devised and submitted three patches to the Kuma platform over the weekend! They’re nothing particularly big, but it did allow closing three longstanding minor bugs. Not a big deal, but I’m proud of that anyway.
And I’m not the only one: a ton of amazing enhancements to the Kuma platform were developed over the weekend. Some are already deployed; others will not be for a while, since there are quirks to iron out.
I won’t become a huge contributor to Kuma, probably. I don’t have time. I’m too busy writing and (more often, really) organizing other writers. That’s okay, of course. I’m having my own impact on our worldwide community of Mozillians and users and developers on the World Wide Web.
http://www.bitstampede.com/2014/03/14/making-mdn-better-by-working-on-the-weekend/
|
Brian R. Bondy: Switching careers, next steps |
I made one of the hardest decisions of my life: I'll be leaving Mozilla, a company that I thought I'd be at forever. Mozilla has amazing people and culture, and is easily the best job I've ever had.
I plan to remain a contributor, both on the Code Firefox site, and on Firefox Desktop itself.
I'll be joining Khan Academy, a like minded, mission oriented, open and non-profit company.
Khan Academy provides a free, world class education, to anyone, anywhere.
I'm leaving because the Khan Academy mission speaks more to me personally.
Along with that mission comes a lot of thoughts I believe in, such as:
I'll be joining the Computer Science department at Khan Academy in particular. I'm extremely excited for this new opportunity and I can't wait to do amazing things at Khan Academy.
|
Geoffrey MacDougall: Mozilla Foundation’s CRM Plans |
During our planning for 2014, a need that came up over and over again was for better data tracking across the board. This included managing our contributor targets, engagement funnels, campaigns, partners, etc.
We decided we needed a CRM. And ‘CRM’ quickly became a dumping ground for a huge wish list of features and unmet needs.
Since then, a small group has been working to figure out what we actually need, whether it’s the same thing that other Mozilla teams need, and which of us is going to do the work to put those systems in place.
Adam Lofting, Andrea Wood, and I have come up with a framework we’re going to pursue. It splits ‘CRM’ into three functions and proposes a path forward on each. We feel this represents the best use of our resources, lets us hit our priorities, and ensures that we continue to work under a ‘One Mozilla’ model.
1.) Partner Management
2.) Campaign Management
3.) Contributor Management
Does this sound right? Are there things missing? Please ask in the comments.
http://intangible.ca/2014/03/14/mozilla-foundations-crm-plans/
|
Ben Hearsum: This week in Mozilla RelEng – March 14th, 2014 |
Major Highlights:
Completed work (resolution is ‘FIXED’):
In progress work (unresolved and not assigned to nobody):
http://hearsum.ca/blog/this-week-in-mozilla-releng-march-14th-2014/
|
Armen Zambrano Gasparnian: Running hidden b2g reftests and Linux debug mochitest-browser-chrome on tbpl |
http://armenzg.blogspot.com/2014/03/running-hidden-b2g-reftests-and-linux.html
|
Mark Reid: Scheduling Telemetry Analysis |
In a previous post, I described how to run an ad-hoc analysis on the Telemetry data. There is often a need to re-run an analysis on an ongoing basis, whether it be for powering dashboards or just to see how the data changes over time.
We’ve rolled out a new version of the Self-Serve Telemetry Data Analysis Dashboard with an improved interface and some new features.
Now you can schedule analysis jobs to run on a daily, weekly, or monthly basis and publish results to a public-facing Amazon S3 bucket.
Here’s how I expect that the analysis-scheduling capability will normally be used:
- Pick a short job name, such as slowsql. You can add your username to create a unique job name if you like (ie. mreid-slowsql).
- Choose the daily frequency for jobs whose results need to be refreshed every day, or the monthly frequency for less frequent runs.

A concrete example of an analysis job that runs using the same framework is the SlowSQL export. The package.sh script creates the code tarball for this job, and the run.sh script actually runs the analysis on a daily basis.

In order to schedule the SlowSQL job using the above form, first I would run package.sh to create the code tarball, then I would fill in the form as follows:
- Job name: slowsql
- Code tarball: slowsql-0.3.tar.gz
- Execution command: ./run.sh
- Output directory: output (this directory is created in run.sh, and data is moved here after the job finishes)
- Schedule frequency: Daily
- Timeout in minutes: 175 (typical runs during development took around 2 hours, so we wait just under 3 hours)

The daily data files are then published in S3 and can be used from the Telemetry SlowSQL Dashboard.
The job runner doesn’t care if your code uses the python MapReduce framework or your own hand-tuned assembly code. It is just a generalized way to launch a machine to do some processing on a scheduled basis.
So you’re free to implement your analysis using whatever tools best suit the job at hand.
The sky is the limit, as long as the code can be packaged up as a tarball and executed on an Ubuntu box.
The pseudo-code for the general job logic is simple: fetch and unpack the code tarball, run the execution command, then publish whatever ends up in the output directory to S3.
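A rough sketch of that loop in Python, assuming a job is described by a code tarball, an execution command, and an output directory as in the form above; the publish step is left as a stub.

import subprocess
import tarfile

def run_scheduled_job(code_tarball, execution_command, output_directory):
    # Unpack the analysis code that was uploaded with the scheduled job.
    with tarfile.open(code_tarball) as tar:
        tar.extractall(".")
    # Run whatever command the job specified, e.g. "./run.sh".
    subprocess.check_call(execution_command, shell=True)
    # Publish everything the job left in its output directory.
    publish_to_s3(output_directory)

def publish_to_s3(path):
    # Stub: the real service copies the contents of `path` to a
    # public-facing Amazon S3 bucket, as described above.
    print("would upload", path)

if __name__ == "__main__":
    run_scheduled_job("slowsql-0.3.tar.gz", "./run.sh", "output")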
One final reminder: Keep in mind that the results of the analysis will be made publicly available on Amazon S3, so be absolutely sure that the output from your job does not include any sensitive data.
Aggregates of the raw data are fine, but it is very important not to output the raw data.
http://mreid-moz.github.io/blog/2014/03/14/scheduling-telemetry-analysis/
|
Frédéric Harper: I’ll speak at PragueJS, in Prague, Czech Republic |
On the evening of the 27th of March, I’ll be in Prague, Czech Republic to speak at the local JavaScript user group. Lately, my speaking gigs have had the same goal: introducing people to Firefox OS. It’s less about getting people to know the product than about presenting an alternative to the current mobile duopoly. Even more important, I want to show them how they can reach a new audience with the web applications they already have. Here is the abstract of my talk “Empower mobile web developers with JavaScript & WebAPI”:
HTML5 is a giant step in the right direction: it gives developers a lot more of the tools they need to create better experiences for users. The problem? It’s not there yet when it comes to web development for mobile devices. In this talk, Frédéric Harper will show you how you can use HTML and JavaScript to build amazing mobile applications, and brush up what you have published before. You’ll learn about open web technologies, including WebAPIs, and the tools designed to get you started developing HTML apps for Firefox OS, and the web.
So if you are in Prague and have an interest in web development, please join me at PragueJS (the event starts at 19:00) on March 27th at Node5.
--
I’ll speak at PragueJS, in Prague, Czech Republic is a post on Out of Comfort Zone from Frédéric Harper
Related posts:
|