
Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Source: http://planet.mozilla.org/.
This diary is generated from the open RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that feed is updated. It may not match the content of the original page. The syndication was created automatically at the request of readers of this RSS feed.


Eitan Isaacson: Am I Vision Impaired? Who Wants to Know?

Tuesday, March 18, 2014, 00:32

There has been discussion recently about whether websites should have the ability to detect whether a visitor is using a screen reader. This was sparked by the most recent WebAIM survey, which highlights that a clear majority of users would indeed be comfortable divulging that information to sites.

This is not a new topic; there is a spec in the works that attempts to balance privacy, functionality and user experience. This is also a dilemma we have as implementers, and we have discussed it extensively in bug reports. Even my esteemed colleague Marco put down his thoughts on the topic.

I have mostly felt confusion about this question. Not about the privacy or usability concerns, but really about the semantics. I think the question “do you feel comfortable disclosing your assistive technology to the web” could be phrased in a dozen ways, each time exposing bias and assumptions about the web and computing.

The prevailing assumption is that the World Wide Web is a geo-spatial reality loosely based on the physical world. Just like a geographical site, a site on the Web resides in a specific locality. The user is a “visitor” to the site. The “site” metaphor runs very deep. When I was first shown the Web, in 1994, I remember visiting the Louvre, seeing the Mona Lisa and signing a guest book. In this world, the browser is a vehicle that takes you to distant and exotic locations. Their names suggested it: Internet Explorer, Netscape Navigator, Safari, Galeon, and the imperialistic Konqueror.

White House Home Page, circa 1994

You mean I could visit the White House from my home?? Do I need to wear a tie???

This paradigm runs deep, even though we use the Web in a very different way today, and a new mental model of the Web is prevailing.

When you check your mail on Gmail, or catch up on Twitter, you are using an application. Your browser is just a shell. In your mind, you are not virtually traveling to Silicon Valley to visit a site. You feel ownership over those applications. It is “my” twitter feed; that is “my” inbox. You will not sign a guest book. Just look at the outcry every time Facebook redesigns its timeline, or after Google does some visual refresh to its apps. Users get irate because they see this as an encroachment on their space. They were happy, and then some ambitious redesign is forcing them to get reacquainted with something they thought was theirs. That is why market-speak invented the “cloud”, which blurs the geography of websites and reinforces the perception that the user should stop worrying and love the data centers behind their daily life.

How you see the web at any given moment may change how you view the question of assistive technology detection.

If you are applying for a loan online, you are virtually traveling to a loan office or bank. Whether you have a disability or not is none of their business, and if they take note of it while considering your application for a loan that would be a big problem (and probably illegal). In other words, you are traveling to a site. Just like you would put on a pair of pants or skirt before leaving the house, you expect your browser to be a trusty vehicle that will protect you from the dangers and exposure in the Wide World of the Web.

On the other hand, you may use Microsoft’s Office 365 every day for your job or studies. It really is just an office suite not unlike the one you used to install on your computer. In your mind, you are not traveling to Redmond to use it. It is just there, and they don’t want you to think about it any further. The local software you run has the capability to optimize itself for its environment and provide a better experience for screen reader users, and there is no reason why you would not expect that from your new “cloud office”.

But What About User Privacy?

The question of AT detection is really more about perceived privacy than actual privacy. If you had a smartphone in the last 5 years, you probably got frustrated with the mobile version of some website and downloaded the native version from the app store. Guess what? You just waived your privacy and disclosed any kind of AT usage to the app and, in turn, to the website you frequent. This whole “the Web is the platform” thing? It is a two way street. There is no such thing as an exclusively local app anymore, they are all web-enabled. When you install and run a “native” app, you can go back to that original mental model of the web and consider your actions as visiting a site. You may as well sign their guest book while you’re at it.

In fact, “local” apps today on iOS or Android may politely ask you to use your camera or access your address book, but profile your physical impairments? They don’t need special permission for that. If you installed it, they already know.

In that sense, the proposed IndieUI spec offers more privacy than is currently afforded on “native” platforms by explicitly asking the user whether to disclose that information.

Conclusion

I have no simple answers. Besides being an implementer, I don’t have enough of a stake in this. But I would like to emphasize a cliche that I hear over and over, and have finally embraced: “the Web is the platform”. The web is no longer an excursion and the browser is not a vehicle. If we truly aspire to make the web a first class platform, we need to provide the tools and capabilities that have been taken for granted on legacy platforms. But this time, we can do it better.


http://blog.monotonous.org/2014/03/17/am-i-vision-impaired-who-wants-to-know/


Benjamin Kerensa: Sponsor Debconf14

Tuesday, March 18, 2014, 00:30

Debconf14 is just around the corner, and although we are making progress on securing sponsorships, there is still a long way to go to reach our goal. I’m writing this blog post to drum up some more sponsors. So if you are reading this and are a decision maker at your company, or know a decision maker, and are interested in supporting Debconf14, then please check out the Debconf14 Sponsor Brochure and, if still interested, reach out to us at sponsors@debconf.org. I think it goes without saying that we would love to fill some of the top sponsorship tiers.

I hope to see you in August in Portland, OR for Debconf14!

 

About Debconf

DebConf is the annual Debian developers meeting. An event filled with discussions, workshops and coding parties – all of them highly technical in nature. DebConf14, the 15th Debian Conference, will be held in Portland, Oregon, USA, from August 23rd to 31st, 2014, at Portland State University. For more detailed logistical information about attending, including what to bring, and directions, please visit the DebConf14 wiki.
This year’s schedule of events will be exciting, productive and fun. As in previous years (final report 2013 [PDF]), DebConf14 features speakers from around the world. Past Debian Conferences have been extremely beneficial for developing key components of the Debian system, infrastructure and community. Next year will surely continue that tradition.

http://feedproxy.google.com/~r/BenjaminKerensaDotComMozilla/~3/Po5y13tCbc8/sponsor-debconf14


Carla Casilli: A foundational badge system design

Monday, March 17, 2014, 21:06

The last two years or so have found me investigating and contemplating many different types of badge systems. Along the way I’ve been wrestling with considerations of badge types, potential taxonomies, and conceptual approaches; feeling my way around a variety of assessment types including summative, formative and transformative. Working in badge system design rewards a person with a continuing sense of adventure and opportunity.

A badge system structure for many
After much thought and many contemplative examinations, I’ve developed an archetypal badge system structure that I’m happy to recommend to the open badges community. Here are the many reasons why I think you’ll want to implement it.

  1. It’s simple.
  2. It’s modular.
  3. It’s easy to implement.
  4. It encourages a range of creativity.
  5. It works for organizations of vastly different sizes.
  6. It accomplishes the difficult task of working bottom-up, top-down, and middle-out.
  7. It not only allows for growth, it thrives on it.

Introducing the 3 Part Badge System
This badge structure is the one that I developed for the Mozilla badge system that we are in the process of building. I’m calling it the 3 Part Badge System (3PBS). It’s composed of three interlocking parts and those three parts create a flexible structure that ensures feedback loops and allows the system to grow and evolve. Or breathe. And by breathe, I mean it allows the system to flex and bow as badges are added to it.

While some community member organizations have expressed a desire for a strict, locked-down, top-down badge system to—in their words—guarantee rigor (and you already know my thoughts on this), this system supports that request but is also designed to include active participation and badge creation from the bottom up. I’d say it’s the best of both worlds but then I’d be leaving out the middle-out capacity of this system. So in reality, it’s the best of all possible worlds.

This approach is a vote for interculturism—or the intermingling and appreciation of cultures—in badge systems. Its strength arises from the continuous periodic review of all of the badges, in particular the team / product badges as well as the individual / community badges.

Don’t tell me, show me
It’s easier to talk about this system with some visuals so I’ve designed some to help explain it. Here is the basic 3 part structure: Part 1) company / organization badges; Part 2) team / product badges; and Part 3) individual / community badges. This approach avoids a monocultural hegemony.


The basic components of the 3 Part Badge System

Part 1: Company / organization badges
Many companies and organizations have specific needs and concerns about branding. This system addresses those concerns directly. In this proposed system, an advisory group defines, creates, and governs the highest level of badges—the company / organization badges—providing control over the all-important corporate or academic brand. While not all systems have such strict brand requirements, this approach allows for conceptual levels of badges to be created while interacting in organic and meaningful ways with other types of badges. An advisory group creates and vets these badges based on the organization's foundational principles and values. The company/organization badges transmit the most important values of an institution and they address organizational concerns regarding brand value and perceived rigor.

Part 2: Team / product badges
Few organizations exist that do not have some middle layer accomplishing the bulk of the work of the organization; the 3PBS proposal recognizes the team / product groups as necessary and important partners. In addition to acknowledging the contributions of this collection of folks, 3PBS gives them the ability to respond to their public through badges. Different teams and product groups can clearly and unequivocally communicate their closely held qualities and values through the creation and issuance of their own badges. These badges are created entirely independently of the Part 1 company / organization badges. In a bit we’ll discuss how the team / product badges play a role in influencing other aspects of the 3PBS.

Part 3: Individual / community badges
So your organization is comprised only of management and teams? Of course not. The 3PBS honors the folks who are on the front lines of any organization—the community—by empowering them to define their values internally as well as externally. These badges operate outside the requirements that define the Company/organization badges and the Team/product badges. The community badges can be created by anyone within the community and do not hew to the visual requirements of the other two subsystems. What this means is that an individual or community can create any types of badges they like. In other words, it provides the ability to publicly participate—to have a voice—in the system.

How the three different parts influence one another in the 3 Part Badge System
How do these parts interact? In order to communicate how these subsystems can affect each other, I’ve created some color based graphics. You’ve already seen the first one above that describes the initial system.

But first a little basic color theory to ground our understanding of how these subsystems work together to create a dynamic and powerful system. The basic 3 part structure graphic above uses what are known as primary colors, from the Red, Yellow, Blue color model. Centuries of art are based on these three colors in this color model. The following graphics further explore the RYB color model and move us into the world of secondary colors. Secondary colors result from the mixing of two primary colors: mixing red and yellow results in orange; mixing yellow and blue results in green; mixing blue and red results in purple. Now that we’ve established how the color theory used here works, we can see how the parts represented by these colors  indicate intermixing and integration of badges.

Individual / community badges influence team / product badges
The 3PBS concept relies on badge development occurring at the individual and community level. By permitting and even encouraging community and individual level badging, the system will continuously reform itself, adjusting badges upward in importance in the system. That’s not to say that any part of this system is superior to another, merely that these parts operate in different ways to different audiences. As I wrote in my last post, meaning is highly subjective and context-specific.


Individual / community badges influencing the team / product badges in 3PBS

This graphic illustrates the team / product created and owned badges assimilating some badges from the individual / community created and owned badges. The graphic also indicates that the company / organization badges can be held separate from this influence—if so desired.

Periodic review by the team / product groups of the individual and community badges likely will reveal patterns of use and creation. These patterns are important data points worth examining closely. Through them the larger community reveals its values and aspirations. Consequently, a team or product group may choose to integrate certain individual / community badges into their own badge offerings. In this way a badge begins to operate as a recognized form of social currency, albeit a more specific or formalized currency. The result of this influencing nature? The team and product group badge subsystem refreshes itself by assimilating new areas of interest pulled directly from the larger, more comprehensive and possibly external community.

Team / product badges influence company / organization badges
Company and organization level badges operate in precisely the same way, although the advisory group who is responsible for this level of badge can look across both the team / product badges as well as the individual / community badges. That experience will look something like this in practice.


Team / product badges influencing company / organization badges in 3PBS

Periodic review of the team / product badges by the advisory group responsible for company and organization badges may reveal duplicates as well as patterns. Discussion between the advisory group and the teams responsible for those badges may indicate that a single standard badge is appropriate. Considering that teams and product group badges are created independently by those groups, apparent duplication among teams may not necessarily be a bad thing: context is all important in the development and earning of badges. That said, examination and hybridization of some badges from the team and product groups may create a stronger, more coherent set of company and organization level badges.

Individual / community badges influence company / organization badges
In addition to being able to examine and consider team and product level badges, the advisory group responsible for the company / organization badges can also find direct inspiration from individual and community created badges. Since there are few to no rules that govern the creation of the individual / community created and owned badges, insightful, dramatic, and wildly creative badges can occur at this level. Because they come through entirely unfiltered, those sorts of badges are precisely the type to encourage rethinking of the entirety of the 3PBS.


Individual / community badges influencing company / organization badges in 3PBS

Here we see how the individual / community created and owned badges can significantly color the company / organization badges. Since the company / organization badges communicate universal values, it’s vital that those values remain valid and meaningful. Incorporating fresh thinking arising from individual and community badges can help to ensure that remains true.

Three parts, one whole
So, if we loop back to the original system, prior to the (color) interactions of one part to another, we can see how each part might ultimately influence one another. This is the big picture to share with interested parties who are curious as to how this might work.


The 3PBS model with different types of influence.

So, that’s the 3 Part Badge System in a nutshell. Looking forward to hearing your thoughts.

—-

Much more soon. carla [at] badgealliance [dot] org


Tagged: badge system design, badge systems, community, complex adaptive system, mozilla, openbadges, system design

http://carlacasilli.wordpress.com/2014/03/17/a-foundational-badge-system-design/


Zack Weinberg: HTTP application layer integrity/authenticity guarantees

Monday, March 17, 2014, 20:08

Note: These are half-baked ideas I’ve been turning over in my head, and should not be taken all that seriously.

Best available practice for mutually authenticated Web services (that is, both the client and the server know who the other party is) goes like this: TLS provides channel confidentiality and integrity to both parties; an X.509 certificate (countersigned by some sort of CA) offers evidence that the server is whom the client expects it to be; all resources are served from https:// URLs, thus the channel’s integrity guarantee can be taken to apply to the content; the client identifies itself to the server with either a username and password, or a third-party identity voucher (OAuth, OpenID, etc), which is exchanged for a session cookie. Nobody can impersonate the server without either subverting a CA or stealing the server’s private key, but all of the client’s proffered credentials are bearer tokens: anyone who can read them can impersonate the client to the server, probably for an extended period. TLS’s channel confidentiality assures that no one in the middle can read the tokens, but there are an awful lot of ways they can leak at the endpoints. Security-conscious sites nowadays have been adding one-time passwords and/or computer-identifying secondary cookies, but the combination of session cookie and secondary cookie is still a bearer token (possibly you also have to masquerade the client’s IP address).

Here are some design requirements for a better scheme:

  • Identify clients to servers using something that is not a bearer token: that is, even if client and server are communicating on an open (not confidential) channel, no eavesdropper gains sufficient information to impersonate client to server.
  • Provide application-layer message authentication in both directions: that is, both receivers can verify that each HTTP query and response is what the sender sent, without relying on TLS’s channel integrity assurance.
  • The application layer MACs should be cryptographically bound to the TLS server certificate (server->client) and the long-term client identity (when available) (client->server).
  • Neither party should be able to forge MACs in the name of their peer (i.e. server does not gain ability to impersonate client to a third party, and vice versa).
  • The client should not implicitly identify itself to the server when the user thinks they’re logged out.
  • Must afford at least as much design flexibility to site authors as the status quo.
  • Must gracefully degrade to the status quo when only one party supports the new system.
  • Must minimize number of additional expensive cryptographic operations on the server.
  • Must minimize server-held state.
  • Must not make server administrators deal with X.509 more than they already do.
  • Compromise of any key material that has to be held in online storage must not be a catastrophe.
  • If we can build a foundation for getting away from the CA quagmire in here somewhere, that would be nice.
  • If we can free sites from having to maintain databases of hashed passwords, that would be really nice.

The cryptographic primitives we need for this look something like:

  • A dirt-cheap asymmetric (verifier cannot forge signatures) message authentication code.
  • A mechanism for mutual agreement to session keys for the above MAC.
  • A reasonably efficient zero-knowledge proof of identity which can be bootstrapped from existing credentials (e.g. username+password pairs).
  • A way to bind one party’s contribution to the session keys to other credentials, such as the TLS shared secret, long-term client identity, and server certificate.

And here are some preliminary notes on how the protocol might work:

  • New HTTP query and response headers, sent only over TLS, declare client and server willingness to participate in the new scheme, and carry the first steps of the session key agreement protocol.
  • More new HTTP query and response headers sign each query and response once keys are negotiated.
  • The server always binds its half of the key agreement to its TLS identity (possibly via some intermediate key).
  • Upon explicit login action, the session key is renegotiated with the client identity tied in as well, and the server is provided with a zero-knowledge proof of the client’s long-term identity. This probably works via some combination of HTTP headers and new HTML form elements ( perhaps?)
  • Login provides the client with a ticket which can be used for an extended period as backup for new session key negotiations (thus providing a mechanism for automatic login for new sessions). The ticket must be useless without actual knowledge of the client’s long-term identity. The server-side state associated with this ticket must not be confidential (i.e. learning it is useless to an attacker) and ideally should be no more than a list of serial numbers for currently-valid tickets for that user.
  • Logout destroys the ticket by removing its serial number from the list.
  • If the client side of the zero-knowledge proof can be carried out in JavaScript as a fallback, the server need not store passwords at all, only ZKP verifier information; in that circumstance it would issue bearer session cookies instead of a ticket + renegotiated session authentication keys. (This is strictly an improvement over the status quo, so the usual objections to crypto in JS do not apply.) Servers that want to maintain compatibility with old clients that don't support JavaScript can go on storing hashed passwords server-side.

I know all of this is possible except maybe the dirt-cheap asymmetric MAC, but I don’t know what cryptographers would pick for the primitives. I’m also not sure what to do to make it interoperable with OpenID etc.
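As an illustration of the kind of asymmetric primitive being asked for (though hardly dirt-cheap), an Ed25519 signature already has the key property: the holder of the verify key can check an authenticator but cannot forge one. Below is a minimal Python sketch using PyNaCl; the request canonicalization, the header the signature would travel in, and the per-session key handling are invented for the example and are not the protocol sketched above.

# Illustration only: an Ed25519 signature used as an "asymmetric MAC" over an
# HTTP request. The canonicalization is made up for the example; the key
# agreement, TLS binding, and tickets described above are omitted.
import hashlib

from nacl.signing import SigningKey


def canonical_request(method, path, headers, body):
    # Hash the body so the signed string stays small; sort headers so both
    # ends produce identical bytes.
    body_digest = hashlib.sha256(body).hexdigest()
    header_lines = "\n".join(
        "%s: %s" % (name.lower(), value) for name, value in sorted(headers.items())
    )
    return ("%s\n%s\n%s\n%s" % (method, path, header_lines, body_digest)).encode("utf-8")


# Client side: a per-session signing key (freshly generated here; in the notes
# above it would fall out of the session key agreement).
session_key = SigningKey.generate()
message = canonical_request("POST", "/transfer", {"host": "bank.example"}, b'{"amount": 10}')
signature = session_key.sign(message).signature  # would ride in a new request header

# Server side: holds only the verify key, so it can check the request but
# cannot forge one in the client's name.
verify_key = session_key.verify_key
verify_key.verify(message, signature)  # raises BadSignatureError if anything was altered

Whether per-request public-key signatures are affordable at scale is exactly the open question here; a genuinely cheap asymmetric MAC would slot into the same place.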

https://www.owlfolio.org/htmletc/http-application-layer-integrityauthenticity-guarantees/


Frédéric Harper: No speakers, no conferences.

Monday, March 17, 2014, 19:11

Creative Commons: http://j.mp/1kX36Bk

No speakers, no conferences. No one can argue against this, and it’s why I think speakers deserve more respect for their work. Don’t get me wrong, I had a lot of great experiences as a speaker, but I saw many situations that could have been avoided. In the end, it should lead to a better experience for everyone: the speakers, the organizers, and the attendees.

I wrote a post on how to be a good attendee, now it’s about being a good organizer… Keep in mind that this post focuses on conferences, but everything applies to user groups or any other events with speakers.

Don’t extend your call for speakers

I planned my schedule to be sure I was able to submit my talk proposals before the deadline for your call for speakers. I don’t think it’s fair to extend that time because you didn’t get as many proposals as you wanted. I may have been able to do other important work if I had known that the deadline would be extended.

If you ask me to submit a talk, follow through

If you took the time to ask me to submit a talk to your conference, it’s because you want me to speak at your event, no? First, be clear on what you are looking for exactly. If you don’t like the abstracts I submitted, let me know so we can work together to make it happen.

Select your speakers as soon as possible

Your call for speakers is done, it’s now time to select them. Even if we are super excited to go to your event, we can’t block our calendars ad vitam aeternam just for you! We have meetings, work to get done, other conferences, and a personal life. The sooner we know if we are accepted (or not), the better. Also, if I’m not selected, I may find other stuff or conferences to go to.

Don’t be afraid of local speakers

I see this too often now: conferences favour international speakers over local ones. As a conference becomes bigger or more popular, it seems more prestigious to do so. Don’t forget the people who were there since the beginning, if they are good speakers, of course. You also have some local superstars; why not add them to the schedule?

Promote your event in advance

I saw this too often with user groups: last-minute organization and promotion. Make the speaker’s time and effort worth it by making an effort on your side to maximize his or her presence. You have a much higher chance of getting a full room of attendees if you start promoting the event at least one month before, rather than within the week before. There is nothing more annoying than speaking to an empty room.

Pay for travel and expenses

I won’t add more meat to this point as it speaks for itself, and Remy Sharp did a great blog post on the topic. My friend Christian Heilmann too did a great post about speaking is sponsoring your event. Even better, why not pay the speaker for his time? In the end, would you work for free?

Don’t offer sponsored speaking slot

I know, you need to find sources of revenue, but giving a speaking slot to someone who pays for it means you don’t value your audience. Isn’t it your role to be sure your attendees will have the best speakers with the most interesting topics out there? Select the speakers because of their talent, and/or the topics they will talk about, not the money they are willing to pay. With a policy like that, you are more likely to get better speakers, as potential sponsors need to pick them by talent, not just because they are available.

I won’t do an in, and out

When I’m going to your conference, it’s to speak, but also to learn from others, and most important of all, to network. You should expect every speaker to be present before and after their talk. If you pay for my travel, and expense, don’t do it only for the day I’m speaking: I’ll be there for the whole conference. I would also appreciate that you get me in one day before so I can catch up with jet lag if it’s in a different timezone.

Give time between presentations to test my material and computer setup

You may have a testing session the day before or in the morning, but there is nothing like testing right before your talk. I need time to plug my computer, test my remote, do an audio test, check my slides from the back of the room… A lot of things may have changed between the testing session, and the time of my talk. If there is any problem, we’ll have time to fix it.

Respect my time on stage

If you gave me one hour for my presentation, don’t tell me, for any reason, that I now only have thirty minutes. As a professional speaker, I built my materials just for your audience, and to get the most out of the time I’ll have. It’s not easy to change my content like that. Also, be sure that the previous speaker finishes on time, and don’t cut me off before I finish my talk.

Follow-up with your speakers

The conference is done; you are all tired. I know it’s a lot of work, as I have organized many events, user groups, and conferences myself. I still think it’s not done yet: take the time to follow up with your speakers. Thank them for their work, share the feedback you got, and let them know if you want them back for the next edition.

Again, all these points came back to one simple rule: respect your speakers.


--
No speakers, no conferences. is a post on Out of Comfort Zone from Frédéric Harper

Related posts:

  1. A better way to showcase my speaking experience One thing I like to do when someone ask me...
  2. How to be a good attendee There are numerous posts out there about being a good...
  3. So you want me to speak at your event? I got requests for speaking at user groups, conferences, workshops...

http://outofcomfortzone.net/2014/03/17/no-speakers-no-conferences/?utm_source=rss&utm_medium=rss&utm_campaign=no-speakers-no-conferences


Florian Scholz: Are we documented yet?

Monday, March 17, 2014, 18:04

No, we aren't (yet).

Every wiki and documentation site fights against a big problem: content goes out of date pretty fast as the software or the technologies evolve. This is more than true for MDN.

When we switched to the rapid release model, the way we document on MDN changed as well. A process for having release notes every six weeks was set up by Jean-Yves Perrier (teoli), and our old tools, like the "DocTracker" which gets us dev-doc-needed bugs per release, were helpful. However, writing detailed documentation for all these changes within six weeks is no longer achievable. There are a lot more changes in six weeks today than there were two years ago.

In the old days, "yes, we are documented yet!" meant that the dev-doc-needed bug counter was near zero for a given release.

I thought about building new tools that could answer how we are doing today. Fortunately Kuma and KumaScript are open source and hackable, so I was able to build new tools directly into MDN.

Introducing documentation status pages

So, how can we measure if we are documented yet, today?

The MDN documentation team switched to concentrating on content topics rather than documenting everything belonging to a release. Several Topic Drivers started to maintain specific parts of MDN.

If you look at a content section on MDN, you can definitely identify more "health indicators" than just the dev-doc-needed bug list. To make the state of the documentation visible, we started to build documentation status pages for sections on MDN.

Let's take the JavaScript documentation status as an example to see what we currently measure.

JavaScript doc status summary

Quality indicators:

  • No tags: Every page should have a tag so that our search displays pages based on that information. For most sections there is also a tagging standard.
  • Needs* tags: Did you know that you can add a Needs tag to every MDN page to indicate that something is missing? E.g. NeedsExample or NeedsBrowserCompat? Pages tagged like this will appear here.
  • Editorial and technical reviews: You might have seen banners on several MDN articles asking for review. These pages are now listed here to get addressed at some point!
  • Outdated pages: If a page was last edited more than a year ago, or before a specific date we chose, it is considered outdated and needs a check.
  • Dev-doc-needed and documentation requests: Sure, let's not forget about one of the main sources of information: Bugs!
  • (...) This is the current set of "health indicators", but we might find more in the future and expand this list. Let me know if you have ideas!

Open to do list

Besides the automatically created quality indication list, there is also room for sharing what else is needed in the content section. For example, for JavaScript, we need to clean up our documentation for ECMAScript 6 and some tutorials and guides are missing, too. That information should be available in the open and shared with people interested in writing on MDN. So, if you are asking what you can do for MDN, you should find answers there!

Introducing localization status pages

When localizing MDN, we are speaking of 11,000 pages. This is a huge task and not all docs are worth translating. So, with the idea of content sections, we are also looking into making smaller lists of pages in need of localization.

To keep track of the localization work, a localizer needs to know if a translation is available and whether it is up to date with the English source. So, for each documentation status page there is also a localization status page.

Here is an example that lists the translation status of the JavaScript section for Japanese: Japanese localization status for JavaScript. Also have a look at the translation overview pages, e.g. all sections for Japanese.

Help us!

With these status pages for both the English documentation and the translations, a lot of work becomes visible. If you have expertise in one or more of the topic sections, please have a look and help us with documenting. If you are a localizer and would like to have a status page for your language, feel free to add one or contact us and we will help you.

In general you can find us on irc.mozilla.org in #mdn.
You can also reach me at fscholz@moco or Jean-Yves at jperrier@moco.

The first iteration of these tools is now finished. The documentation status pages we have right now are monitoring around 40% of all MDN pages. We are looking into increasing that number.

Let's document the Open Web!

PS: Pretty graphs using the doc status API

How about a nice page like arewefastyet.com or areweslimyet.com? I have built a documentation status API. Anyone want to collect data and make that happen? :-) For instance, the JSON for the JavaScript documentation status can be found here.
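If someone does want to take that on, the collection half can be tiny: poll the JSON periodically and append a timestamped row somewhere a graphing page can read. A rough sketch follows; the endpoint URL and the JSON field names are placeholders I made up, not the real shape of the doc status API.

# Sketch of a collector for the documentation status API. The URL and the
# JSON field names are placeholders -- check the actual API for the real shape.
import csv
import datetime
import json
import urllib2  # Python 2, to match the era


DOC_STATUS_URL = "https://developer.mozilla.org/docs/doc-status.json"  # placeholder


def collect(csv_path="doc-status-history.csv"):
    data = json.load(urllib2.urlopen(DOC_STATUS_URL))
    row = [
        datetime.datetime.utcnow().isoformat(),
        data.get("pages", 0),                    # hypothetical field names
        data.get("needs_technical_review", 0),
        data.get("dev_doc_needed", 0),
    ]
    with open(csv_path, "ab") as history:
        csv.writer(history).writerow(row)


if __name__ == "__main__":
    collect()  # run from cron, then graph the CSV over time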

http://florianscholz.com/2014/03/arewedocumentedyet


William Lachance: Upcoming travels to Japan and Taiwan

Monday, March 17, 2014, 01:19

Just a quick note that I’ll shortly be travelling from the frozen land of Montreal, Canada to Japan and Taiwan over the next week, with no particular agenda other than to explore and meet people. If any Mozillians are interested in meeting up for food or drink, and discussion of FirefoxOS performance, Eideticker, entropy or anything else… feel free to contact me at wrlach@gmail.com.

Exact itinerary:

  • Thu Mar 20 – Sat Mar 22: Tokyo, Japan
  • Sat Mar 22 – Tue Mar 25: Kyoto, Japan
  • Tue Mar 25 – Thu Mar 27: Tokyo, Japan
  • Thu Mar 27 – Sun Mar 30: Taipei, Taiwan

I will also be in Taipei the week of March 31st, though I expect most of my time to be occupied with discussions/activities inside the Taipei office about FirefoxOS performance matters (the Firefox performance team is having a work week there, and I’m tagging along to talk about / hack on Eideticker and other automation stuff).

http://wrla.ch/blog/2014/03/upcoming-travels-to-japan-and-taiwan/?utm_source=rss&utm_medium=rss&utm_campaign=upcoming-travels-to-japan-and-taiwan


Robert O'Callahan: Maokong

Sunday, March 16, 2014, 14:12

I've been in Taiwan for a week. Last Sunday, almost immediately after checking into the hotel I went with a number of Mozilla people to Maokong Gondola and did a short hike around Maokong itself, to Yinhe Cave and back. I really enjoyed this trip. The gondola ride is quite long, and pretty. Maokong itself is devoted to the cultivation and vending of tea. I haven't seen a real tea plantation before, and I also got to see rice paddies up close, so this was quite interesting --- I'm fascinated by agricultural culture, which dominated so many people's lives for such a long time. Yinhe Cave is a cave behind a waterfall which has been fitted out as a Buddhist temple; quite an amazing spot. We capped off the afternoon by having dinner at a tea house in Maokong. A lot of the food was tea-themed --- deep-fried tea leaves, tea-flavoured omelette, etc. It was excellent, a really great destination for visitors to Taipei.

http://robert.ocallahan.org/2014/03/maokong.html


Priyanka Nag: MDN Workweek, Paris

Saturday, March 15, 2014, 23:10
My first year in Mozilla was more experimental... I began my Mozilla journey with localization, then tried my hand at some awesome Webmaking... and finally I got into MDN. Once I was introduced to MDN, I instantly fell in love with it. The MDN project had everything I like... an awesome blend of both documentation and coding.

My MDN contributions before the work-week had mostly been small documentation edits here and there, some playing around with Kuma, and a lot of MDN evangelism.
When I got the invitation mail for the MDN work-week in Paris, I was already on cloud nine. I had gotten a chance to meet a few members of the MDN team previously in Santa Clara during the Mozilla Summit, and the very idea of getting to work with them again was super thrilling!

The Super-heroes of Mozilla
The journey to Paris was also an awesome one. After all, Paris had been my dream since my days of 'Mills and Boon' ;)
Kaustav and I reached Paris on the 5th of March. Our immediate instinct was to visit the Paris office, where we were hoping to find a few other participants of the work-week. We met Christian at the office. Well, walking on the streets of Paris with Christian is not something easily achievable... from that moment I knew that this trip was going to be super awesome.

At work....in Paris office
The next day, we were honored to be able to join all the staff in the Paris office. Kaustav and I were the only volunteers there, and the pride we felt sitting in that room is beyond explanation.
The MDN work-week began on the 7th of March with a blast. There were 16 volunteers and 18 paid staff from 11 countries (France, USA, Canada, UK, Sweden, Italy, Poland, Germany, India, Bangladesh, Brazil) who worked together to finish 23 projects and touch 400+ bugs.


MDN Workweek was the perfect example of "work hard, party harder". After the entire day's hard work, we would be taken to some awesome French restaurant for mouth-watering food and heavenly French wine.

Time for some French wine

My contribution in MDN Workweek

My agenda for the MDN workweek was to build a start-up page for new contributors, so that whenever new contributors ask "How do I contribute to MDN?", we have one ready answer.
After working on it for three days, this is what I could finally manage to get done:
https://developer.mozilla.org/en-US/docs/MDN/Getting_started
Here, I shouldn't forget to thank Janet, Kaustav and Sheppy for helping me in getting the page completed.

Along with this project, I also helped Janet and Kaustav complete the event format for MDN events and publish the page on the Mozilla wiki. The page can be found here.
Another very interesting thing I got to work on during the work-week was planning the Dev Derby launch with Kensie.


MDN Workweek was an experience of a lifetime and I indeed feel immensely lucky to have been able to be a part of such an event.

For the entire awesome experience, I would like to thank Ali Spivak, who managed the entire event in the most efficient manner I have ever seen.


Other related posts:
1) Kaustav's blog-post regarding his experience
2) My blog-post about Paris



http://priyankaivy.blogspot.com/2014/03/mdn-workweek-paris.html


Ron Piovesan: Finding the market behind the numbers

Saturday, March 15, 2014, 19:41

My son likes puzzles. He’s only two years old, but he loves to find the most complex puzzle in the house, take it apart, and then put it back together. Watching a young child want to tackle a series of complex puzzles is an inspiration as a parent.

It is also inspirational to watch as a business development professional.

I’m not much good at the types of puzzles my son likes to tackle, but I enjoy figuring out another type of puzzle, markets.

Sometimes those puzzles are technology-related; figuring out who the main vendors are in a market, how their products differentiate, and trying to boil it all down to understand who is disrupting whom.

You figure out these technology markets because you need to understand who to partner with, who to maybe buy, or who is or may become a competitor. Or maybe the right answer is to just leave that market alone.

Then there are global puzzles, which are the ones I’ve recently been spending a lot of time on. You can read reports, you can talk to experts, but it isn’t until you go out into the various markets and talk to potential customers and partners that you begin to understand a market.

Reports have numbers, markets have nuances. You need to be actively investigating a market, technology or global, to properly understand the nuances.


http://ronpiovesan.com/2014/03/15/finding-the-market-behind-the-numbers/


Kartikaya Gupta: The Project Premortem

Saturday, March 15, 2014, 06:33

The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, [Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: "Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster."


(From Thinking, Fast and Slow, by Daniel Kahneman)


When I first read about this, it immediately struck me as a very simple but powerful way to mitigate failure. I was interested in trying it out, so I wrote a pre-mortem story for Firefox OS. The thing I wrote turned out to be more like something a journalist would write in some internet rag 5 years from now, but I found it very illuminating to go through the exercise and think of different ways in which we could fail, and to isolate the one I thought most likely.

In fact, I would really like to encourage more people to go through this exercise and have everybody post their results somewhere. I would love to read about what keeps you up at night with respect to the project, what you think our weak points are, and what we need to watch out for. By understanding each others' worries and fears, I feel that we can do a better job of accommodating them in our day-to-day work, and work together more cohesively to Get The Job Done (TM).

Please comment on this post if you are interested in trying this out. I would be very happy to coordinate stuff so that people write out their thoughts and submit them, and we post all the results together (even anonymously if so desired). That way nobody is biased by anybody else's writing.

https://staktrace.com/spout/entry.php?id=821


William Lachance: It’s all about the entropy

Saturday, March 15, 2014, 03:52

[ For more information on the Eideticker software I'm referring to, see this entry ]

So recently I’ve been exploring new and different methods of measuring things that we care about on FirefoxOS — like startup time or amount of checkerboarding. With Android, where we have a mostly clean signal, these measurements were pretty straightforward. Want to measure startup times? Just capture a video of Firefox starting, then compare the frames pixel by pixel to see how much they differ. When the pixels aren’t that different anymore, we’re “done”. Likewise, to measure checkerboarding we just calculated the areas of the screen where things were not completely drawn yet, frame-by-frame.

On FirefoxOS, where we’re using a camera to measure these things, it has not been so simple. I’ve already discussed this with respect to startup time in a previous post. One of the ideas I talk about there is “entropy” (or the amount of unique information in the frame). It turns out that this is a pretty deep concept, and is useful for even more things than I thought of at the time. Since this is probably a concept that people are going to be thinking/talking about for a while, it’s worth going into a little more detail about the math behind it.

The wikipedia article on information theoretic entropy is a pretty good introduction. You should read it. It all boils down to this formula:

H(X) = -\sum_{i} p(x_i) \log_2 p(x_i)

You can see this section of the wikipedia article (and the various articles that it links to) if you want to break down where that comes from, but the short answer is that, given a set of random samples, the more different values there are, the higher the entropy will be. Look at it from a probabilistic point of view: suppose you take a random set of data and want to predict what future data will look like. If it is highly random, it will be harder to predict what comes next. Conversely, if it is more uniform, it is easier to predict what form it will take.

Another, possibly more accessible way of thinking about the entropy of a given set of data would be “how well would it compress?”. For example, a bitmap image with nothing but black in it could compress very well as there’s essentially only 1 piece of unique information in it repeated many times — the black pixel. On the other hand, a bitmap image of completely randomly generated pixels would probably compress very badly, as almost every pixel represents several dimensions of unique information. For all the statistics terminology, etc. that’s all the above formula is trying to say.
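To make that intuition concrete with numbers (a toy example, not part of Eideticker, using the same 256-bin histogram idea as the function further down): an all-black frame collapses into a single histogram bin and gives zero entropy, while uniformly random pixels spread across all 256 bins and land near the 8-bit maximum.

# Toy illustration of the compressibility intuition: entropy (in bits) of the
# pixel-value histogram for an all-black frame versus a random-noise frame.
import math

import numpy


def histogram_entropy(frame, bins=256):
    counts = numpy.histogram(frame, bins=bins)[0]
    probabilities = [float(c) / counts.sum() for c in counts]
    return -sum(p * math.log(p, 2) for p in probabilities if p > 0)


black_frame = numpy.zeros((480, 640))                   # one repeated value
noise_frame = numpy.random.randint(0, 256, (480, 640))  # every pixel random

print(histogram_entropy(black_frame))   # 0.0 -- compresses extremely well
print(histogram_entropy(noise_frame))   # close to 8 bits -- nearly incompressible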

So we have a model of entropy, now what? For Eideticker, the question is — how can we break the frame data we’re gathering down into a form that’s amenable to this kind of analysis? The approach I took (on the recommendation of this article) was to create a histogram with 256 bins (representing the number of distinct possibilities in a black & white capture) out of all the pixels in the frame, then run the formula over that. The exact function I wound up using looks like this:


import math

import numpy
from scipy import ndimage

# Note: the tuple parameter is Python 2 syntax; the function takes a single
# (i, capture, sobelized) tuple as its argument.
def _get_frame_entropy((i, capture, sobelized)):
    frame = capture.get_frame(i, True).astype('float')
    if sobelized:
        # Denoise, then keep only the edges (sobel gradient magnitude).
        frame = ndimage.median_filter(frame, 3)

        dx = ndimage.sobel(frame, 0)  # horizontal derivative
        dy = ndimage.sobel(frame, 1)  # vertical derivative
        frame = numpy.hypot(dx, dy)  # magnitude
        frame *= 255.0 / numpy.max(frame)  # normalize (Q&D)

    # Shannon entropy (in bits) over a 256-bin histogram of the pixel values.
    histogram = numpy.histogram(frame, bins=256)[0]
    histogram_length = sum(histogram)
    samples_probability = [float(h) / histogram_length for h in histogram]
    entropy = -sum([p * math.log(p, 2) for p in samples_probability if p != 0])

    return entropy

[Context]

The “sobelized” bit allows us to optionally convolve the frame with a sobel filter before running the entropy calculation, which removes most of the data in the capture except for the edges. This is especially useful for FirefoxOS, where the signal has quite a bit of random noise from ambient lighting that artificially inflates the entropy values even in places where there is little actual “information”.

This type of transformation often reveals very interesting information about what’s going on in an eideticker test. For example, take this video of the user panning down in the contacts app:

If you graph the entropies of the frames of the capture using the formula above, you get a graph like this:

contacts scrolling entropy graph
[Link to original]

The Y axis represents entropy, as calculated by the code above. There is no inherently “right” value for this — it all depends on the application you’re testing and what you expect to see displayed on the screen. In general though, higher values are better as it indicates more frames of the capture are “complete”.

The region at the beginning where it is at about 5.0 represents the contacts app with a set of contacts fully displayed (at startup). The “flat” regions where the entropy is at roughly 4.25? Those are the areas where the app is “checkerboarding” (blanking out waiting for graphics or layout engine to draw contact information). Click through to the original and swipe over the graph to see what I mean.

It’s easy to see what a hypothetical ideal end state would be for this capture: a graph with a smooth entropy of about 5.0 (similar to the start state, where all contacts are fully drawn in). We can track our progress towards this goal (or our deviation from it), by watching the eideticker b2g dashboard and seeing if the summation of the entropy values for frames over the entire test increases or decreases over time. If we see it generally increase, that probably means we’re seeing less checkerboarding in the capture. If we see it decrease, that might mean we’re now seeing checkerboarding where we weren’t before.
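In code terms, that rolled-up dashboard number is just the per-frame entropy summed across the capture. A minimal sketch reusing the function above; the num_frames attribute is my placeholder for however the capture object exposes its frame count, not necessarily Eideticker's actual API:

# Sketch: roll the per-frame entropies up into one per-capture score for the
# dashboard. "num_frames" is a placeholder attribute name.
def capture_entropy_score(capture, sobelized=True):
    entropies = [_get_frame_entropy((i, capture, sobelized))
                 for i in range(capture.num_frames)]
    return sum(entropies)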

It’s too early to say for sure, but over the past few days the trend has been positive:

entropy-levels-climbing
[Link to original]

(note that there were some problems in the way the tests were being run before, so results before the 12th should not be considered valid)

So one concept, at least two relevant metrics we can measure with it (startup time and checkerboarding). Are there any more? Almost certainly, let’s find them!

http://wrla.ch/blog/2014/03/its-all-about-the-entropy/?utm_source=rss&utm_medium=rss&utm_campaign=its-all-about-the-entropy


Eric Shepherd: Making MDN better by working on the weekend

Saturday, March 15, 2014, 02:47

Last weekend, we had an MDN work weekend at Mozilla’s Paris space. Over the course of the three days—Friday, Saturday, and Sunday—we built code, wrote and updated documentation, and brainstormed on new and better ways to present documentation on MDN. A grand total of 34 participants (wow!) including 16 volunteers and 18 paid Mozilla staff sat down together in a big room and hacked. 11 countries were represented: France, the United States, Canada, the United Kingdom, Sweden, Italy, Spain, Poland, Germany, India, Bangladesh, and Brazil. We completed 23 projects and touched (either adding, updating, or closing) over 400 bugs.

The most important thing, to me, was the reminder that our community is a real, tangible presence in the world, rather than something ephemeral that flits about touching documentation from time to time. These people have real jobs and have real and important impacts on their local communities. Coming together is a chance to enhance our bond as Mozillians. That’s a great thing.

What’s also great, though, is the amazing amount of output we produced. Let’s look over some of the stuff that went on (if I leave out something you did, I apologize!).

Documentation work

  • Prototyped new UX for the MDN App Center.
  • All KumaScript errors on every page in the English, German, and French locales were resolved! This is a huge deal and I’m grateful to Jean-Yves, Florian, and Sphinx for this work.
  • Lots of work was done to sketch out and plan an improved onboarding experience for new contributors.
  • Lots of new Web documentation was added for various CSS properties, as well as for the HTML element used by Web Components.
  • Over 100 pages of beginner content were properly tagged to be easier to find using MDN’s search filters.
  • Planning work for the new MDN “Learning” area was done; this area will provide content for new Web and Web app developers.
  • Work to plan out requirements for future MDN events was done.
  • Planning for the next steps of the compatibility data project was done; I missed this meeting although I meant to be there. We will be building a system for maintaining a database, in essence, of compatibility for all the things that make up the Web, then updating our compatibility tables to be constructed from that database. This database will also be queryable.
  • Progress was made on documenting the Web Audio API. Thanks to Scott Michaud for his work on this.
  • Chris Heilmann worked on adding live samples to pages; his work included some experiments in ways to make them more interactive, and he talked with our dev team about an improved user interface for implementing live samples.

Kuma platform work

Seven new people had the Kuma build system set up on their computers, with a virtual machine running a Kuma server up and running. Those seven people included me. In fact, I devised and submitted three patches to the Kuma platform over the weekend! They’re nothing particularly big, but it did allow closing three longstanding minor bugs. Not a big deal, but I’m proud of that anyway.

And I’m not the only one: a ton of amazing enhancements to the Kuma platform were developed over the weekend. Some are already deployed; others will not be for a while, since there are quirks to iron out.

  • Live samples are now better delineated by having a border drawn around them (this one is mine, and already deployed).
  • A new JavaScript snippet has been developed that can be embedded into any Web site to automatically link common terms to the appropriate pages on MDN. This is almost but not quite ready to deploy as I write this.
  • A bunch of old code we inherited from the Kitsune project has been removed (Ricky did this).
  • The email sent to admins after a page move operation is completed has been enhanced to include a link to the new location and to have the subject be more specific as to which move finished (another of mine; not yet deployed but probably will go out in the next push).
  • The “revert this page” confirmation prompt has some wording corrections (mine, and pushed to production).
  • A new “top contributors” widget has been developed; this will show the top three contributors to a page, with their avatar image, at the top of the page somewhere. This isn’t finished but the prototype is promising. This work was done primarily by Luke Crouch, I believe.
  • UX design work was done for future enhancements to how we display search results.
  • UX design work was done for how we display the language list, to improve discoverability both of the fact that multiple languages are available and of the fact that pages can be localized by contributors.
  • The revision diff page you get when comparing two revisions in an article’s history now includes a “Revert to this revision” button. Also mine; I don’t know if it’s on production yet, but I think so.
  • Lots of design and planning work for improved search UX, filtering, and more. This stuff will amaze you when it lands!

I won’t become a huge contributor to Kuma, probably. I don’t have time. I’m too busy writing and (more often, really) organizing other writers. That’s okay, of course. I’m having my own impact on our worldwide community of Mozillians and users and developers on the World Wide Web.

http://www.bitstampede.com/2014/03/14/making-mdn-better-by-working-on-the-weekend/


Brian R. Bondy: Switching careers, next steps

Saturday, 15 March 2014, 00:58

I've made one of the hardest decisions of my life: I'll be leaving Mozilla, a company I thought I'd be at forever. Mozilla has amazing people and culture, and it is easily the best job I've ever had.

I plan to remain a contributor, both on the Code Firefox site and on Firefox Desktop itself.


I'll be joining Khan Academy, a like-minded, mission-oriented, open, non-profit organization.

Khan Academy provides a free, world-class education to anyone, anywhere.

I'm leaving because the Khan Academy mission speaks more to me personally.

Along with that mission come a number of ideas I believe in, such as:

  • Sense of agency: Students should be responsible for their own educational goals.
  • Mastery-based learning: Students should have a deep, sound understanding of concepts before progressing to concepts that build on them.
  • Self-paced learning: Students should progress at their own pace as they master concepts.
  • Mentorship: Having peers teach and learn from each other brings several amazing benefits.
  • Interactivity: Human connection and tangible experiences are important.

In particular, I'll be joining the Computer Science department at Khan Academy. I'm extremely excited about this new opportunity, and I can't wait to do amazing things at Khan Academy.

http://www.brianbondy.com/blog/id/160


Geoffrey MacDougall: Mozilla Foundation’s CRM Plans

Saturday, 15 March 2014, 00:36

During our planning for 2014, a need that came up over and over again was for better data tracking across the board. This included managing our contributor targets, engagement funnels, campaigns, partners, etc.

We decided we needed a CRM. And ‘CRM’ quickly became a dumping ground for a huge wish list of features and unmet needs.

Since then, a small group has been working to figure out what we actually need, whether it’s the same thing that other Mozilla teams need, and which of us is going to do the work to put those systems in place.

Adam Lofting, Andrea Wood, and I have come up with a framework we’re going to pursue. It splits ‘CRM’ into three functions and proposes a path forward on each. We feel this represents the best use of our resources, lets us hit our priorities, and ensures that we continue to work under a ‘One Mozilla’ model.

1.) Partner Management

  • What this means: Traditional CRM features such as shared contact lists, joint documents, status updates, and communications management.
  • How we’d use it: To manage our relationships with Webmaker partners, BadgeKit adopters, and institutional funders.
  • The plan: Mozilla’s IT department is leading a CRM implementation to manage deal flow and other relationships behind the FFOS Marketplace and emerging content partners. The Foundation will adopt whatever system is put in place from that process. This is based on the assumptions that (i) ‘partner management’ is a fairly standardized process, (ii) the Foundation’s needs can be mapped onto whichever tool IT selects, and (iii) there are benefits to be had from working within the same framework as our colleagues in business development.

2.) Campaign Management

  • What this means: E-mail, small dollar fundraising, activism, standalone campaign web sites, event registration, and other outreach and engagement activities.
  • How we’d use it: To promote our events and programs, to manage registration for events, to run activism campaigns, and to anchor our small dollar fundraising.
  • The plan: The Foundation and Corporation engagement teams are working to combine budgets, gather requirements, and launch an RfP process to select a platform to run both programs. We already work together on the design and implementation of campaigns, and a shared technology platform will make that collaboration more efficient and avoid current, user-impacting issues resulting from multiple tools managing multiple e-mail lists.

3.) Contributor Management

  • What this means: Outreach, engagement, metrics, and recognition behind the Grow Mozilla goal of reaching 1 million contributors to our project.
  • How we’d use it: Metrics and analytics on engagement ladders, measuring contribution numbers, and rewarding and recognizing our contributors.
  • The plan: This is the area where we will probably have to build our own, Mozilla-wide solution. The People, Engagement, Business Intelligence, Foundation, and Open Badges teams are currently working together to figure out what that will entail. The solution will most likely involve some combination of program-specific engagement funnels, metrics and analysis through Project Baloo, and reward and recognition through Open Badges. More as it unfolds.

Does this sound right? Are there things missing? Please ask in the comments.


Filed under: Mozilla

http://intangible.ca/2014/03/14/mozilla-foundations-crm-plans/


Ben Hearsum: This week in Mozilla RelEng – March 14th, 2014

Saturday, 15 March 2014, 00:12

Major Highlights:

Completed work (resolution is ‘FIXED’):

In progress work (unresolved and assigned to someone):

http://hearsum.ca/blog/this-week-in-mozilla-releng-march-14th-2014/


Armen Zambrano Gasparnian: Running hidden b2g reftests and Linux debug mochitest-browser-chrome on tbpl

Friday, 14 March 2014, 23:00
For many months we have been working with the A-team and developers to move every single job off of our old Mac mini pool.

This project is very important, as we're moving out of the data-centre that holds the minis in the next few months. The sooner we get out of there, the more we can save. Moving the machines out of there will start in April/May.

Currently we run on the minis:
  • Linux 32 debug mochitest-browser-chrome
  • Linux 64 debug mochitest-browser-chrome
  • B2G reftests
This week we have enabled these jobs on EC2 for mozilla-inbound. We will run them side by side until we're on par on coverage. They are currently hidden to help us see these jobs running at scale.

Here are the URLs to see them running live:
As you read this post, we're landing and merging more patches to deal with the remaining known issues.
Stay tuned for more!





Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://armenzg.blogspot.com/2014/03/running-hidden-b2g-reftests-and-linux.html


Mark Reid: Scheduling Telemetry Analysis

Friday, 14 March 2014, 21:57

In a previous post, I described how to run an ad-hoc analysis on the Telemetry data. There is often a need to re-run an analysis on an ongoing basis, whether it be for powering dashboards or just to see how the data changes over time.

We’ve rolled out a new version of the Self-Serve Telemetry Data Analysis Dashboard with an improved interface and some new features.

Now you can schedule analysis jobs to run on a daily, weekly, or monthly basis and publish results to a public-facing Amazon S3 bucket.

Typical Scheduling

Here’s how I expect that the analysis-scheduling capability will normally be used:

  1. Log in to telemetry-dash.mozilla.org
  2. Launch an analysis worker in the cloud
  3. Develop and test your analysis code
  4. Create a wrapper script (a rough sketch follows this list) to:
    • Do any required setup (install python packages, etc)
    • Run the analysis
    • Put output files in a given directory relative to the script itself
  5. Download and save your code from the worker instance
  6. Create a tarball containing all the required code
  7. Test your wrapper script:
    • Launch a fresh analysis worker
    • Run your wrapper script
    • Verify output looks good
  8. Head back to the analysis dashboard, and schedule your analysis to run as needed
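
To make step 4 above more concrete, here is a minimal sketch of what such a wrapper script could look like. The package names, the my_analysis.py entry point, and the results/ layout are illustrative assumptions rather than anything the dashboard requires.

#!/bin/bash
# Hypothetical wrapper script for a scheduled Telemetry analysis job.
set -e

# Setup: install anything the analysis needs beyond the bare-bones Ubuntu image.
sudo pip install boto simplejson     # assumed extra dependencies; adjust as needed

# Run the analysis itself.
python my_analysis.py                # hypothetical analysis entry point

# Put output files in a directory relative to this script, matching the
# "Output Directory" value given in the scheduling form.
mkdir -p output
mv results/*.csv output/             # assumed location of the analysis results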

Dissecting the “Schedule a Job” form

  • Job Name is a unique identifier for your job. It should be a short, descriptive string. Think “what would I call this job in a code repo or hostname?” For example, the job that runs the data export for the SlowSQL dashboard is called slowsql. You can add your username to create a unique job name if you like (e.g. mreid-slowsql).
  • Code Tarball is the archive containing all the files needed to run your analysis. The machine on which the job runs is a bare-bones Ubuntu machine with a few common dependencies installed (git, xz-utils, etc), and it is up to your code to install anything extra that might be needed.
  • Execution Commandline is what will be invoked on the launched server. It is the entry point to your job. You can specify an arbitrary Bash command.
  • Output Directory is the location of results to be published to S3. Again, these results will be publicly visible.
  • Schedule Frequency is how often the job will run.
  • Day of Week for jobs running with weekly frequency.
  • Day of Month for jobs running with monthly frequency.
  • Time of Day (UTC) is when the job will run. Due to the distributed nature of the Telemetry data processing pipeline, there may be some delay before the previous day’s data is fully available. So if your job is processing data from “yesterday”, I recommend using the default value of Noon UTC.
  • Job Timeout is the failsafe for jobs that don’t terminate on their own. If the job does not complete within the specified number of minutes, it will be forcibly terminated. This avoids having stalled jobs run forever (racking up our AWS bill the whole time).

Example: SlowSQL

A concrete example of an analysis job that runs using the same framework is the SlowSQL export. The package.sh script creates the code tarball for this job, and the run.sh script actually runs the analysis on a daily basis.

In order to schedule the SlowSQL job using the above form, first I would run package.sh to create the code tarball (a sketch of such a packaging script follows the list below), then I would fill the form as follows:

  • Job Name: slowsql
  • Code Tarball: select slowsql-0.3.tar.gz
  • Execution Commandline: ./run.sh
  • Output Directory: output – this directory is created in run.sh and data is moved here after the job finishes
  • Schedule Frequency: Daily
  • Day of Week: leave as default
  • Day of Month: leave as default
  • Time of Day (UTC): leave as default
  • Job Timeout: 175 – typical runs during development took around 2 hours, so we wait just under 3 hours
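
For illustration, a packaging script along these lines needs to do little more than bundle the job's files into the tarball the form asks for. This is a hedged sketch that assumes the job consists of run.sh plus a single analysis script; the real package.sh in the repository may differ.

#!/bin/bash
# Hypothetical packaging script for the slowsql job; not the actual package.sh.
set -e

VERSION=0.3
NAME=slowsql-$VERSION

# Bundle the wrapper script and the analysis code into the tarball that
# gets uploaded in the "Code Tarball" form field.
tar czvf "$NAME.tar.gz" run.sh slowsql.py    # file list is an assumption

echo "Created $NAME.tar.gz"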

The daily data files are then published in S3 and can be used from the Telemetry SlowSQL Dashboard.

Beyond Typical Scheduling

The job runner doesn’t care if your code uses the Python MapReduce framework or your own hand-tuned assembly code. It is just a generalized way to launch a machine to do some processing on a scheduled basis.

So you’re free to implement your analysis using whatever tools best suit the job at hand.

The sky is the limit, as long as the code can be packaged up as a tarball and executed on an Ubuntu box.

The pseudo-code for the general job logic is

fetch $code_tarball
tar xzvf $code_tarball
`$execution_commandline`
cd $output_directory
publish --recursive ./ s3://public/$job_name/data/
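
A slightly more concrete, hedged rendering of that logic in Bash might look like the sketch below. The argument handling, the wget fetch, and the use of the AWS CLI for the publish step are all assumptions made for illustration; the actual job runner may use different tooling.

#!/bin/bash
# Hedged sketch of the job runner's general logic; not the real implementation.
set -e

code_tarball_url="$1"        # where the uploaded tarball lives (assumed to be a URL)
execution_commandline="$2"   # e.g. ./run.sh
output_directory="$3"        # e.g. output
job_name="$4"                # e.g. slowsql

wget "$code_tarball_url"                      # fetch the code tarball
tar xzvf "$(basename "$code_tarball_url")"    # unpack it
bash -c "$execution_commandline"              # run the job's entry point
cd "$output_directory"
# Publish everything the job wrote to its output directory; the bucket
# path mirrors the pseudo-code above and is an assumption.
aws s3 cp --recursive ./ "s3://public/$job_name/data/"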

Published Output

One final reminder: Keep in mind that the results of the analysis will be made publicly available on Amazon S3, so be absolutely sure that the output from your job does not include any sensitive data.

Aggregates of the raw data are fine, but it is very important not to output the raw data.

http://mreid-moz.github.io/blog/2014/03/14/scheduling-telemetry-analysis/


Frédéric Harper: I’ll speak at PragueJS, in Prague, Czech Republic

Friday, 14 March 2014, 17:56

PragueJS

On the evening of the 27th of March, I’ll be in Prague, Czech Republic, to speak at the local JavaScript user group. Lately, my speaking gigs have had the same goal: introducing people to Firefox OS. It’s less about getting people to know the product; my goal is to present an alternative to the current mobile duopoly. Even more important, I want to show developers how they can reach a new audience with the web applications they already have. Here is the abstract of my talk, “Empower mobile web developers with JavaScript & WebAPI”:

HTML5 is a giant step in the right direction: it gives developers many more of the tools they need to create better experiences for users. The problem? It’s not there yet when it comes to web development for mobile devices. In this talk, Frédéric Harper will show you how you can use HTML and JavaScript to build amazing mobile applications, and brush up what you have published before. You’ll learn about the open web technologies, including WebAPIs, and tools designed to get you started developing HTML apps for Firefox OS and the web.

So if you are in Prague and have an interest in web development, please join me at PragueJS (the event starts at 19:00) on March 27th at Node5.


--
I’ll speak at PragueJS, in Prague, Czech Republic is a post on Out of Comfort Zone from Frédéric Harper

Related posts:

  1. Mobile Startups Toronto, and Firefox OS as an opportunity The year is not yet done that I’m starting to...
  2. Firefox OS in Guadalajara, Mexico I usually do a blog post after my presentations, and...
  3. See you at FITC Toronto in April I’m a big fan of the FITC conferences in Toronto:...

http://outofcomfortzone.net/2014/03/14/ill-speak-at-praguejs-in-prague-czech-republic/?utm_source=rss&utm_medium=rss&utm_campaign=ill-speak-at-praguejs-in-prague-czech-republic


