Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/

This feed is generated automatically from the open RSS source at http://planet.mozilla.org/rss20.xml (original site: http://planet.mozilla.org/) and is updated as that source is updated. It may not match the content of the original pages.

Tantek Celik: Simplifying Standards & Reducing Their Security Surface: Towards A Minimum Viable Web Platform

Tuesday, March 10, 2015, 07:00

At the start of this month, I posted a simple note and question:

Thoughts yesterday lunch w @bcrypt: @W3C specs too big/complex. How do we simplify WebAPIs to reduce security surface?

With follow-up:

And @W3C needs a Security (#s6y) group that reviews all specs, like #i18n & #a11y (WAI) groups do. cc: @bcrypt @W3CAB

Which kicked off quite a conversation on Twitter (18 replies shown on load, 53 more dynamically upon scrolling if various scripts are able to load & execute).

Security & Privacy Reviews

Buried among those replies was one particularly constructive, if understated, reply from Mike West:

[…] mikewest.github.io/spec-questionnaire/security-privacy/ is an initial strawman for security/privacy self-review.

A good set of questions (even if incomplete) to answer in a self-review of a specification is an excellent start towards building a culture of reviewing security & privacy features of web standards.

While self-reviews are a good start, and will hopefully catch (or indicate the unsureness about) some security and/or privacy issues, I do still think we need a security group, made up of those more experienced in web security and privacy concerns, to review all specifications before they advance to being standards.

Such expert reviews could also be done continuously for "living" specifications, where a security review of a specification could be published as of a certain revision (snapshot) of a living specification, which then hopefully could be incrementally updated along with updates to the spec itself.

Specification Section for Security & Privacy Considerations

In a follow-up email Mike asked for feedback on specifics regarding the questionnaire, which I provided as a braindump email reply, offering to submit it as a pull request as well. After checking with Yan, who was also on the email, I decided to go ahead and do so. After non-trivially expanding a section, very likely beyond its original intent and scope (meta-ironically so), it seemed more appropriate to blog it in addition to submitting the pull request.

The last question of the questionnaire asks:

Does this specification have a "Security Considerations" and "Privacy Considerations" section?

Rather than the brief two-sentence paragraph starting with “Not every feature has security or privacy impacts”, which I think deserves a better reframing, I’ve submitted the below replacement text (after the heading) as a pull request.

Reducing Security Surface Towards Minimum Viability

Unless proven otherwise, every feature has potential security and/or privacy impacts.

Documenting the various concerns that have cropped up in one form or another is a good way to help implementers and authors understand the risks that a feature presents, and ensure that adequate mitigations are in place.

If it seems like a feature does not have security or privacy impacts, then say so inline in the spec section for that feature:

There are no known security or privacy impacts of this feature.

Saying so explicitly in the specification serves several purposes:

  1. Shows that a spec author/editor has actually considered (hopefully not just copy/pasted) whether there are such impacts.
  2. Provides some sense of confidence that there are no such impacts.
  3. Challenges security and privacy minded individuals to think of and find even the potential for such impacts.
  4. Demonstrates the spec author/editor's receptivity to feedback about such impacts.

The easiest way to mitigate potential negative security or privacy impacts of a feature, and to avoid even having to discuss the possibility, is to drop the feature.

Every feature in a spec should be considered guilty (of harming security and/or privacy) until proven otherwise. Every specification should seek to be as small as possible, even if only for the reasons of reducing and minimizing security/privacy attack surface(s).

By doing so we can reduce the overall security (and privacy) attack surface of not only a particular feature, but of a module (related set of features), a specification, and the overall web platform. Ideally this is one of many motivations to reduce each of those to the minimum viable:

  1. Minimum viable feature: cut/drop values, options, or optional aspects.
  2. Minimum viable web format/protocol/API: cut/drop a module, or even just one feature.
  3. Minimum viable web platform: cut/drop/obsolete entire specification(s).

Questions and Challenges

The above text expresses a specific opinion and perspective not only about web security and web standards, but about goals and ideals for the web platform as a whole. In some ways it raises more questions than it answers.

How do you determine minimum viability?

How do you incentivize (beyond security & privacy) the simplification and minimizing of web platform features?

How do we confront the various counter-incentives?

Or rather:

How do we document and cope with the numerous incentives for complexity and obfuscation that come from so many sources (some mentioned in that Twitter thread) that seem in total insurmountable?

No easy answers here. Perhaps material for more posts on the subject.

Thanks to Yan for reviewing drafts of this post.

http://tantek.com/2015/068/b1/security-towards-minimum-viable-web-platform


Justin Crawford: MDN Product Talk: Introduction

Tuesday, March 10, 2015, 01:24

In coming days I will post a series of blog posts about MDN, the product I am product manager for. I will talk about MDN’s business case, product strategy, and a series of experiments we can run on MDN in 2015 (and beyond) to help it continue to serve its audience and the web.

Many people familiar with MDN will consider some of the following posts obvious; to them I say, “stay with me.” Not all of this will be obvious to everyone. Some may be novel to everyone. Any of it may need clarification or improvement, which I will learn from gentle comments.

As a new member of the MDN team, I taxed my colleagues in pursuit of answers to such questions as…

  • What is MDN?
  • Whom is MDN for?
  • What are their problems?
  • How does MDN solve those problems?
  • What does solving those problems accomplish for Mozilla?

I posed such questions to almost everyone I could corner, from members of Mozilla’s steering committee to random web developers I met at kids’ birthday parties. I interrogated historic mailing list threads. I scanned through hundreds of enhancement requests in our bug backlog. I read books about product management. I doodled architectural diagrams and flowcharts. And finally I think I came to an understanding. I hope sharing it is helpful.

So what is MDN? Is it a documentation site? A developer program? A wiki? A network? A suite of products? A single product? A brand? A railway station? It depends on whom you ask and when. Few of the people I cornered could say exactly what MDN is or ought to be. Most had an idea of what MDN ought to accomplish, but their ideas varied wildly.

One thing is clear: MDN is successfully serving a large portion of its potential market and has been doing so in its present form for nearly 10 years. Which is great. But… what is MDN?

For most of MDN’s primary audience — web developers — MDN is a documentation wiki (i.e. a user-generated content site or UGC) serving trustworthy information about open web technologies (primarily HTML, JavaScript and CSS). And of course the audience is correct.

But MDN is also a brand that resonates with its audience and fans. And MDN is also a community of contributors who care about MDN. And MDN is also a platform where all these things mingle and mix.

All told, MDN includes (at least) 6 product offerings in operation or development and (at least) 8 significant enhancement projects underway, as well as numerous activities and projects that sustain the documentation wiki. The interplay between these activities is complex.

OK, great; but what is MDN?!? Considering its scope — the variety of its critical stakeholders, the complexity of its value chain, the number of supporting activities underway — I think MDN can only be considered a product ecosystem.

[Diagram: the MDN product ecosystem]

  • MDN is a strong brand supported by a number of products and activities.
  • Chief among these is a platform, the documentation wiki, which is also called MDN.
  • Within the platform content is huge: MDN’s primary audience visits solely for the content, because the content is valuable. The MDN brand’s authority and the MDN platform’s scale both depend on the MDN content’s quality.
  • As an open-source web application serving user-generated content about open standards, contribution overlaps almost every aspect of MDN — especially content creation. The MDN content’s quality depends in large part on contribution.
  • Various marketing efforts support MDN, strengthening its brand, attracting new visitors, and activating contributors. MDN marketing efforts among developers also support Mozilla’s brand.
  • In response to a clear market need, the MDN team is experimenting with some developer-facing services that will be partly supported by the content and platform.

That’s a lot of moving pieces — and we haven’t even begun talking about whom MDN serves and what problems MDN solves for them! Look for answers to those questions and much more in coming posts.

As we go I’ll use the above diagram to help contextualize things. More to come!

:wq

http://hoosteeno.com/2015/03/09/mdn-product-talk-introduction/


David Burns: Marionette - Where we are

Tuesday, March 10, 2015, 00:44

I thought I would spend some time describing where we are with the Marionette project. For those who don't know, Marionette is the project to implement WebDriver in Firefox. We are implementing WebDriver based on the W3C WebDriver specification.

We have been working quite hard to get as much of the implementation done as per the specification. One thing to note: there are a few places where the specification and the open source project have diverged, but hopefully a Selenium 3 release can align them again.

So... what is left to do for the Marionette project to be able to ship its 1.0 release?

and a few other things. Feel free to look at our current roadmap!

That means we have some of the big ticket items, like modal dialog support, landed! We have some of the actions landed and most importantly we have large parts of a driver executable (written in Rust!), like chromedriver or internetexplorerdriver, completed.

Some things are going slower than anticipated and other sections are shooting along, so all in all I am really pleased with the current progress!

If you want to help out, we have a number of good first bugs that you can all join in on!

http://www.theautomatedtester.co.uk/blog/2015/marionette-where-we-are.html


Air Mozilla: Mozilla Weekly Project Meeting

Monday, March 9, 2015, 21:00

Mozilla WebDev Community: Webdev Extravaganza – March 6th

Monday, March 9, 2015, 19:32

Once a month, web developers from across Mozilla get together to practice our dowsing technique. While we compare the latest models of dowsing rods, we find time to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Bedrock Static Media

First up was pmac, who shared the news that Bedrock has switched to using Django static files for static media instead of using the older MEDIA_URL-based methods. As part of this switch, they’ve switched from jingo-minify to django-pipeline, switched from using Apache to serve the static files to Whitenoise (paired with the existing CDN), and switched to Python 2.7.

Contribute.json is now Responsive

Next, peterbe told us that the contribute.json webpage is now responsive thanks to a patch from the ever-impressive contributor koddsson.

MasterFirefoxOS Training Hub

jgmize informed us that masterfirefoxos.mozilla.org launched. It’s a training website for people working in retail to sell Firefox OS phones. Due to the small audience for the site, they were able to experiment with a bunch of new ideas for creating and hosting our websites, including using the sugardough application template (which comes with Django 1.7, Python 3.4, and Docker + Fig for local development), running the site on a Deis cluster on Amazon Web Services, and implementing continuous delivery via automated deploys with Jenkins.

Input on Django 1.7

r1cky wants people to know that Input upgraded to Django 1.7. Yay!

DXR Holding Back for Your Safety

ErikRose mentioned some work on DXR that has yet to ship (but will soon, hopefully), including some improvements to Python language support (courtesy of yours truly), image thumbnails (courtesy of new_one, a future DXR intern), and a revamped config system.

games.mozilla.org

lonnen shared news about games.mozilla.org, a landing page created for Mozilla’s presence at GDC 2015. The site was developed mostly by cvan, with help from a few other people like adrian. Not only is the site itself cool, but the site’s deployment strategy is impressively simple; it’s a private Github repo that deploys to Heroku, which in turn is wrapped by CloudFront such that we only need 1 dyno to serve the site even when traffic surges.

Self-Repair Server

lonnen also shared the Self-Repair Server, which is part of an experimental system to have Firefox notice problems in the browser and proactively fix them itself. It’s similar to games.mozilla.org in that its end result is static data, but because it’s a public repo, we use TravisCI to deploy the content to S3, which sits behind CloudFront.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

django-tidings 1.0 is Out

ErikRose released django-tidings 1.0, a framework for asynchronous email notifications from Django. The new release supports Django 1.6 (and possibly up), includes a tox config for local testing against multiple Django versions, and more!

pyelasticsearch 1.1

Next ErikRose shared news about the 1.1 release of pyelasticsearch, which is a Python API to Elasticsearch. 1.1 comes with a new bulk API for bulk operations (such as index, update, create, and delete) as well as improved documentation.

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.

  • lismanb (Volunteer): 3 bugs closed on Bedrock thus far. Congrats!
  • lgp171188 (Volunteer): 25+ pull requests on Input, fixing Vagrant provisioning and the “new contributor” experience. Nice!
  • lcamacho (Volunteer): 19 commits on airmozilla, including docker+fig and selenium tests using Django LiveServer. Awesome!

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else. And this week, the Roundtable is empty!


If you want to know what precious metals, water reservoirs, or grave sites may be hiding on your land, send two Bitcoins to our address and we’ll be in touch.

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

https://blog.mozilla.org/webdev/2015/03/09/webdev-extravaganza-march-6th/


Ben Kelly: Initial Cache API Lands in Nightly

Monday, March 9, 2015, 18:30

It’s been two busy weeks since the last Service Worker build and a lot has happened. The first version of the Cache API has landed in Nightly along with many other improvements and fixes.

The Cache landing is nice because it was the largest set of patches blocking users from testing directly in Nightly. Finally getting it into the tree brings us much closer to the point where we don’t need these custom builds any more.

We’re not there yet, though. The custom builds will still be needed until the following two issues are fixed:

  • Cache.put() currently does not work. In order to fix this we must integrate Cache with the CrossProcessPipe. These patches have been in the custom builds from the start, but we must complete the work in order for most Service Worker sites to be usable on Nightly. | bug 1110814
  • Service Worker scripts and their dependencies are not currently saved for offline access. Obviously, we must fix this in order for Service Workers to provide true offline support. This feature is in progress, but unfortunately is not in the custom build yet. | bug 931249

Once these two bugs are fixed we will begin encouraging the community to test with Nightly directly.
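As a sketch of what exercising the new API will look like once Cache.put() works end-to-end, the following snippet opens a named cache and stores a fetched response. The cache name and URL are hypothetical, and the feature check keeps the snippet harmless in environments where the API is absent:

```javascript
// Hypothetical example of storing a resource with the Cache API.
// 'offline-v1' and '/app.js' are made-up names for illustration.
if (typeof caches !== 'undefined') {
  caches.open('offline-v1').then(function(cache) {
    return fetch('/app.js').then(function(response) {
      // put() stores the request/response pair; this is the call that
      // still needs the CrossProcessPipe integration (bug 1110814).
      return cache.put('/app.js', response.clone());
    });
  }).catch(function(err) {
    console.log('Cache API not fully functional yet:', err);
  });
}
```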

This week’s build was updated as of yesterday, March 8:

Firefox Service Worker Builds

This build includes the following feature changes in Nightly:

  • Cache API | bug 940273
  • FetchDriver channel stalls when Service Worker returns from fetch event too early | bug 1130803
  • remove Service Worker Cache “prefixMatch” option | bug 1130452
  • ServiceWorkerGlobalScope.close() should throw InvalidAccessError | bug 1131353
  • ServiceWorkerClients API spec changes | bug 1058311
  • Remove ServiceWorkerGlobalScope.scope | bug 1132673
  • ServiceWorker: client.postMessage should be dispatched to navigator.serviceWorker.onmessage | bug 1136467

It also includes these bug fixes:

  • navigator.serviceWorker.controller does not track underlying state | bug 1131882
  • Fix registration persistence in some activation cases | bug 1131874
  • Don’t persist registrations that fail | bug 1132141
  • FetchDriver should check content load policy before proceeding | bug 1139665
  • Use correct principal for channel which updates ServiceWorker | bug 1137419
  • Seg Fault when calling cache.matchAll without parameters | bug 1138916
  • Crash in ActorChild::SetFeature | bug 1140065
  • Fix -Winconsistent-missing-override warnings introduced in Cache API | bug 1139603
  • disallow creating nested workers from ServiceWorker | bug 1137398

Finally, a number of testing changes were made:

  • Replace getServiced() with matchAll() in a couple of ServiceWorker tests | bug 1137477
  • Various ServiceWorker test fixes | bug 1139561
  • Remove activatingWorker warning in ServiceWorkerManager | bug 1139990
  • Remove a couple of unused test functions on ServiceWorkerContainer | bug 1140120
  • nice to have a test-interfaces.html for ServiceWorkers | bug 1137816

Many thanks to all the contributors!

Please let us know if you find any new issues. Thank you!

https://blog.wanderview.com/blog/2015/03/09/initial-cache-api-lands-in-nightly/


Christian Heilmann: Advancing JavaScript without breaking the web

Monday, March 9, 2015, 17:52

Current advancements in ECMAScript are a great opportunity, but also a challenge for the web. Whilst adding new, important features we’re also running the danger of breaking backwards compatibility.

These are my notes for a talk I gave at the MunichJS meetup last week. You can see the slides on Slideshare and watch a screencast on YouTube. There will also be a recording of the talk available once the organisers are done with post-production.

JavaScript – the leatherman of the web

[Image: rainbow unicorn kitten] Accurate visualisation of the versatility of JavaScript

JavaScript is an excellent tool, made for the web. It is incredibly flexible, light-weight, has a low learning threshold and a simple implementation mechanism. You add a SCRIPT element to an HTML document, include some JS directly or link to a JS file and you are off to the races.

JavaScript needs no compilation step and is independent of any IDE or development environment. You can write it in any text editor, be it Notepad, VI, Sublime Text, Atom, Brackets or even using complex IDEs like Eclipse, Visual Studio or whatever else you use to put text into a file.

JavaScript – the enabler of new developers

JavaScript doesn’t force you to write organised code. From a syntax point of view and when it comes to type safety and memory allocation it is an utter mess. This made JavaScript the success it is now. It is a language used in client environments like browsers and apps. For example, you can script Illustrator and Photoshop with JavaScript, and you can now also automate OS X with it. Using Node or io.js you can use JavaScript server-side and write APIs and bespoke servers. You can even run JS directly on hardware.

The forgiving nature of JS is what made it the fast-growing success it is. It allows people to write quick and dirty things and get a great feeling of accomplishment. It drives a fast-release economy of products. PHP did the same thing server-side when it came out: it is a templating language that grew into a programming language because people used it that way and it was easier to implement than Perl or Java at the time.

JavaScript broke with conventions and challenged existing best practices. It didn’t follow an object-oriented approach, and its prototypical nature and scope trickery can make it look like a terribly designed hack to people who come from an OO world. It can also look like a very confusing construct of callbacks and anonymous functions to people who come to it from CSS or the design world.

But here’s the thing: every one of these people is cordially invited to write JavaScript – for better or worse.

JavaScript is here to stay

The big success of JavaScript amongst developers is that it was available in every browser since we moved on from Lynx. It is out there and people use it and – in many cases – rely on it. This is dangerous, and I will come back to this later, but it is a fact we have to live with.

As with everything distributed on the web once, there is no way to get rid of it again. We also cannot dictate that our users use a different browser that supports another language or runtime we prefer. The fundamental truth of the web is that the user controls the experience. That’s what makes the web work: you write your code for the Silicon Valley dweller on an 8-core state-of-the-art mobile device with an evergreen and very capable browser, a fast wireless connection and much money to spend. The same code, however, should work for the person who saved up their money to have half an hour in an internet cafe in an emerging country, on a Windows XP machine with an old Firefox and a very slow and flaky connection. Or the person whose physical condition makes them unable to see, speak, hear or use a mouse.

Our job is not to tell that person off for failing to keep up with the times and upgrade their hardware. Our job is to use our smarts to write intelligent solutions: solutions that test which of their parts can execute and only give those to that person. Web technologies are designed to be flexible and adaptive, and if we don’t understand that, we shouldn’t pretend that we are web developers.

The web is a distributed system of many different consumers. This makes it a very hostile development environment, as you need to prepare for a lot of breakage and unknowns. It also makes it the platform that reaches many more people than any more defined and closed environment ever could. It is also the one that allows the next consumers to get to us. Its hardware independence means people don’t have to wait for the availability of devices. All they need is something that speaks HTTP.

New challenges for JavaScript

This is all fine and well, but we have reached a point in the evolution of the web where we use JavaScript to such an extent that we need to start organising it better. It is possible to hack together large applications and even server-side solutions in JavaScript, but in order to control and maintain them we need to consider writing cleaner JavaScript and be more methodical in our approach. We could invent new ways of using it. There is no shortage of that happening, seeing that new JavaScript frameworks and libraries are published almost weekly. Or we could try to tweak the language itself to play more by rules that have proven themselves over decades in other languages.

In other words, we need JavaScript to be less forgiving. This means we will lose some of the first-time users as stricter rules are less fun to follow. It also means though that people coming from other, higher and more defined languages can start using it without having to re-educate themselves. Seeing that there is a need for more JavaScript developers than the job market can deliver, this doesn’t sound like a bad idea.

JavaScript – the confused layer of the web

Whilst JS is a great solution to making the web respond more immediately to our users, it is also very different to the other players like markup and style sheets. Both of these are built to be forgiving without stopping execution when encountering an error.

A browser that encounters an unknown element shrugs, doesn’t do anything with it and moves on in the DOM to the next element it does understand and knows what to do with. The HTML5 parser, on encountering an unclosed or wrongly nested element, will fix these issues under the hood and move on, turning the DOM into an object collection and a visual display.

A CSS parser encountering a line with a syntax error or a selector it doesn’t understand skips that instruction and moves on to the next line. This is why we can use browser-prefixed selectors like -webkit-gradient without having to test if the browser really is WebKit.

JavaScript isn’t that way. When a script encounters a syntax error, or you try to access a method, object or property that doesn’t exist, it stops executing and throws an error. This makes sense, seeing that JavaScript is much more powerful than the others and can even replace them. You are perfectly able to create a web product with a single empty BODY element and let JavaScript do the rest.
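For example, calling a method that does not exist throws, and, untrapped, that error halts the rest of the script (the object and method names here are made up for illustration):

```javascript
// Unlike an unknown HTML element or CSS selector, an unknown method stops JS cold.
var page = {};
try {
  page.getTitle(); // page.getTitle is undefined, so calling it throws
} catch (e) {
  console.log(e.name); // "TypeError"
}
// Without the try/catch, no code after the failing call would run.
```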

JavaScript – playing it safe by enhancing progressively

This makes JavaScript a less reliable technology than the other two. We punish our end users for mistakes made by us, the developers. Ironically, this is exactly the reason why we shunned XHTML and defined HTML5 as its successor.

Many things can go wrong when you rely on JavaScript, and end users deliberately turning it off is a ridiculously small part of that. That’s why it is good practice not to rely on JavaScript, but instead to test for it and enhance a markup- and page-reload-based solution when and if the browser was able to load and execute our script. This is called progressive enhancement and it has been around for a long time. We even use it in the physical world.

When you build a house and the only way to get to the higher floors is a lift, you broke the house when the lift stops working. If you have stairs to also get up there, the house still functions. Of course, people need to put more effort in to get up and it is not as convenient. But it is possible. We even have moving stairs called escalators that give us convenience and a fall-back option. A broken down escalator is a set of stairs.

Our code can work the same. We build stairs and we move them when the browser can execute our JavaScript and we didn’t make any mistakes. We can even add a lift later if we wanted to, but once we built the stairs, we don’t need to worry about them any more. Those will work – even when everything else fails.

JavaScript – setting a sensible baseline

The simplest way to ensure our scripts work is to test for capabilities of the environment. We can achieve that with a very simple if statement. By testing for properties and objects of newer browsers we can block out those we don’t want to support any longer. As we created an HTML/server solution to support those, this is totally acceptable and a very good idea.

There is no point punishing ourselves as developers by having to test in browsers used by a very small percentage of our users, browsers that aren’t even available on our current operating systems any longer. By not giving these browsers any JavaScript we have them covered: we don’t bother them with functionality that the hardware they run on is most likely not capable of supporting anyway.

The developers at the BBC call this “cutting the mustard” and have published a few articles on it. The current test used to not bother old browsers is this:

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  // Capable browser. 
  // Let's add JavaScript functionality
}

Recently, Jake Archibald of Google found an even shorter version to use:

if ('visibilityState' in document) {
  // Capable browser. 
  // Let's add JavaScript functionality
}

This prevents JavaScript from running in Internet Explorer versions older than 10 and in Android browsers based on WebKit. It is also extensible to other upcoming browser technologies and simple to tweak to your needs:

if ('visibilityState' in document) {
  // Capable browser. 
  // Let's load JavaScript
  if ('serviceWorker' in navigator) {
    // Let's add offline support
    navigator.serviceWorker.register('sw.js', {
      scope: './'
    });
  }
}

This, however, fails to work when we start changing the language itself.

Strict mode – giving JavaScript new powers

In order to make JavaScript safer and cleaner, we needed its parser to be less forgiving. To ensure we don’t break the web by flagging up all the mistakes developers made in the past, we needed to find a way to opt in to these stricter parsing rules.

A rather ingenious way of doing that was to add the “use strict” parser instruction. This means we precede our scripts with a simple string followed by a semicolon. For example, the following JavaScript doesn’t cause an error in a browser:

x = 0;

The lenient parser doesn’t care that the variable x wasn’t declared; it just sees a new x and defines it. If you use strict mode, the browser isn’t as calm about this:

'use strict';
x = 0;

In Firefox’s console you’ll get a “ReferenceError: assignment to undeclared variable x”.
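The difference is easy to reproduce outside the browser as well, for instance in Node (a sketch; the Function constructor is used here so the sloppy-mode assignment stays contained in its own function body):

```javascript
// Sloppy mode: assigning to an undeclared variable silently creates a global.
var sloppy = new Function('x = 0; return typeof x;');
console.log(sloppy()); // "number"

// Strict mode: the same assignment throws a ReferenceError.
var strict = new Function('"use strict"; y = 0;');
try {
  strict();
} catch (e) {
  console.log(e.name); // "ReferenceError"
}
```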

This opt-in works to advance JavaScript to a more defined and less memory-consuming language. In a recent presentation, Andreas Rossberg of Google’s V8 team proposed using this mechanism to advance JavaScript to safer and cleaner versions called SaneScript and subsequently SoundScript. All of these are just proposals, and after legitimate complaints from the mental health community there is now a plan to call it StrongScript. Originally the idea was to opt in to this new parser using a string called ‘use sanity’, which is cute, but also arrogant and insulting to people suffering from cognitive problems. As you can see, advancing JS isn’t easy.

ECMAScript – changing the syntax

Opting in to a new parser with a string, however, doesn’t work when we change the syntax of the language. And this is what we do right now with ECMAScript, which is touted as the future of JavaScript and covered in many a post and conference talk. For a history lesson on all of this, check out Florian Scholz’s talk at jFokus called “What’s next for JavaScript – ES6 and beyond”.

ECMAScript has a few excellent new features. You can see all of them in the detailed documentation on MDN, and this post also has a good overview. It brings classes to JavaScript, sanitises scope issues, allows for template strings that span several lines and support variable substitution, adds promises, and does away with the need for a lot of anonymous functions, to name just a few.

It does, however, change the syntax of JavaScript and by including it into a document or putting it inside a script element in a browser that doesn’t support it, all you do is create a JavaScript error.

There is no progressive enhancement way around this issue, and an opt-in string doesn’t do the job either. In essence, we break the backwards compatibility of scripting on the web. This would not be a big issue if browsers supported ES6, but we’re not quite there yet.
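The closest thing to a feature test for new syntax is to hand a sample of it to the parser at runtime and catch the SyntaxError. A sketch of that idea (supportsSyntax is a hypothetical helper name, not a standard API):

```javascript
// Ask the engine's parser whether it accepts a syntax sample.
// new Function() parses its body immediately, so it throws a
// SyntaxError in engines that don't understand the sample.
function supportsSyntax(sample) {
  try {
    new Function(sample);
    return true;
  } catch (e) {
    return false;
  }
}

var hasArrows = supportsSyntax('return () => 0;');
var hasClasses = supportsSyntax('class A {}');

// A capable browser could then load an ES6 bundle and others a
// transpiled fallback, e.g. via dynamically created script elements.
```

Note that this costs an extra parse per test and fails under Content Security Policy settings that disallow eval-like constructs, which is part of why build-time conversion won out in practice.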

ES6 support and what to do about it

The current support table for ECMAScript 6 in browsers, parsers and compilers doesn’t look too encouraging. A lot of it is red, and in many cases it is unknown whether the makers of the products we rely on to run JavaScript will take the plunge.

In the case of browsers, the ECMAScript test suite to run your JavaScript engine against is publicly available on GitHub. That also means you can run your current browser against it and see how it fares.

If you want to help with the adoption of ECMAScript in browsers, please contribute to this test suite. It is the one place they all test against, and the better the tests we supply, the more reliable our browsers will become.

Ways to use the upcoming ECMAScript right now

The very nature of the changes to ECMAScript makes it more or less impossible to use it across browsers and other JavaScript-consuming tools. Because many of the changes to the language are syntax errors in current JavaScript, and the parser is not lenient about them, advancing the language means writing code that legacy environments reject as erroneous.

If we consider the use cases of ECMAScript, this is not that much of an issue. Many of the problems solved by the new features are either enterprise problems that only pay high dividends when you build huge projects or features needed by upcoming functionality of browsers (like, for example, promises).

The changes mostly mean that JS gets real OO features, becomes more memory-optimised, and becomes a more enticing compile target for developers coming from other languages. In other words, it is targeted at an audience that is unlikely to start writing code from scratch in a text editor, but one already coming from a build environment or IDE.

That way we can convert the code to JavaScript in a build process or on the fly. This is nothing shocking these days – after all, we do the same when we convert SASS to CSS or Jade to HTML.

Quite some time ago, new languages like TypeScript were introduced to give us the functionality of ECMAScript 6 now. Another tool is Babel.js, which even has a live editor that lets you see what your ES6 code gets converted to in order to run in legacy environments.
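For a feel of what such a conversion does, here is an ES6 arrow function next to roughly what a transpiler emits for ES5 engines (the exact output varies between tools and versions):

```javascript
// ES6 source, as you would write it:
//   const double = n => n * 2;

// Roughly the ES5 a transpiler generates from it:
"use strict";
var double = function double(n) {
  return n * 2;
};

console.log(double(21)); // 42
```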

Return of the type attribute?

Another way to get around the issue of browsers not supporting ECMAScript and choking on the new syntax could be to use a type attribute. Whenever you add a type value to a script element that the browser doesn’t understand, it skips the element and doesn’t bother the script engine with its content. In the past we used this to create HTML templates, and Microsoft had its own JS derivative called JScript, which gave you much more power to use Windows internals than JavaScript did.

One way to ensure that all of us could use the ECMAScript of now and tomorrow safely would be to get browsers to support a type of ‘ES’ or something similar. The question is whether that is really worth it, given the problems ECMAScript is trying to solve.

We’ve moved on in the web development world from embedding scripts and view-source to development toolchains and debugging tools in browsers. If we stick with those, switching from JavaScript to ES6 is much less of an issue than trying to get browsers to either parse or ignore syntactically wrong JavaScript.

Update: Axel Rauschmayer proposes something similar for ECMAScript modules: a MODULE element that contains a SCRIPT with a type of “module” as a fallback for older browsers.

It doesn’t get boring

In any case, this is a good time to chime in when there are discussions about ECMAScript. We need to ensure that we are using new features when they make sense, not because they sound great. The power of the web is that everybody is invited to write code for it. There is no 100% right or wrong.

http://christianheilmann.com/2015/03/09/advancing-javascript-without-breaking-the-web/


Air Mozilla: HLS Public Stream test

Monday, March 9, 2015, 02:00

Pascal Finette: Link Pack (March 8th)

Sunday, March 8, 2015, 11:00

Air Mozilla: Mozilla / UNESCO salon on Digital Literacy

Friday, March 6, 2015, 21:30

Mozilla / UNESCO salon on Digital Literacy: Open conversation on how to achieve universal digital literacy.

https://air.mozilla.org/mozilla-unesco-salon-on-digital-literacy/


Mozilla Release Management Team: Firefox 37 beta2 to beta3

Friday, March 6, 2015, 20:01

Once more, this beta is impressive in terms of the number of changesets. The highlight of this beta is the landing of the EME API. Fortunately, as the changes are isolated, the impact on the release should be minimal.

We also enabled the Flash protected mode.

  • 102 changesets
  • 217 files changed
  • 7009 insertions
  • 2041 deletions

Extension: Occurrences
cpp: 70
h: 60
js: 15
webidl: 10
html: 9
build: 9
java: 5
css: 5
xul: 4
mn: 4
jsm: 4
xml: 3
ini: 2
inc: 2
in: 2
c: 2
xsl: 1
svg: 1
sjs: 1
py: 1
properties: 1
mm: 1
ipdl: 1
info: 1
idl: 1
conf: 1

Module: Occurrences
dom: 128
media: 28
browser: 28
js: 11
mobile: 8
security: 3
toolkit: 2
layout: 2
gfx: 2
xulrunner: 1
widget: 1
testing: 1
services: 1
modules: 1

List of changesets:

Geoff Brown - Bug 1099175 - Skip conformance/textures/texture-npot.html on android. r=jgilbert, a=test-only - 70787b6f48c3
David Major - Bug 1137050 - Don't SetThreadContext if the context didn't change. r=luke, a=lizzard - 5334c7c0b2ce
Steve Workman - Bug 1093983 - Disable type ANY request in DNS used to get TTL on Windows. r=mcmanus, a=lizzard - 082769bdd62a
Ben Hearsum - Bug 1138924: fix win64's xulrunner mozconfig. r=rail, a=bustage - 93edd4dced6e
Jean-Yves Avenard - Bug 1131433 - More non-unified build fixes. a=bustage - f149caa91650
Jean-Yves Avenard - Bug 1138731 - Fix non-unified compilation in TextInputHandler. r=smichaud, a=bustage - 64549e948fc8
Kartikaya Gupta - Bug 1137952 - Call mozRequestAnimationFrame on the right window. r=mstange, a=test-only - d29e62045cc8
JW Wang - Bug 1102852 - add MediaKeyMessageType to and remove destinationURL from MediaKeyMessageEvent. r=cpearce,bz a=lmandel ba=lmandel - c7d212eecc8e
JW Wang - Bug 1111788 - Part 1 - include timestamps for "browser:purge-session-history" notification. r=gavin a=lmandel - 6195599f25e0
JW Wang - Bug 1111788 - Part 2 - have GeckoMediaPluginService listen to "browser:purge-session-history" event. r=cpearce a=lmandel - 67145bce29be
JW Wang - Bug 1111788 - Part 3 - clear nodeIds/records which are modified after the time of "clear recent history". r=cpearce. a=lmandel - 74d72da474f9
JW Wang - Bug 1120295 - test case for "Clear Recent History" command. r=cpearce. a=lmandel - 3c5c3aa669f6
Matthew Gregan - Bug 1124023 - Fix naming of GMPAudioDecoderCallbackProxy. r=cpearce a=lmandel - 589dc8554797
Matthew Gregan - Bug 1124021 - Fix dangerous UniquePtr usage pattern in GMP. r=cpearce a=lmandel - 3f463a602bea
Matthew Gregan - Bug 1122372 - Fix dangerous UniquePtr usage pattern in AudioStream. r=cpearce a=lmandel - bb90dd41c737
Edwin Flores - Bug 1118383 - Plug memory leak in openaes - r=cpearce a=lmandel - 5525ed289797
Edwin Flores - Bug 1124491 - Test HTMLMediaElement.isEncrypted attribute - r=cpearce a=lmandel - dccbd236f4f8
Edwin Flores - Bug 1124491 - Add HTMLMediaElement.isEncrypted attribute - r=cpearce,bz a=lmandel - 894e85d470e3
JW Wang - Bug 1124939 - Add "individualization-request" to MediaKeyMessageType. r=bz a=lmandel - 6f83d3fe38da
Edwin Flores - Bug 1101304 - Handle CORS in EME - r=cpearce a=lmandel - 7503ad43a7fd
JW Wang - Bug 1081251 - register error handlers for all media elements in EME mochitests. r=cpearce a=lmandel - 00ac75ab182f
Edwin Flores - Bug 1101304 - Test that EME works with CORS - r=cpearce a=lmandel - 7bc573c193ea
JW Wang - Bug 1121332. Part 1 - add media key status to gmp-api. r=cpearce. a=lmandel - 7a0c7799b5ea
JW Wang - Bug 1121332. Part 2 - expose media key status from CDMCaps. r=cpearce a=lmandel - ffdf11b39ebf
JW Wang - Bug 1121332. Part 3 - export MapObject from JS. r=jorendorff. a=lmandel - e29d774c7215
JW Wang - Bug 1121332. Part 4 - implement MediaKeySession.keyStatuses and remove MediaKeySession.getUsableKeyIds. r=bz. a=lmandel - 3d9497f46338
JW Wang - Bug 1121332. Part 5 - update EME mochitests for webidl changes. r=cpearce a=lmandel - 4bcb6239d04b
JW Wang - Bug 1121332. Part 6 - update testinterfaces.html. r=bz a=lmandel - 075916728a00
JW Wang - Bug 1083658 - add "output-downscaled" to GMP. r=cpearce. a=lmandel - c4b5f9a4cc0a
Edwin Flores - Bug 1075199 - Import WMF decoding code from cpearce's gmp-clearkey implementation - r=cpearce a=lmandel - 9910b5a6a99f
Edwin Flores - Bug 1075199 - WMF decoding in ClearKey CDM - r=cpearce a=lmandel - 6cb6bddb9b9d
Edwin Flores - Bug 1075199 - More logging in ClearKey CDM - r=cpearce a=lmandel - 8fb0193c1399
Edwin Flores - Bug 1075199 - Extend lifetime of VideoHost in GMPVideoDecoderParent to stop its destruction racing with the deallocation of video buffers - r=jesup a=lmandel - ed78f124783d
Edwin Flores - Bug 1075199 - Output a different clearkey.info depending on platform - r=cpearce,gps a=lmandel - c197f7371955
JW Wang - Bug 1128379 - improve error handling in MediaKeys::CreateSession. r=bz a=lmandel - 88ab5bafc85a
JW Wang - Bug 1128389 - rename "keyschange" to "keystatuseschange" per spec change. r=bz. r=cpearce. a=lmandel - 336529d8cd1a
Chris Pearce - Bug 1129229 - Recognize com.adobe.primetime keysystem string. r=edwin a=lmandel - 949ce3e9c42e
Matthew Gregan - Bug 1121258 - Add a GMP PDM to allow MP4 playback via OpenH264. r=cpearce a=lmandel - f2e35a9f30a7
Matthew Gregan - Bug 1128794 - Refactor EME decoders on top of GMP decoders. r=cpearce a=lmandel - 87bba928e233
Edwin Flores - Bug 1129722 - Add {Hold,Drop}JSObjects to MediaKeyStatusMap - r=jwwang,bz a=lmandel - 93c5dec5ad4b
Matthew Gregan - Bug 1130923 - Remove some DASHDecoder remnants: RestrictedAccessMonitor and GetByteRangeForSeek. r=cpearce a=lmandel - 74fe432c68e8
Matthew Gregan - Bug 1131340 - Avoid delegating constructors since GCC 4.6 lacks support. r=cpearce a=lmandel - d102a4ff97be
Matthew Gregan - Bug 1131340 - Avoid template aliasing since GCC 4.6 lacks support. r=cpearce a=lmandel - e6af00cdcfe7
JW Wang - Bug 1130906 - remove HTMLMediaElement.waitingFor for spec. changes. r=cpearce. r=bz. a=lmandel - d72d2f792d90
Edwin Flores - Bug 1113474 - Keep MediaKeys alive until it has resolved all of its stored promises - r=cpearce a=lmandel - c85410a124c6
Edwin Flores - Bug 1113474 - Release MediaKeys when cleaning up pending promises - r=jwwang a=lmandel - 706bf5c21e6d
JW Wang - Bug 1130917 - Part 1 - disallow multiple records with the same name in GMPStorageChild::CreateRecord(). r=edwin a=lmandel - 7a78fefaf5bd
JW Wang - Bug 1130917 - Part 2 - improve error handling of StoreData() and ReadData(). r=edwin. a=lmandel - d02a943f7351
JW Wang - Bug 1130917 - Part 3 - fix EME gtests. r=edwin. a=lmandel - acb510bddadd
JW Wang - Bug 1130932 - allow GMPDecryptorParent::RecvKeyStatusChanged calls after Close(). r=edwin. a=lmandel - 5ee41a13b1ee
Chris Pearce - Bug 1131755 - Make media.eme.enabled pref enable/disable EME rather than hide/expose EME. r=bz a=lmandel - b2add82a76ce
JW Wang - Bug 1132366 - Correct place to call MediaKeys::Release() during shutdown. r=edwin a=lmandel - 4cb81cd7b63c
JW Wang - Bug 1132780 - Fix namespace and include files in MediaKeyStatusMap.cpp/h. r=cpearce a=lmandel - e3bf6bb9b33a
Chris Pearce - Bug 1111160 - Dispatch observer service notifications when content succeeds or fails to get CDM access. r=bz a=lmandel - 4c7cf01583e2
Edwin Flores - Bug 1133370 - Remove redundant Shutdown() call in MediaDataDecoderCallbackProxy::Error() - r=kinetik a=lmandel - e4eece82fbe1
Chris Peterson - Bug 1133291 - Remove unused code from Clear Key's openaes. r=cpearce a=lmandel - e13431adabfd
Gijs Kruitbosch - Bug 1133583 - pass window in EME notifications instead of null subject, r=cpearce a=lmandel - 448ff154c5fd
Jacek Caban - Bug 1133479 - Fixed media/gmp-clearkey build on mingw. r=cpearce a=lmandel - d8e655d11fc5
Stephen Pohl - Bug 1089867: Rename media.eme.adobe-access.enabled pref to media.gmp-eme-adobe.enabled. r=cpearce a=lmandel - 9b9911bc6bd5
Chris Pearce - Bug 1124031 part 1 - Expose GMP version on GMPParent. r=jesup a=lmandel ba=lmandel - f0b35fc2bfbf
Chris Pearce - Bug 1124031 part 2 - Rename EMELog.h to EMEUtils.h. r=bz a=lmandel - cee66f9d30e7
Chris Pearce - Bug 1124031 part 3 - Parse min CDM version from EME keySystem string. r=bz a=lmandel - 16dddf827464
Chris Pearce - Bug 1124031 part 4 - Enforce min CDM version from keySystem string. r=bz a=lmandel - 6437b406a0fa
Chris Pearce - Bug 1137489 - Fix unified build failure in gmp-clearkey. r=edwin a=lmandel - d56acccf3b69
Chris Pearce - Bug 1137957 - Fix non-unified build failure in GMPVideoDecoder. r=kinetik a=lmandel - a7098648876a
Jean-Yves Avenard - Bug 1134387: Prevent crash when decoder couldn't be created. r=edwin a=lmandel - 1ef0bf557169
Chris Pearce - Bug 1136986 - Disable SharedDecoderManager for EME video. r=kentuckyfriedtakahe a=lmandel - 9745aeeb920c
Chris Pearce - Bug 1136986 - Fix unthreadsafe uses of GMPVideoHost in gmp-clearkey. r=edwin a=lmandel - 1fd982ec5296
Chris Pearce - Bug 1138240 - Fail faster if a CDM tries to resolve a resolved promise. r=edwin a=lmandel ba=lmandel - 8abdbdecd2d6
Ryan VanderMeulen - Bug 1120993 - Backout Bug 1125891 and Bug 1119941 to return to default settings for Flash protected mode and our internal sandbox. a=lmandel - 25f45020179b
David Major - Bug 1137609 - Test for the missing export because we can't trust the version. r=glandium, a=sledru - cdd3d780401e
Makoto Kato - Bug 1138070 - Don't use GetModuleHandleA on RtlImageNtHeader. r=dmajor, a=sledru - 84a2cfba8deb
Margaret Leibovic - Bug 1130834 - Explictly cancel ongoing download notifications instead of trying to update them to be non-ongoing. r=wesj, a=lmandel - 2f6284a0d529
Benoit Girard - Bug 1132468 - Reject invalid sizes. r=jrmuizel, a=lizzard - 1f4073c76b2b
Masatoshi Kimura - Bug 1137179 - Add wildcard support to the static fallback list. r=keeler, a=lsblakk - 70d3a14eab61
Bobby Holley - Bug 1137511 - Account for audio frames already pushed to audio hardware but not yet played when computing OutOfDecodedAudio. r=kinetik, a=lsblakk - 729cf69ef43f
Michael Comella - Bug 1132986 - Display a Gecko-themed dialog when sending tabs to device. r=liuche, a=lmandel - 003b419b893f
Ehsan Akhgari - Bug 1125956 - Hack around the broken assumptions of Thunderbird about the HTML copy encoder by disabling the plaintext encoding detection logic. r=roc, a=lizzard - 41929a7c55f5
Steve Fink - Bug 1137326 - Fix out of bounds error in JSiterateCompartments. r=terrence, a=abillings - 10eff960b898
Steve Fink - Bug 1137336 - Explicitly disallow WeakMapTracer.callback from GCing. r=terrence, a=lsblakk - ea414ee32231
Boris Zbarsky - Bug 1135764 - Make sure XSLT transform results have a document timeline so things like transitions will work. r=smaug, a=lmandel - 5f1674957fe4
Sotaro Ikeda - Bug 1137251 - Disable RemoveTextureFromCompositableTracker except gonk. r=nical, a=lizzard - 610aae9b5e36
Shu-yu Guo - Bug 1136397 - Ensure OSR frame scripts have debug instrumentation. r=jandem, a=lmandel - 666a1aafecfd
Shu-yu Guo - Bug 1136397 - Fix non-unified build bustage. (a=bustage) - 8be609272977
Dão Gottwald - Bug 1111146 - [EME] Implement master pref for playing DRM content, including pref. r=gijs,dao - 007cc5f2f96e
Gijs Kruitbosch - Bug 1127288 - add header for DRM checkbox, fix link alignment, r=dolske - 4d6e9e4e5e87
Gijs Kruitbosch - Bug 1111147 - update nsContextMenu for EME, r=florian - 38ce715f4de4
Gijs Kruitbosch - Bug 1111148 - show doorhanger for EME being played back, r=florian - 55823773c733
Gijs Kruitbosch - Bug 1111153 - show error notifications for broken EME content (includes fix for Bug 1139022), r=florian - 0631cc897937
Gijs Kruitbosch - Bug 1131758 - indicate 64-bit windows or OSX/Linux incompatibilities for Adobe's CDM, r=dolske - 529b83aa2c7b
Gijs Kruitbosch - Bug 1135237 - update message for EME notification, r+a=dolske - 0e44d113855f
Eugen Sawin - Bug 792992 - Refactor update service. r=snorp, a=lmandel - f9f0120c1adf
Eugen Sawin - Bug 792992 - Make update URL configurable via pref. r=snorp, a=lmandel - ae511f0dda0f
Eugen Sawin - Bug 792992 - Delay update service start. r=rnewman, a=lmandel - 7113cd46019c
Eugen Sawin - Bug 792992 - Switch URL usage to URI to prevent unnecessary network calls. r=rnewman, a=lmandel - bd0696c04755
Mark Hammond - Bug 1137459 - Avoid sensitive information in the FxA logs. r=ckarlof, a=lmandel - e969067d440d
Ehsan Akhgari - Bug 1125963 - Part 1: Fix serialization of the pre-wrap elements that Thunderbird relies on. r=bzbarsky, a=lmandel - 50aed8247f5c
Ehsan Akhgari - Bug 1125963 - Part 2: Rename mPreFormatted to mPreFormattedMail in order to clarify the meaning of this member. a=lmandel - 151a86ff6ae8
Jean-Yves Avenard - Bug 1138922 - Fix build bustage. r=mattwoodrow, a=lmandel - 73495389c7d6
Karl Tomlinson - Bug 1123492 - Update comment to describe the thread that runs AttemptSeek(). r=mattwoodrow, a=abillings - fd31f4d56ee2
Karl Tomlinson - Bug 1123492 - ResetDecode() on subreaders before Seek(). r=mattwoodrow, a=abillings - ad9c778e7bb8
Karl Tomlinson - Bug 1123492 - Remove ResetDecode() call from MediaSourceReader::AttemptSeek(). r=mattwoodrow, a=abillings - 00bad6e2ffbc

http://release.mozilla.org/statistics/37/2015/03/06/fx-37-b2-to-b3.html


Armen Zambrano: How to generate data potentially useful to a dynamically generated trychooser UI

Friday, March 6, 2015, 19:41
If you're interested in generating an up-to-date trychooser, I would love to hear from you.
adusca has helped me generate data similar to what a dynamic trychooser UI could use.
If you would like to help, please visit bug 983802 and let us know.

In order to generate the data all you have to do is:
git clone https://github.com/armenzg/mozilla_ci_tools.git
cd mozilla_ci_tools
python setup.py develop
python scripts/misc/write_tests_per_platform_graph.py

That's it! You will then have a graphs.json dictionary with some of the pieces needed. Once we have an idea of how to generate the UI and what we're missing, we can modify this script.

Here's some of the output:
{
    "android": [
        "cppunit",
        "crashtest",
        "crashtest-1",
        "crashtest-2",
        "jsreftest-1",
        "jsreftest-2",
...

Here are the remaining keys:
[u'android', u'android-api-11', u'android-api-9', u'android-armv6', u'android-x86', u'emulator', u'emulator-jb', u'emulator-kk', u'linux', u'linux-pgo', u'linux32_gecko', u'linux64', u'linux64-asan', u'linux64-cc', u'linux64-mulet', u'linux64-pgo', u'linux64_gecko', u'macosx64', u'win32', u'win32-pgo', u'win64', u'win64-pgo']


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/hhOMhztR9u0/how-to-generate-data-potentially-useful.html


Mozilla Reps Community: Reps Weekly Call – March 6th 2015

Friday, March 6, 2015, 16:00

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

  • Firefox Spring Campaign.
  • Mozilla Reps Council is on Mozilla’s leadership page now.
  • Rep of the Month.
  • Welcome George!
  • Community Education.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

https://blog.mozilla.org/mozillareps/2015/03/06/reps-weekly-call-march-6th-2015/


Air Mozilla: CI Retreat

Friday, March 6, 2015, 11:30

CI Retreat: The Communication and Information Sector (CI) of UNESCO is organizing a staff retreat on 6 March 2015, which will be graciously hosted by Mozilla France.

https://air.mozilla.org/ci-retreat/


Daniel Stenberg: TLS in HTTP/2

Friday, March 6, 2015, 10:46

I’ve written the http2 explained document and I’ve done several talks about HTTP/2. I’ve gotten a lot of questions about TLS in association with HTTP/2 due to this, and I want to address some of them here.

TLS is not mandatory

In the HTTP/2 specification that has been approved and that is about to become an official RFC any day now, there is no language that mandates the use of TLS for securing the protocol. On the contrary, the spec clearly explains how to use it both in clear text (over plain TCP) as well as over TLS. TLS is not mandatory for HTTP/2.

TLS mandatory in effect

While the spec doesn’t force anyone to implement HTTP/2 over TLS but allows you to do it over clear text TCP, representatives from both the Firefox and the Chrome development teams have expressed their intents to only implement HTTP/2 over TLS. This means HTTPS:// URLs are the only ones that will enable HTTP/2 for these browsers. Internet Explorer people have expressed that they intend to also support the new protocol without TLS, but when they shipped their first test version as part of the Windows 10 tech preview, that browser also only supported HTTP/2 over TLS. As of this writing, there has been no browser released to the public that speaks clear text HTTP/2. Most existing servers only speak HTTP/2 over TLS.

The difference between what the spec allows and what browsers will provide is the key here, and browsers and all other user-agents are all allowed and expected to each select their own chosen path forward.

If you’re implementing and deploying a server for HTTP/2, you pretty much have to do it for HTTPS to get users. And your clear text implementation will not be as tested…

A valid remark would be that browsers are not the only HTTP/2 user-agents and there are several such non-browser implementations that implement the non-TLS version of the protocol, but I still believe that the browsers’ impact on this will be notable.

Stricter TLS

When opting to speak HTTP/2 over TLS, the spec mandates stricter TLS requirements than what most clients ever have enforced for normal HTTP 1.1 over TLS.

It says TLS 1.2 or later is a MUST. It forbids compression and renegotiation. It specifies fairly detailed “worst acceptable” key sizes and cipher suites. HTTP/2 will simply use safer TLS.

Another detail here is that HTTP/2 over TLS requires the use of ALPN which is a relatively new TLS extension, RFC 7301, which helps us negotiate the new HTTP version without losing valuable time or network packet round-trips.

TLS-only encourages more HTTPS

Since browsers only speak HTTP/2 over TLS (so far at least), sites that want HTTP/2 enabled must do it over HTTPS to get users. It provides a gentle pressure on sites to offer proper HTTPS. It pushes more people over to end-to-end TLS encrypted connections.

This (more HTTPS) is generally considered a good thing by me and us who are concerned about users and users’ right to privacy and right to avoid mass surveillance.

Why not mandatory TLS?

The fact that it didn’t get into the spec as mandatory was quite simply because there was never a consensus that it was a good idea for the protocol. A large enough part of the working group’s participants spoke up against the notion of mandatory TLS for HTTP/2. TLS was not mandatory before, so the starting point was without mandatory TLS, and we didn’t manage to get to another standpoint.

When I mention this in discussions with people the immediate follow-up question is…

No really, why not mandatory TLS?

The motivations why anyone would be against TLS for HTTP/2 are plentiful. Let me address the ones I hear most commonly, in an order that I think shows the importance of the arguments from those who argued them.

1. A desire to inspect HTTP traffic

There is a claimed “need” to inspect or intercept HTTP traffic for various reasons. Prisons, schools, anti-virus, IPR-protection, local law requirements, whatever are mentioned. The absolute requirement to cache things in a proxy is also often bundled with this, saying that you can never build a decent network on an airplane or with a satellite link etc without caching that has to be done with intercepts.

Of course, MITMing proxies that terminate SSL traffic are not even rare these days and HTTP/2 can’t do much about limiting the use of such mechanisms.

2. Think of the little ones

“Small devices cannot handle the extra TLS burden”: either because of the extra CPU load that comes with TLS, or because of the cert management in a billion printers/fridges/routers etc. Certificates also expire regularly and need to be updated in the field.

Of course there will be a least acceptable system performance required to do TLS decently and there will always be systems that fall below that threshold.

3. Certificates are too expensive

The price of server certificates is historically often brought up as an argument against TLS, even though it isn’t really HTTP/2 related, and I don’t think it was ever a particularly strong argument against TLS within HTTP/2. Several CAs now offer zero-cost or very close to zero-cost certificates, and with upcoming efforts like letsencrypt.com, chances are it’ll become even better in the not-so-distant future.

Recently someone even claimed that HTTPS limits the freedom of users since you need to give personal information away (he said) in order to get a certificate for your server. This was not a price he was willing to pay apparently. This is however simply not true for the simplest kinds of certificates. For Domain Validated (DV) certificates you usually only have to prove that you “control” the domain in question in some way. Usually by being able to receive email to a specific receiver within the domain.

4. The CA system is broken

TLS of today requires a PKI system where there are trusted certificate authorities that sign certificates and this leads to a situation where all modern browsers trust several hundred CAs to do this right. I don’t think a lot of people are happy with this and believe this is the ultimate security solution. There’s a portion of the Internet that advocates for DANE (DNSSEC) to address parts of the problem, while others work on gradual band-aids like Certificate Transparency and OCSP stapling to make it suck less.


My personal belief is that rejecting TLS on the grounds that it isn’t good enough or not perfect is a weak argument. TLS and HTTPS are the best way we currently have to secure web sites. I wouldn’t mind seeing it improved in all sorts of ways but I don’t believe running protocols clear text until we have designed and deployed the next generation secure protocol is a good idea – and I think it will take a long time (if ever) until we see a TLS replacement.

Who were against mandatory TLS?

Yeah, lots of people ask me this, but I will refrain from naming specific people or companies here since I have no plans on getting into debates with them about details and subtleties in the way I portray their arguments. You can find them yourself if you just want to, and you can most certainly make educated guesses without even doing so.

What about opportunistic security?

A text about TLS in HTTP/2 can’t be complete without mentioning this part. A lot of work in the IETF these days is going on around introducing and making sure opportunistic security is used for protocols. It was also included in the HTTP/2 draft for a while but was moved out from the core spec in the name of simplification and because it could be done anyway without being part of the spec. Also, far from everyone believes opportunistic security is a good idea. The opponents tend to say that it will hinder the adoption of “real” HTTPS for sites. I don’t believe that, but I respect the opinion, because it is just as much a guess about how users will act as my guess that they won’t act like that!

Opportunistic security for HTTP is now being pursued outside of the HTTP/2 spec and allows clients to upgrade plain TCP connections to instead do “unauthenticated TLS” connections. And yes, it should always be emphasized: with opportunistic security, there should never be a “padlock” symbol or anything that would suggest that the connection is “secure”.

Firefox supports opportunistic security for HTTP and it will be enabled by default from Firefox 37.

http://daniel.haxx.se/blog/2015/03/06/tls-in-http2/


Alex Vincent: I bought a condominium!

Friday, March 6, 2015, 09:10

It seems customary on Planet Mozilla to announce major positive events in life.  Well, I’ve just had one.  Not quite as major as “I’m a new dad”, but it’s up there.  With the help of my former employers who paid my salaries (and one successful startup, Skyfire), I closed a deal on a condominium on March 5, in Hayward, California, U.S.A.

There will be a housewarming party.  Current and former Mozillians are certainly welcome to drop by.  The date is TBA, and parking will be extremely limited, so an RSVP will be required.

I’ll write a new post with the details when I have them.

https://alexvincent.us/blog/?p=867


Adam Lofting: A ‘free’ online learning experience

Friday, March 6, 2015, 01:56

I’ve blogged about various experiences of online learning I’ve taken part in over the years and wanted to reflect on the most recent one: Coursera’s three-week Introduction to Ableton Live.

Learning more about learning is one of my personal goals this year. And I find writing out loud to be a useful tool in thinking. So that’s mostly the point of this.

I take these courses mostly because I like learning new things, but also because I’m interested in online learning more generally. How do you most effectively transfer knowledge, skills and motivation via the web, and/or about the web? That question is often on my mind.

Almost all of the projects I work on at Mozilla are somewhere in the education space; directly with Webmaker or Mozilla Learning Networks and tangentially in the topic of volunteer contribution. Contributing to an open source project as complex and distributed as Mozilla is a learning experience in itself, and sometimes requires specific training to even make it possible.

To further frame this particular brain dump: I’m also interested generally in the economics of the web and how this shapes user experiences, and I have strong feelings about the impact of advertising’s underlying messaging and what this does over time when it dominates a person’s daily content intake. I’m generally wary of the word “free”. This all gets complex when you work on the web, and even directly on advertising at times. Most of my paycheques have had some pretty direct link to the advertising world, except maybe when I was serving school dinners to very rich children – but that wasn’t my favourite job, despite its lack of direct societal quandaries.

Now, to the content…

If you’re like me, you will tend to read notes about a topic like ‘commerce in education’ and react negatively to some of these observations because there are many cases where those two things should be kept as far apart as possible. But I’m actually not trying to say anything negative here. These are just observations.

Observations

All roads lead to… $

$ Coursera

My online experience within the Coursera site was regularly interrupted with a modal (think popup) screen asking if I wanted to pay to enrol in the ‘Signature Track’, and get a more official certification. This is Coursera’s business model and understandably their interest. It wasn’t at all relevant to me in my life situation, as I was taking a course about how to play with fun music software in my free time. I don’t often check my own qualifications before I let myself hobby. Not that anyone checked my qualifications before they let me work either, but I digress. Coursera’s tagline says ‘free’, but they want you to pay.

$ Blend.io

All assignments for the course had to be published to Blend for peer-evaluation. Blend is like GitHub, but for raw audio production tracks rather than source code. I didn’t know about Blend before the course, and I really like it as a concept, how it’s executed, and what it could do for collaborative music making. But I note, it is a business. This course funnels tens of thousands of new users into that business over the course of a few days. There might not be any direct financial trade here (between companies for example), but users are capital in start-up land. And I now receive emails from Blend with advertisements for commercial audio production tools. My eyeballs, like yours, have a value.

$ Berklee College of Music

While hosted on Coursera, the content of this course is by Berklee College of Music. The content they ‘give away’ would traditionally only have been available to paying students. Berklee’s business is selling seats in classes. This course isn’t given away as an act of kindness, it’s marketing. Three weeks is short and therefore the content is ‘light’. Lighter than I was expecting (not that I’m entitled). But halfway through, we received a promotional email about Berklee’s own online education platform where you could create an account to get access to further ‘free’ videos to supplement the Coursera materials. I found these supplementary videos more useful, and they led to offers to sign up for extended paid courses with Berklee Online. For Berklee, this whole exercise is a marketing funnel. Quite possibly it’s the most fun and least offensive marketing funnel you can be dropped into, but it exists to do that job.

$ Erin Barra – Course professor and artist

Now, I write this with genuine sympathy, as I’ve walked the floor at countless venues trying to sell enough music and merch to cover the petrol costs of playing a gig. But this is a commercial element of this learning experience, so I will note it. At many points throughout the three weeks, we had opportunities to buy Erin’s music, t-shirts, and audio production stems (these are like a layer file of an original recording) for consumption and/or remixing. I know you have to hustle if you’re making music for a living, but the observation here is that the students of this course are also a marketable audience. Perhaps only because they arrive en masse and end up slightly faceless. I’m sure it would be weird for most teachers to sell t-shirts in a classroom. It wasn’t particularly weird online, where we’re desensitised to being constantly sold things. And I may have only noticed this because I’m interested in how all these things fit together.

$ Ableton

The course was about learning Ableton Live, a commercial audio production tool. So at some point, the cost of Ableton had to be considered. Ableton offers a free 30-day trial, which works for this course, and they kindly (or sensibly) agreed to let people taking the course start a new trial even if they’d used their 30 days already. Good manners like those are good for business. Anyway, I already owned Live 9 Intro (aka the cheap version), and for a three-week intro course it does more than enough to learn the basics (I guess that’s why it’s called Intro?). But the course taught and encouraged the use of Live 9 Suite (the EUR599 rather than the EUR79 version). Until some people complained, the use of features in Suite was required to complete the final assignment. Reading between the lines, I doubt there was any deliberate commercial discussion around this planning, but the planning definitely didn’t stem from the question: ‘how can we keep the cost down for these beginners?’. At the end of the course there were discount codes to get 15% off purchasing anything from Ableton. I didn’t use Suite during the course, but I’m playing with it now on my own time and terms, and may end up spending money on it soon.

Reflections

It’s wonderful, but it’s not Wikipedia. The course opened a lot of doors, but mostly into places where I could spend money, which I am cautious about as a model for learning. It was valuable to me and prompted me to learn more about Ableton Live than I would have done in those three weeks without it. So I’m grateful for it. But I can’t in my heart think of this as a ‘shared public resource’.

For my own learning, I like deadlines. Preferably arbitrary ones. The fact that these Coursera courses are only available at certain times during the year really works for me. But I struggle with the logic of this when I think about how best to provide learning material online to as many people as possible. The only MOOC-style courses I have finished have been time-bound. I don’t know how many people this is true for though.

People will learn X to earn Y. For me this course was a form of hobby or entertainment, but much learning has a direct commercial interest for students as well as educators. Whether it’s for professional skills development, or building some perceived CV value.

There is no ‘free’ education, even if it says “free” on the homepage. There is always a cost, financial or otherwise. Sometimes the cost is borne by the educator, and sometimes the student. Both models have a place, but I get uncomfortable when one tries to look like the other. And if the world could only have one of these models for all of education I know which one I’d choose. Marketing fills enough of our daily content and claims enough brainprint as it is.

Conclusion

I thought I might find some conclusions in writing this, but that doesn’t always happen. There are a lot of interesting threads here.

So instead of a conclusion, you can have the song I submitted for my course assignment. It was fun to make. And I have this free-but-not-free course to thank for getting it done.

http://feedproxy.google.com/~r/adamlofting/blog/~3/K_z9nsK8MaY/


Mozilla Community Ops Team: Welcome to the Mozilla Community Ops blog!

Friday, 6 March 2015, 00:11


A Re-introduction

We began as a group in late 2011 under the moniker “Mozilla Community IT” to help provide resources to our contributor community websites. We sought to answer this key question:

How can we use technology to empower the Community to promote the Mozilla Mission?

As the IT/Operations world has evolved, so have we. In the past year we’ve worked on supporting resources and projects that are more Ops-focused than traditionally IT-focused. So, to better reflect what we do, we have re-branded ourselves as Mozilla Community Ops.

Our Mission

We are a global team of sysadmins and ops engineers supporting the Mozilla community and, importantly, mentoring and teaching others practical technical skills: hosting and running production websites and services, configuration management (Puppet/Chef), monitoring/alerting, and on-call rotations.

Projects

Like most modern ops teams, we aim to provide services that others can leverage while lowering the barrier to participation. It shouldn’t require special skills, for instance, to maintain a blog, or to launch and run a web application.

Here are a couple of our current big projects:

Discourse

Our flagship project is the Mozilla Community Discourse instance.

Discourse began as an experiment in finding communication tools better suited to Mozilla’s growing community. It has many features in common with mailing lists – and in fact can be used exclusively through email, just like a mailing list – as well as with more social platforms like Google Groups or Yammer. If you aren’t familiar with Discourse, please check it out.

A couple of the Discourse sites we are hosting:

  • https://discourse.mozilla-community.org
  • https://guides.mozilla-community.org/

We’ll also be posting about Discourse in more detail in future blog posts, including how we are building and maintaining Discourse instances in Amazon AWS.

Multi-Tenant WordPress

Building and maintaining a website or blog is one of the most challenging and time-consuming tasks a local community can undertake. It also often has tangible costs. Web design, WordPress theme design, site deployment, and basic server administration and operations are all common barriers to entry.

Mozilla Community Ops is currently working on building out a scalable multi-tenant WordPress platform to manage multiple blogs, as well as to make them more secure and more highly available.

We’re also working on some less flashy projects, like monitoring and system health. Keep an eye out for future blog posts on these projects, or ask us about it in our Discourse category. Our Discourse category is also the right place to go if you have any other questions about what we’re up to, or want to join in on the fun. If you want to help, just start a new topic introducing yourself and we’ll help point you in the right direction.

http://ops.mozilla-community.org/welcome-to-the-mozilla-community-ops-blog/


Armen Zambrano: mozci 0.3.0 - Support for backfilling jobs on treeherder added

Thursday, 5 March 2015, 19:19
Sometimes on treeherder, jobs get coalesced (i.e. we only run the tests on the most recent revision) in order to handle load. This is good: it lets us catch up when many pushes are committed on a tree.

However, when a job run on the most recent code comes back failing, we need to find out which revision introduced the regression. This is when we need to backfill up to the last good run.

In this release of mozci we have added the ability to --backfill:
python scripts/trigger_range.py --buildername "b2g_ubuntu64_vm cedar debug test gaia-js-integration-5" --dry-run --revision 2dea8b3c6c91 --backfill
This should be especially useful for sheriffs.
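The backfill idea amounts to walking back from the failing job to the last known good run and triggering jobs on the coalesced pushes in between. A minimal sketch of that logic (the function name and data shapes here are illustrative, not mozci's actual API):

```python
def find_backfill_range(revisions, results):
    """Given revisions in push order (oldest first) and a dict of known
    job results ('good'/'bad'), return the coalesced revisions that still
    need jobs triggered: everything after the last known good run, up to
    but not including the failing revision we already have a result for."""
    last_good = -1
    first_bad = len(revisions)
    for i, rev in enumerate(revisions):
        if results.get(rev) == "good":
            last_good = i
        elif results.get(rev) == "bad":
            first_bad = i
            break
    # The revisions in between were coalesced: no job ran on them yet.
    return revisions[last_good + 1:first_bad]

# Example: jobs ran on push "a" (good) and push "e" (bad); pushes b, c, d
# were coalesced, so they are the ones to backfill to locate the regression.
revs = ["a", "b", "c", "d", "e"]
print(find_backfill_range(revs, {"a": "good", "e": "bad"}))  # ['b', 'c', 'd']
```

In the real tool, each returned revision would get the requested buildername scheduled against it.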

You can start using mozci as long as you have LDAP credentials. Follow these steps to get started:
git clone https://github.com/armenzg/mozilla_ci_tools.git
cd mozilla_ci_tools
python setup.py develop (or python setup.py install)


Release notes

Thanks again to vaibhav1994 and adusca for their many contributions in this release.

Major changes:
  • Issue #75 - Added the ability to backfill changes until last good is found
  • No need to use --repo-name anymore
  • Issue #83 - Look for request_ids from a better place
  • Add interface to get status information instead of scheduling info

Minor fixes:
  • Fixes to make livehtml documentation
  • Make determine_upstream_builder() case insensitive

Release notes: https://github.com/armenzg/mozilla_ci_tools/releases/tag/0.3.0
PyPi package: https://pypi.python.org/pypi/mozci/0.3.0
Changes: https://github.com/armenzg/mozilla_ci_tools/compare/0.2.5...0.3.0




Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/wsQHb816Sis/mozci-030-support-for-backfilling-jobs.html


Mike Conley: The Joy of Coding (Episode 4)

Thursday, 5 March 2015, 18:15

The fourth episode is up! Richard Milewski and I found the right settings to get OBS working properly on my machine, so this week’s episode is super-readable! If you’ve been annoyed with the poor resolution for past episodes, rejoice!

In this fourth episode, I solve a few things – I clean up a busted rebase, I figure out how I’d accidentally broken Linux printing, I think through a patch to make sure it does what I need it to do, and I review some code!

Episode Agenda

References:

Bug 1136855 – Print settings are not saved from print job to print job
  • Notes

Bug 1088070 – Instantiate print settings from the content process instead of the parent
  • Notes

Bug 1090448 – Make e10s printing work on Linux
  • Notes

Bug 1133577 – [e10s] “Open Link in New Tab” in remote browser causes unsafe CPOW usage warning
  • Notes

Bug 1133981 – [e10s] Stop sending unsafe CPOWs after the findbar has been closed in a remote browser
  • Notes

http://mikeconley.ca/blog/2015/03/05/the-joy-of-coding-episode-4/


