Planet Mozilla - https://planet.mozilla.org/



Source: http://planet.mozilla.org/.
This feed is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.

Christian Heilmann: Mind the gap – State of the Browser 5 was a blast

Sunday, September 13, 2015, 15:15

Yesterday was the fifth edition of State of the Browser in London, England. SOTB was always a special kind of event: representatives of all the browsers (and confusion about what Apple might be up to) came, gave short talks about hot new technical topics and then formed a panel to answer people’s questions.

This year, the format changed and instead there were a lot of 25 minute talks by interesting people, regardless of who they work for. There was also no panel.

SOTB still is a lovely, friendly and very affordable event bang in the centre of London and thus very accessible. The organisers also do a monthly meetup and are just good eggs all around.

As there is no space in my tiny new flat, I donated all the swag, books, stickers, bags and shirts I collected over the last 10 years to the event, and it made an impressive stash. People took the lot (except for a German CSS book).


All talks are recorded and streamed live and the videos are trickling in on the organiser’s Vimeo feed.

The line-up was great and full of new faces on the speaking circuit, some of whom gave their first talks. Here are my quick notes (talks in chronological order):

  • Seb Lee-Delisle’s “Being grown-up doesn’t have to be boring” was supposed to be a round-up of all his creative projects using laser projection but ended up being situational comedy as none of his videos worked. Yet he explained and entertained and made the audience clap and wave to control sounds and flappy bird animations using Kinect, so everybody was happy. Make sure to spot Bruce Lawson and me out-dancing everybody else in the first row in the video.
  • Edd Sowden’s “Accessibility and how to get the most from your screenreader” pivoted into “What U doing with data tables, browsers” and was insightful, very well researched and above all – funny to watch. Edd has a wonderful dead-pan delivery of weird browser workings. Well worth your time to see this video and check his slides.
  • Melinda Seckington’s “Learn. Reflect. Repeat – How We Run Internal Hackdays and Other Events” was about overcoming her impostor syndrome and how to teach people and learn from one another. You can see her in the video here.
  • Martin Jakl’s debut talk “Another billion browsers and Internet of Things” kind of went past me as I was preparing for my own talk following his. He looked back at the problems of WAP and early mobile tech and showed how IoT will mean caring much more about low-memory devices again.
  • My own talk is posted further on here. So far, here are the slides and a screencast.
  • Bruce Lawson’s “Ensuring a Performant Web for the Next Billion People” was a great round-up of opportunities in emerging markets and a reminder what this means to our products. Bruce’s slides are here – some great re-usable information in there.
  • Laura Elizabeth’s “From Pages to Patterns: How Pattern Libraries are Changing the Face of the Web” was my big surprise of the day. A first-time speaker and pretty nervous, she delivered an absolutely delightful talk about using pattern libraries and making them work for your clients. Well organised, well researched and delivered with a lot of confidence. If you’re looking for a design-oriented presenter with lots of understanding for development needs, Laura is someone fresh and new to consider.
  • Adam Onishi’s “Best viewed with…” was a trip down memory lane about how we did things wrong with browser support and progressive enhancement, and how we’re repeating these mistakes. A very well argued presentation with a confident and interesting delivery.
  • Ada Rose Edwards’ “Animation Performance on the Web” was a whirl-wind tour of making animation perform including pretty far out ideas like using shaders to plough through a lot of data without slowing down the main thread. Her slides are here and they link to all the demos she showed. Ada is right now my go-to JavaScript presenter to tell conferences about, so expect more epicness to come from her.
  • Phil Nash’s “The web is getting pushy” once again proved his utter disregard for browsers not doing the things he wants and for the horrors of live coding. A clever, well-paced talk about live updates and notifications on the web.

The Twitter coverage of the event is extensive and still ongoing so be sure to check out the #sotb5 hashtag for more stuff trickling in.

My talk was a quick preview for a longer one I am working on bemoaning and explaining the gap I see between what we advocate as “common knowledge” at events like these and what I see people building on the web. We are a bubble inside a bubble and it is time to burst out and bring the great information we are already getting bored of to those who mess with the web. The slides are here and I recorded a screencast if you want to keep the context. I’m looking forward to the video.

All in all SOTB is well worth your time and money. If you also live by the river, make sure to attend the London Webstandards meetups.

It is a bit of a shame that the format changed; I kind of miss the focus on browsers and wish someone else would take that on, or we’ll organise ourselves into monthly hangouts. I’m working on some ideas around this.

http://christianheilmann.com/2015/09/13/mind-the-gap-state-of-the-browser-5-was-a-blast/


Julien Pagès: mozregression updates!

Saturday, September 12, 2015, 13:23

A lot of new things have been added in mozregression recently, and I think this deserves a blog post.

  • I released mozregression 1.0.0! Plenty of new cool stuff in there: the ability to launch a single build, to choose the build to test after a skipped build (allowing you to narrow the good/bad range faster) and other goodies. Well, just try it!
  • A new release for the GUI interface, 0.4.0! So here again, new cool features and a lot of bug fixes. For example, new releases are automatically checked so it will be easy to know when updates are available.
  • The mozregression command line is now integrated as a mach command for Mozilla developers! You can try “./mach mozregression -h” (see the usage sketch below). I will probably send a mail on dev-platform about that.
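If you haven’t used mozregression before, a typical session looks roughly like this (a minimal sketch; the option names are recalled from the 1.0 command line and may differ slightly, so check mozregression --help for the authoritative list):

$ mozregression --good 2015-08-01 --bad 2015-09-01   # bisect a regression range by date; you mark each downloaded build good or bad
$ mozregression --launch 2015-09-01                  # launch a single build without bisecting
$ ./mach mozregression -h                            # the same tool as a mach command, from a mozilla-central checkout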

Well, big thanks to MikeLing and Jonathan Pigree for their great work on those tools! They are an important part of the team, helping with discussions and patches. Oh, and also a big thanks to the users who report bugs and make great proposals: Elbart, Ray Satiro, arni2033, Jukka Jylänki, and all the others!


https://parkouss.wordpress.com/2015/09/12/mozregression-updates/


Jeff Walden: Quote of the day

Saturday, September 12, 2015, 09:11

During at least five of the passengers’ phone calls, information was shared about the attacks that had occurred earlier that morning at the World Trade Center. Five calls described the intent of passengers and surviving crew members to revolt against the hijackers. According to one call, they voted on whether to rush the terrorists in an attempt to retake the plane. They decided, and acted.

At 9:57, the passenger assault began. Several passengers had terminated phone calls with loved ones in order to join the revolt. One of the callers ended her message as follows: “Everyone’s running up to first class. I’ve got to go. Bye.”

The cockpit voice recorder captured the sounds of the passenger assault muffled by the intervening cockpit door. Some family members who listened to the recording report that they can hear the voice of a loved one among the din. We cannot identify whose voices can be heard. But the assault was sustained.

In response, Jarrah immediately began to roll the airplane to the left and right, attempting to knock the passengers off balance. At 9:58:57, Jarrah told another hijacker in the cockpit to block the door. Jarrah continued to roll the airplane sharply left and right, but the assault continued. At 9:59:52, Jarrah changed tactics and pitched the nose of the airplane up and down to disrupt the assault. The recorder captured the sounds of loud thumps, crashes, shouts, and breaking glasses and plates. At 10:00:03, Jarrah stabilized the airplane.

Five seconds later, Jarrah asked, “Is that it? Shall we finish it off?” A hijacker responded, “No. Not yet. When they all come, we finish it off.” The sounds of fighting continued outside the cockpit. Again, Jarrah pitched the nose of the aircraft up and down. At 10:00:26, a passenger in the background said, “In the cockpit. If we don’t we’ll die!” Sixteen seconds later, a passenger yelled, “Roll it!” Jarrah stopped the violent maneuvers at about 10:01:00 and said, “Allah is the greatest! Allah is the greatest!” He then asked another hijacker in the cockpit, “Is that it? I mean, shall we put it down?” to which the other replied, “Yes, put it in it, and pull it down.”

The passengers continued their assault and at 10:02:23, a hijacker said, “Pull it down! Pull it down!” The hijackers remained at the controls but must have judged that the passengers were only seconds from overcoming them. The airplane headed down; the control wheel was turned hard to the right. The airplane rolled onto its back, and one of the hijackers began shouting “Allah is the greatest. Allah is the greatest.” With the sounds of the passenger counterattack continuing, the aircraft plowed into an empty field in Shanksville, Pennsylvania, at 580 miles per hour, about 20 minutes’ flying time from Washington, D.C.

Jarrah’s objective was to crash his airliner into symbols of the American Republic, the Capitol or the White House. He was defeated by the alerted, unarmed passengers of United 93.

http://whereswalden.com/2015/09/11/quote-of-the-day-3/


Hannah Kane: Our team is growing! + News and Updates

Saturday, September 12, 2015, 02:43

It’s been a really exciting couple of weeks!

First of all, we welcomed two new members to the MLN Product team. Long-time MoFo Pomax has taken the lead on transitioning X-Ray Goggles to a new home, and Kristina, our newest designer, has had a productive first couple of weeks, focusing on creating wireframes for our badging platform. Welcome to the MLN product team!

Vidyo weirdness with the new team! Why is Kristina covered by Sabrina?

Second, we released the new Thimble! It’s been really well-received (my favorite example is this Polish article – I ran it through a translator), and feels like a major step up from the last version. Be sure to watch Humph’s demo from last Friday. This release represents months of work. Read about it here.

“New Thimble is Fantastic” Poland loves Thimble!

We also added the care and feeding of the MozFest site and schedule app to our list of projects. Mavis wrote a script to export the 400+ session proposals into a github repo, where event staff and Space Wranglers are now working their magic to create an amazing program. Also in MozFestLand, Ryan Pitts spun up a version of the SRCCON schedule app for MozFest, and it works beautifully. We have some UI and UX changes to make (to make it more MozFest-y), but this will be a great resource for attendees.

Sneak peek of the MozFest schedule app

On the teach.mozilla.org front, Sabrina has made major headway on the design for the MLN Directory. We’ve now got streamlined, mobile wireframes for the member profile and the Clubs page.

For editing your Club page on the go!

Mavis installed Optimizely, which allows us to run A/B tests on teach.mozilla.org. Our first test compared the control homepage, which has three CTAs, to a variation with a single CTA (Pledge to Teach the Web). So far, the single CTA does not seem to create a dramatic difference in the number of people taking the pledge. We’re plotting two more variations for the next heartbeat, to see if we can increase our conversion rate.

Here’s the control version with three CTAs

And here’s the single CTA variation, which so far, has not increased conversions.

Finally, I just wanted to provide an updated list of documentation and links to things we’re working on:


http://hannahgrams.com/2015/09/11/our-team-is-growing-news-and-updates/


Chris Cooper: RelEng & RelOps Weekly highlights - September 11, 2015

Saturday, September 12, 2015, 02:21

Not sure how it was for you, but that was a deceptively busy week.

Modernize infrastructure: Amy created a new OS X 10.10.5 deployment procedure and installed the first 64 of our 200 new mac minis (for Firefox/Thunderbird testing). Further work needs to be done to validate the move to new hardware and upgrade to 10.10.5 and to rebase the timing tests.

Jonas rolled out support for remote signature validation and auth.taskcluster.net.

Jordan is working on adding some Android variant builds to TaskCluster (TC). As part of that process, he’s also documenting his efforts to create a HOWTO for devs so they can self-serve in TC in the future.

Ted hooked up cross-compiled Mac builds running in TC to try. This is the first step to moving Mac build load off of physical hardware. This is huge. (https://bugzil.la/1197154)

Improve CI pipeline: Our intern, Anthony, gave his end-of-internship presentation on Thursday, with details about the various improvements he made to TC over the summer. If you missed it, you can watch it over on air.mozilla.org: https://air.mozilla.org/anthony_miyaguchi/

Release: Firefox 41.0 beta 9 is in the pipe this week, along with Thunderbird 41.0 beta 1 (build #2).

Operational: Amy tracked down a bunch of configuration warnings on our puppet servers, filed bugs to get them fixed, and set up some notifications from our log hosts so that we learn about such known problems within 10 minutes.

Greg is rolling out a change to taskcluster-vcs to reduce parallelization for “repo”, and hopefully improve TaskCluster’s behavior relative to git.mo when 500s are thrown. So far, performance changes appear to be a wash, with some jobs taking slightly longer and others slightly less.

Some faulty puppet changes this week caused tree closures on two separate days: the initial landing caused all POSIX systems to loop indefinitely in runner, and then that same change propagated into the new AMIs for spot instances the next day. Morgan has been working on a way to do tiered roll-outs of new AMIs using “canary” instances to avoid this kind of cascading puppet failure in the future: https://bugzil.la/1146341

See you next week!

http://coopcoopbware.tumblr.com/post/128877032870


Morgan Phillips: TaskCluster GitHub Has Landed

Saturday, September 12, 2015, 00:01
TaskCluster-based CI has landed for Mozilla developers. You can begin using the system today by simply dropping a .taskcluster.yml file into the base of your repository. For an example configuration file and other documentation, please see: http://docs.taskcluster.net/services/taskcluster-github/

To get started ASAP, steal this config file and replace the npm install . && npm test section with whatever commands will run your project's test suite. :)
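As a rough sketch of that workflow: save the example config linked above as .taskcluster.yml at the root of your repository, edit its task command (replacing npm install . && npm test with whatever runs your tests), and then commit and push it. The repository name and commit message below are purely illustrative:

$ cd my-project
$ git add .taskcluster.yml
$ git commit -m "Enable TaskCluster CI via taskcluster-github"
$ git push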

TC-GH ci is live: now any Mozillian can hook into TaskCluster by placing a single config file in their repo #DocsSoon pic.twitter.com/nFWLGF3Vzb

— Morgan Phillips (@mrrrgn) August 26, 2015

http://linux-poetry.com/blog/section/mozilla/24/


Air Mozilla: Webmaker Demos Sept 11 2015

Friday, September 11, 2015, 21:19

Webmaker Demos, Friday, Sept 11, 2015: https://webmaker.etherpad.mozilla.org/demos

https://air.mozilla.org/webmaker-demos-sept-11-2015/


Mozilla Security Blog: Deprecating the RC4 Cipher

Friday, September 11, 2015, 20:08

As part of our commitment to protect the privacy of our users, Mozilla will disable the insecure RC4 cipher in Firefox in late January 2016, beginning with Firefox 44. Mozilla will be taking this action in coordination with the Chrome and IE/Edge teams. If you’re a web site operator and still rely on RC4, you need to enable some other ciphers, or Firefox users will be unable to reach you.  Very few servers rely exclusively on RC4, so most users should experience minimal disruption.

The Rise and Gradual Fall of RC4

Developed in 1987 by Ron Rivest, the RC4 cipher has been a staple of cryptography for almost 30 years.  For many years, RC4 was widely used by HTTPS servers: first because it was faster than contemporary alternatives, and later because it was immune to attacks that other ciphers were vulnerable to, such as BEAST.

Over the years, however, cryptanalysis of RC4 has resulted in better and better attacks against it.  It has been known since 1995 that RC4 has certain biases that make it easier to attack.  Recently, several practical attacks against RC4-protected HTTPS sessions have been demonstrated.  This led the IETF to publish RFC 7465, which forbids the use of RC4 in TLS.

At the same time, newer ciphers such as AES-GCM have been created, which are as fast as RC4 on modern hardware, and are also immune to attacks such as BEAST.  Most web servers support these newer ciphers, and the majority of Firefox TLS transactions already use them.

Deprecation of RC4 in Firefox

Until recently, RC4 was fully supported by Firefox to maintain compatibility with older servers, but over the past year, we’ve been gradually removing support.

In Firefox 36 (released in February 2015), we took the first step by making RC4 a “fallback-only” cipher.  With that change, Firefox would first try to communicate with the server using secure ciphers, before “falling back” to RC4.  As a result, Firefox would only use RC4 if the server didn’t support anything better.  That was a major step; over the course of the following weeks, RC4 usage by Firefox dropped from around 27% of TLS transactions to less than 0.5%.

In Firefox 38 (released in May 2015), we took a further step by disabling RC4 almost entirely in our pre-release Nightly and Developer Edition products, leaving it enabled only for a small whitelist of sites.  Web developers using those products to test their sites will have already seen breakage if their site requires RC4.  Perhaps as a result of this, RC4 usage by Firefox has continued to gradually decline, to the point where it’s currently used in only 0.08% of TLS transactions.

Disabling RC4 by Default

RC4 will no longer be offered by default in TLS fallback beginning with Firefox 44, set to be released on January 26, 2016. As a result, Firefox will refuse to negotiate RC4 with web servers. We are announcing this change now in order to provide website operators with time to update their websites.

As noted above, the share of Firefox TLS communications using RC4 has fallen from approximately 27% at the end of 2014 to only 0.08% at present.  As such, Mozilla expects the impact from this change to be minimal and localized to a small number of websites that currently only offer RC4 and are unable to upgrade prior to January.

Mozilla maintains a set of guidelines on TLS configurations and a TLS configuration generator to assist website operators in selecting a secure configuration for their websites. Although it is recommended that website operators remove the availability of RC4 entirely, those that require compatibility with older clients such as Internet Explorer 6 may want to continue to offer RC4.  As long as more modern cipher suites containing AES are also available, Firefox will use those more secure options instead of RC4.

Users that would like to disable RC4 fallback prior to the January release may set the security.tls.unrestricted_rc4_fallback setting inside of about:config to false.  After that preference is set to false by default in Firefox 44, users that still require RC4 may re-enable it by setting it back to true.

https://blog.mozilla.org/security/2015/09/11/deprecating-the-rc4-cipher/


Daniel Glazman: Death of XUL-based add-ons to Firefox

Friday, September 11, 2015, 17:03

I will not discuss here right now the big picture of the message sent a few weeks ago (although my company is deeply, very deeply worried about it), but I would like instead to dive into a major technical detail that seems to me left blatantly unresolved: the XUL tree element was introduced to handle very long lists of items with good performance. Add-ons touching bookmarks, lists of URLs, lists of email addresses, contacts, anti-spam lists, advertisement blocking lists and more all need a performant tree solution.

As far as I am concerned, there is nothing on the html side approaching the performance of the XUL tree for very long lists. Having no replacement for it before the removal of XUL-based add-ons is only a death signal sent to all the add-ons needing the XUL tree, and almost a "we don't care about you" signal sent to the authors of these add-ons. From an ecosystem's point of view, I find it purely suicidal.

So I have a question: what's the plan for the limited number of XUL elements that have no current replacement in html for deep technical reasons?

Update: yes, there are some OSS packages to deal with trees in html, but until the XUL tree is removed from Firefox's UI, how do we deal with it if the XUL element is not reachable from add-ons?

http://www.glazman.org/weblog/dotclear/index.php?post/2015/09/11/Death-of-XUL-based-add-ons-to-Firefox


Daniel Stenberg: Unnecessary use of curl -X

Friday, September 11, 2015, 15:54

I’ve grown a bit tired of the web filling up with curl command line examples showing use of superfluous -X’s. I’m putting code where my mouth is.

Starting with curl 7.45.0 (due to ship October 7th 2015), the tool will help users to understand that their use of the -X (or --request) option is very often unnecessary or even downright wrong. If you specify the same method with -X that will be used anyway, and you have verbose mode enabled, curl will inform you about it and gently push you to stop doing it.

Example:

$ curl -I -XHEAD http://example.com --verbose

The option dash capital i means asking curl to issue a HEAD request. Adding -X HEAD to that command line asks for it again. This option sequence will now make curl say:

Note: Unnecessary use of -X or --request, HEAD is already inferred.

It’ll also inform the user similarly if you do -XGET on a normal fetch or -XPOST when using one of the -d options. Like this:

$ curl -v -d hello -XPOST http://example.com
Note: Unnecessary use of -X or --request, POST is already inferred.

curl will still continue to work exactly like before though; these are only informational texts that won’t alter any behaviors. Again, it only says this if verbose mode is enabled.

What -X does

When doing HTTP with curl, the -X option changes the actual method string in the HTTP request. That’s all it does. It does not change behavior accordingly. It’s the perfect option when you want to send a DELETE method or TRACE or similar that curl has no native support for and you want to send easily. You can use it to make curl send a GET with a request-body or you can use it to have the -d option work even when you want to send a PUT. All good uses.

Why superfluous -X usage is bad

I know several users out there will disagree with this. That’s also why this is only shown in verbose mode and it only says “Note:” about it. For now.

There are a few problems with the superfluous uses of -X in curl:

One of the most obvious problems is that if you also tell curl to follow HTTP redirects (using -L or --location), the -X option will also be used on the redirected-to requests, which may not at all be what the server asks for and the user expected. Dropping the -X will make curl adhere to what the server asks for. And if you want to alter which method to use in a redirect, curl already has dedicated options for that, named --post301, --post302 and --post303!
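For example (a minimal illustration with a made-up URL; --post301 is one of the dedicated options just mentioned):

$ curl -L -XPOST -d 'name=daniel' http://example.com/old    # the forced POST is re-applied on the redirected-to request too
$ curl -L -d 'name=daniel' --post301 http://example.com/old  # no -X: curl follows its normal rules, and --post301 keeps the POST across a 301 when that is what you want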

But even without following redirects, just throwing in an extra -X “to clarify” leads users into believing that -X has a function to serve there when it doesn’t. It leads the user to use that -X in his or her next command line too, which then may use redirects or something else that makes it unsuitable.

Perhaps the biggest mistake you can make with -X, and one that now actually leads to curl showing a “warning”, is to use -XHEAD on an ordinary command line (one that isn’t using -I). Like this (I’ll display it crossed over to make it abundantly clear that this is a bad command line):

$ curl -XHEAD http://example.com/

… which will have curl act as if it is sending a GET while it actually sends a HEAD. A response to a HEAD never has a body, although it announces the size of the body exactly like a GET response would, which mostly leads to curl sitting there waiting for a response body that simply won’t arrive, so it hangs.

Starting with this change, this is the warning it’ll show for the above command line:

Warning: Setting custom HTTP method to HEAD may not work the way you want.

http://daniel.haxx.se/blog/2015/09/11/unnecessary-use-of-curl-x/


Daniel Stenberg: http2 explained in markdown

Friday, September 11, 2015, 09:27

After twelve releases and over 140,000 downloads of my explanatory document “http2 explained“, I eventually did the right thing and converted the entire book over to markdown syntax and put the book up on gitbook.com.

Better output formats: epub, MOBI and PDF are now produced, and everything happens on every commit.

Better collaboration: GitHub and regular pull requests work fine with text content, instead of weird binary word-processor file formats.

Easier for translators: with plain-text commits to aid in tracking changes, and with the images in a separate directory, writing and maintaining translated versions of the book should be less tedious.

I’m amazed and thrilled that we already have Chinese, Russian, French and Spanish translations and I hear news about additional languages in the pipe.

I haven’t yet decided what to do about “releases” now, as we update everything on every push, so the latest version is always available to read. Go to http://daniel.haxx.se/http2/ to find out the latest about the document and the most updated version of it.

Thanks everyone who helps out. You’re the best!

http://daniel.haxx.se/blog/2015/09/11/http2-explained-in-markdown/


Christopher Arnold: Bluetooth LE beacons and the coming hyper-local web of the physical world

Friday, September 11, 2015, 05:49


Philz Coffee mobile single-serving brewing truck at San Francisco Marina
Recently, my wife and I were riding bikes around Fort Mason area on the San Francisco peninsula.  Lo-and-behold my wife sees someone with a Philz coffee cup walk by.  She says to herself, “Wait a tick! There’s no Philz in this neighborhood!”  San Franciscans are tribal about their preferred coffees.  We typically know all the physical locations of our favorite roasters and brewers.  My wife knows I’m a Philz-devotee.  So seeing a Philz cup outside of its natural habitat caught her attention.  Minutes later, we ran into the new Philz truck, parked on Marina blvd.  Booyah!

Phil Jaber in the Original Philz Coffee Shop

This is the first time I had thought about the half-life of a coffee cup in the wild.  The various coffee roasting factions demarcate their turf using the coffee cups they give visitors as a sort of viral advertising strategy.  And the radius of inspiration lasts as long as it takes for a person to consume their beverage, which may be five minutes if a person is walking and drinking at a moderate pace.  This is plenty of time for one customer to inspire Pavlovian thirst reactions in a dozen passersby.

This brings me to the emerging tech trend of the season, the use of bluetooth beacons for transmitting location signals and web content.  (See Apple iBeacon and the Google Eddystone initiatives for the nitty gritty)  We can assume that first applications of these tools will be marketing related like the coffee cups, sending signals that span from a few feet to fifty feet depending on intensity of the signal wavelength.  But one can imagine a scenario where beacons of hundreds of varieties might talk to our wearable devices or phones, without intruding on our attention, in order to sift out topics, events and messages of specific interest to us personally.  As a first step, something has to be written to be read.

Tweetie Nearby View
There have been some interesting initiatives around hyper-local web content discovery in Augmented Reality style applications.  My favorites include Yelp Monocle, which spatially rendered restaurant reviews over the viewfinder of a phone's camera; Loren Brichter's Tweetie app, which allowed users to point their phone in any direction to see what was being tweeted nearby; and the Shopkick app, which sends high-pitched audio signals, beyond human auditory range, from a Shopkick transmitter to customers' phones that are listening for them.  All of these are app-specific signals.  It becomes very interesting when these kinds of strategies are done in an open fashion that doesn't require a special app to consume them.  The web itself is the best means to move this kind of use case forward.  That is exactly what is happening with this new push to leverage bluetooth.  And of course bluetooth signals decay rapidly over short distances.  So they are only relevant to people nearby, for whom content can be tailored.

Why is the idea of the decaying signal good?  Think about the movie Chef, and the use case that the protagonist had to tweet their location and updates while they drove across the country.  Doesn't make a whole lot of sense to use a global platform for a location-specific service does it?  Great marketing film for Twitter, but a ridiculous premise.  Chefs need to talk to their communities, not the world, when publicizing today's menu.  And a web where everyone has to manually follow sources and manage inbound information meticulously is a web that will inundate our attention.  When it comes to the things that can matter to us in the tangible world, we need it to speak to us when it's relevant and shut up at other times.  Otherwise, the signal/utility of the web gets lost in the noise.

Google's innovation with the "Eddystone URL" introduces the concept of the beacon being a web server.  The URL a beacon transmits can utilize any modern browser to connect the user to a broad array of web content associated with the specific location without needing a custom application to read it.  Every smart phone in existence can render and interact with web content published in http. 

Admin view of Estimote beacon 
Beacon developer Estimote is joining the Eddystone initiative, soon to support the new URL broadcasting as part of their existing line of bluetooth beacons.  Their current SDKs allow for custom app developers to map locations and tailor apps specific to those locations.  Once Eddystone URLs are integrated they will be readable by notification management tools like Google Now and probably soon custom scanners, mobile web browsers and lock-screen apps.

Once Google exposes support for beacon recognition in Android, the adoption of bluetooth contextual beacons could become fairly mainstream in large metropolitan areas.  (It will be even better if it's done in the Android Open Source Project so that Android-forked initiatives like Xiaomi and Kindle Fire can benefit from the innovations and efforts of "beacon publishers".)  What this could do for our use of Internet tools in daily life is simplify a great many daily tasks.  We will no longer need to have an app specifically to check bus schedules or get restaurant reviews, make reservations, etc.  Those scenarios will be able to happen on demand, as needed, with very little hassle for us as users.

In the coming years the companies that provide our phones, browsers and other communications tools will be innovating ways to surface and manage these content signals as they proliferate.  So it is unlikely to be something many of us will need to manage actively.  But very soon the earliest iterations of augmented reality apps will start to surface in our mobile devices in compelling new ways that will allow the physical environment around us to animate and inform us when we want it to.  And it will be easy to ignore at all other times.

One step beyond the mere receiving and sorting of signals is the concept that we might transmit our own signals to beacon receivers in our proximity one day.  Imagine the concept of Vendor Relationship Management, popularized by Doc Searls, a means of us transmitting our preferences to the outside world and having information and services tailor themselves to us.  In a world where we express our wants, needs, opinions digitally, the digital-physical world might in turn tailor messages to us without need for physical action. 

First step for this wave of innovation to be truly useful for us will be to have all the digital world's wealth of subliminal content available to us as needed, nearby.  Second step will be the discovery/revealing in a manageable way.  (This is already in process.)  Third step will be the assertion of preference through the tools the OS, apps and browsers provide.  I think this is the area that will benefit substantially from developer innovation.

http://ncubeeight.blogspot.com/2015/09/bluetooth-le-beacons-and-coming-hyper.html


Jonathan Griffin: Engineering Productivity Update, Sept 10, 2015

Friday, September 11, 2015, 05:33

Highlights

Bugzilla: The BMO team has been busy implementing security enhancements, and as a result, BMO now supports two-factor authentication.  Setting this up is easy through BMO’s Preferences page.

Treeherder: The size of the Treeherder database dropped from ~700G to around ~80G thanks to a bunch of improvements in the way we store data.  Jonathan French is working on improvements to the Sheriff Panel.  And Treeherder is now ingesting data that will be used to support Automatic Starring, a feature we expect to be live in Q4.

Perfherder and Performance: Will Lachance has published a roadmap for Perfherder, and has landed some changes that should improve Perfherder’s performance.  Talos tests on OSX 10.10 have been hidden in Perfherder because the numbers are very noisy; the reason for this is not currently known.  Meanwhile, Talos has finally landed in mozilla-central, which should make it easier to iterate on.  Thanks to our contributor Julien Pagès for making this happen!  Joel Maher has posted a Talos update on dev.platform with many more details.

MozReview and Autoland: The web UI now uses BMO API keys; this should make logins smoother and eliminate random logouts.  Several UI improvements have been made; see full list in the “Details” section below.

Mobile Automation: Geoff Brown has landed the much-requested |mach android-emulator| command, which makes it much easier to run tests locally with an Android emulator.  Meanwhile, we’re getting closer to moving the last Talos Android tests (currently running on panda boards) to Autophone.
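A minimal sketch of how that might look from a mozilla-central checkout (the follow-up test command is an assumption and depends on which suite you want; check mach help for what is actually available):

$ ./mach android-emulator   # start the Android emulator image used by automation
$ ./mach robocop            # then run a suite against it (illustrative; substitute the suite you care about)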

Developer Workflow: Our summer intern, Vaibhav Agrawal, landed support for an initial version of |mach find-test-chunk|, which can tell you which chunk a test gets run in.  This initial version supports desktop mochitest only.  Vaibhav gave an intern presentation this week, “Increasing Productivity with Automation”.  Check it out!

General Automation: James Graham has enabled web-platform-tests-e10s on try, but they’re hidden pending investigation of tests which are problematic with e10s enabled.  Joel Maher and Kim Moir in Release Engineering have tweaked our SETA coalescing, so that lower-prioritized jobs are run at least every 7th push, or every hour; further increasing the coalescing window will wait until we have better automatic backfilling in place.  Meanwhile, the number of chunks of mochitest-browser-chrome has been increased from 3 to 7, with mochitest-devtools-chrome soon to follow.  This will make backfilling faster, as well as improving turnaround times on our infrastructure.

hg.mozilla.org: The bzexport and bzpost extensions have been updated to support BMO API keys.

WebDriver and Marionette: Several changes were made to the WebDriver specification, including new chapters on screen capture and user prompts and modal handling.

Bughunter: Our platform coverage now includes opt and debug builds of linux32, linux64, opt-asan-linux64, OSX 10.6, 10.8, 10.9, and windows7 32- and 64-bit.

The Details

bugzilla.mozilla.org
Treeherder
  • A number of optimizations have reduced Treeherder’s db size from ~700GB to ~80GB!
  • For those with Sheriff access, an improved layout for the Sheriff panel will appear soon on production https://tojonmz.wordpress.com/2015/09/04/layout-improvements-to-the-sheriff-panel/ with similar work planned soon for the Filter panel
  • Among other fixes and improvements, Logviewer UI more gracefully handles additional incomplete log states: unknown log steps (1193222) and expired jobs (1193222)
  • A new Help menu has been added with useful links for all users (1199078)
  • We are now storing the data required for the autostarring project. That means storing every single crash/test failure/log error line from the structured log (1182464).
Perfherder/Performance Testing
MozReview/Autoland
  • Autoland VCS interaction performance improvements
  • MozReview web UI now using BMO API keys
  • Logging in is smoother and faster, and random logouts should be a thing of the past
  • MozReview Mercurial client extension now requires BMO API keys
  • No more defining password and/or cookie in plaintext config files
  • “Ship It” is now determined by reviewer status so random people can’t grant Ship It
  • Messaging during pushing is more clear about what is happening
  • “commitid” hidden metadata is now preserved across `hg rebase`
  • Review buttons and text links in web interface are now consolidated to reduce confusion
  • Empty Try syntax is now rejected properly
  • Try trigger button has been moved to an “Automation” drop-down
  • pull and import commands are now displayed
  • Bugzilla’s commit list is now properly ordered
TaskCluster Support
  • armenzg – Work is underway to support running Buildbot test jobs through TaskCluster
  • ted – successful Mac cross build with Breakpad last week, landing patches and fixing some fallout from Linux TC build switch to CentOS 6
Mobile Automation
Dev Workflow
  • vaibhav1994 – A basic version of find-test-chunk has landed. This will help in determining which chunk a particular test runs in, in production. It works for mochitest on desktop platforms; see the various options with ‘./mach find-test-chunk’
  • vaibhav1994 – The --rebuild-talos option is now present in trigger-bot to trigger only talos jobs a certain number of times on try.
Firefox and Media Automation
  • sydpolk – Network bandwidth limiting tests have been written; working to deploy them to Jenkins.
  • sydpolk – Streamlined Jenkins project generation based on Jenkins python API (found out about this at the Jenkins conference last week)
  • sydpolk – Migration of hardware out of MTV2 QA lab won’t happen this quarter because Network Ops people are shutting down the Phoenix data center.
  • maja_zf – mozharness script for firefox-media-tests has been refactored into scripts for running the tests in buildbot and our Jenkins instance
General Automation
  • chmanchester – psutil 3.1.1 is now installed on all test slaves as a part of running desktop unit tests. This will help our test harnesses manage subprocesses of the browser, and particularly kill them to get stacks after a hang.
  • armenzg – Firefox UI tests can now be called through a python wrapper instead of only through a python binary. This is very important since it was causing Windows UAC prompts on Release Engineering’s Windows test machines. The tests now execute well on all test platforms.
  • jgraham – web-platform-tests-e10s now running on try, but still hidden pending some investigation of tests that are only unstable with e10s active
  • SETA work is ongoing to support new platforms, tests, and jobs.  
ActiveData
  • [ekyle] Queries into nested documents pass tests, but do not scale on the large cluster; startup time is unacceptable.  Moving the work to a separate thread for quick startup, with the hope that a complicated query will not arrive until the metadata has finished collecting
  • [ekyle] Added auto-restart on ETL machines that simply stop working (using CloudWatch); probably caused by unexpected data, which must be looked into later.
  • [ekyle] SpotManager config change for r3.* instances
hg.mozilla.org
  • Add times for replication events on push
  • Reformat pushlog messages on push to be less eye bleedy
  • bzexport and bzpost extensions now support Bugzilla API Keys
WebDriver
  • [ato] Specified element interaction commands
  • [ato] New chapter on user prompts and modal handling
  • [ato] New chapter on screen capture
  • [ato] Cookies retrieval bug fixes
  • [ato] Review of normative dependencies
  • [ato] Security chapter review
Marionette
  • Wires 0.4 has been released.
  • [ato] Deployed Marionette protocol changes, bringing Marionette closer to the WebDriver standard
  • [ato] Assorted content space commands converted to use new dispatching technique
  • [jgraham] Updated wires for protocol changes
bughunter
Now running opt, debug tinderbox builds for Linux 32 bit, 64 bit; OSX 10.6, 10.8, 10.9; Windows 7 32 bit, 64 bit; opt asan tinderbox builds for Linux 64 bit.
  • bug 1180749 Sisyphus – Django 1.8.2 support
  • bug 1185497 Sisyphus – Bughunter – use ASAN builds for Linux 64 bit workers
  • bug 1192646 Sisyphus – Bughunter – use crashloader.py to upload urls to be resubmitted
  • bug 1194290 Sisyphus – Bughunter – install gtk3 on rhel6 vms

https://jagriffin.wordpress.com/2015/09/10/engineering-productivity-update-sept-10-2015/


Air Mozilla: Intern Presentations

Friday, September 11, 2015, 00:00

Maire Reavy: WebRTC privacy

Thursday, September 10, 2015, 23:33

N.B.: This is a personal blog post. The opinions expressed here represent my own and not those of Mozilla.

In the last few months, I’ve had many people reach out to me 1:1 because they are worried about the privacy aspects of WebRTC — largely because they heard discussions about “IP disclosure” (which sounded really scary and confusing to them), and I want to provide a coherent, higher level summary of what the real issues are and aren’t.

So with the help of my friends and colleagues at Mozilla and in the greater WebRTC community, I’m going to summarize the concerns and what Mozilla is doing about them.  The Chrome team is also addressing these concerns.

First some background for folks who are new to WebRTC and the topic of IP address gathering:

Real-time applications such as VoIP, video calls and online games work best when media flows directly between the endpoints, producing the lowest latency and the best user experience. In order to establish direct communications, WebRTC uses a technology called ICE. ICE works by collecting every IP address which might be used to reach your browser and shares this with the JS application.

Most users’ computers are behind some type of NAT/router/”home gateway”, which has an external IP address on the internet, and a Local Area Network (LAN) that your machines and devices connect to.  Each machine will have a local IP address on the LAN, which is normally not visible to external sites you connect to.  When a user connects to a site, the external IP address of their NAT is normally visible to the site.

However, aside from legitimate uses for real-time communications, sites can also use these IP addresses to fingerprint users and in some cases expose an external IP address the user didn’t expect to expose.

Who does this “exposure” affect?

A browser exposes an external IP address to each server that it contacts. Learning the local IP address of your machine on your local network (LAN) is not particularly useful information since these addresses are rarely unique: most LANs use one of a small number of private address ranges.  It adds no significant additional fingerprinting exposure — and blocking determined fingerprinting is very hard to do in a normal browser, if possible at all.  Someone may be able to use the local IP address to figure out who a user is when they are on a large network behind a NAT, but correlating that to a user’s identity typically requires access to the network logs for that NAT.

VPNs and anonymity

Some people attempt to use Virtual Private Networks (VPNs) to conceal their IP address.  (The type of VPN use typical here creates a “tunnel” for your internet traffic to the VPN provider, making it appear when you browse that you’re located wherever the VPN provider is.) A good (if extreme) example of this is someone hiding from a government. Many such users assume that using a VPN will obscure all their browsing and their real external IP address, which could be used to locate them.

However many VPN configurations don’t properly disable local interfaces, and so users of those VPNs might be surprised to learn that their real external IP addresses are exposed by ICE. This behavior isn’t new or unique to WebRTC: Flash, which is enabled in the vast majority of browsers, contains an ICE-like NAT traversal technology with similar properties.

For cases like this, we’ve added several new privacy controls for WebRTC in Firefox 42. These controls allow add-on developers to build features that give users the ability to selectively disable all or part of WebRTC, and which allow finer control over what information is exposed to JS applications, especially your IP address or addresses. None of these features are enabled by default due to the considerable cost of enabling them to most users (most of them can be also enabled via about:config).  There’s a Hacks blog post that discusses exactly how to use these.

It is important to realize that a VPN on its own is a poor system for protecting user anonymity. On top of that, many VPNs have serious flaws that can leak your address such as this IPV6 issue.

Even when a VPN is configured so that other IP addresses (interfaces) are disabled, other information about your browser or your computer can be used to reveal your identity. In general, it is not possible to defend against deanonymizing techniques like fingerprinting (see here) without taking extraordinary steps.  And if attackers can fingerprint you while you’re using the VPN, they can then match that fingerprint to browsing you do with the VPN off and trivially find your “real” external IP address (and thus know who/where you are, given the assumption they control or have access to your ISP’s logs).  This is one of several reasons a VPN alone isn’t a real safety-net for anonymization from strong attackers, like a government.

If your concern is weaker attackers (such as the NY Times), they can also use fingerprinting to infer your real external IP and likely location (and in many cases tons of information on you tied to the fingerprint – potentially including email, real name, and snail-mail addresses).

Is WebRTC dangerous to users in certain countries?

People whose physical safety relies on anonymity should not be depending on a VPN alone for that anonymity. There are a myriad of ways to fingerprint and de-anonymize VPN users (see some of the links above for details).  If there were one message that could get out to these users as a result of these debates and discussions, I hope it would be “VPNs will not protect you.  They aren’t capable of doing so by themselves.”

People at physical risk due to disclosure should be using the Tor Browser.  Advocates for these users should be encouraging this, and work to build a set of “best practices” and publish it widely.

Other related privacy features

Another privacy feature Firefox added is the ability to hide your external IP address from other users of WebRTC services you use. This feature is intended for users who are trying to avoid a specific other person finding them. They may want to avoid exposing their external IP address to the other party in a WebRTC call, since it could be used to locate them physically. For this case (and some other use cases), we’ve added a pref that forces all WebRTC connections to use relay (“TURN”) servers, so that no traffic goes directly between the two browsers (the service would still know who and where you are, but the other user would not).  You can also use existing prefs to force all traffic through a specific TURN server instead of one controlled by the website using WebRTC.

Future work

These are just the first set of changes. In coming releases, we will likely refine what controls we provide for WebRTC to balance usability and privacy.  We are working with both the W3 and IETF working groups to find better ways to address these issues.  I invite constructive suggestions on how best to do this.  Here are some proposals we’re trying to think through and flesh out:

  • Should some of these be enabled by default in Private Browsing windows?
  • Should we add a control in Customize that you can drag out into the menu bar, showing a list of active WebRTC RTCPeerConnections?
  • How can WebRTC be improved and leveraged to help provide secure and hard-to-block communication between users?

Users who need maximum anonymity protection will have to make some significant usability and performance sacrifices, which should probably include using a more comprehensive system, such as the Tor Browser. Firefox and other browsers designed for mainstream users are not the best choice for that set of users, but most users don’t fall into this category.  We want your help in making smart, practical choices that add value for users and give them control over their web experience without sacrificing default quality and usability.  Please send email with your suggestions to the dev-media mailing list (subscribe) or comment here.


http://mozillamediagoddess.org/2015/09/10/webrtc-privacy/


Air Mozilla: German speaking community bi-weekly meeting

Thursday, September 10, 2015, 22:00

Air Mozilla: Web QA Weekly Meeting

Thursday, September 10, 2015, 19:00

This is our weekly gathering of Mozilla's Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

https://air.mozilla.org/web-qa-weekly-meeting-20150910/


Air Mozilla: Reps weekly

Thursday, September 10, 2015, 18:00

Yunier José Sosa Vázquez: Meet the featured add-ons for September

Thursday, September 10, 2015, 17:14

The ninth month of the year has arrived and, after a break in this section, we once again bring you the featured Firefox add-ons chosen by the volunteers of the Add-ons Members Board. With the changes coming in upcoming versions of the red panda, and as far as we are able, we will keep updating the add-ons in our gallery.

Pick of the month: Facebook™ Disconnect
by morni colhkher

Facebook™ Disconnect is an efficient firewall that disconnects third-party pages making requests to Facebook and blocks all traffic from them to the social network. From the button in the Firefox toolbar you can easily enable or disable the add-on.

We also recommend: Web Developer
by chrispederick

Web Developer adds several developer tools to the browser for working with CSS, forms, images, cookies and more. With it you can also perform a number of actions, including validating HTML and CSS, resizing the window to another resolution, adding CSS styles, clearing all form fields and much more.

Nominate your favorite add-ons

Don't know how? Just send an email to amo-featured@mozilla.org with the name of the add-on or its installation file, and the board members will evaluate your recommendation.

We hope you liked these extensions and find them useful. See you next month!

Source: Mozilla Add-ons Blog

http://firefoxmania.uci.cu/conoce-los-complementos-destacados-para-septiembre-2015/


Armen Zambrano: The benefits of moving per-push Buildbot scheduling into the tree

Thursday, September 10, 2015, 17:13
Some of you may be aware of the Buildbot Bridge (aka BBB) work that bhearsum worked on during Q2 of this year. This system allows scheduling TaskCluster graphs for Buildbot builders. For every Buildbot job, there is a TaskCluster task that represents it.
This is very important as it will help to transition the release process piece by piece to TaskCluster without having to move large pieces of code at once. You can have graphs that mix native TaskCluster tasks with Buildbot jobs run through the bridge.

I recently added to Mozilla CI tools the ability to schedule Buildbot jobs by submitting a TaskCluster graph (the BBB makes this possible).

Even though the initial work for the BBB is intended for Release tasks, I believe there are various benefits if we moved the scheduling into the tree (currently TaskCluster works like this; look for the gecko decision task in Treeherder).

To read another great blog post around try syntax and scheduling please visit ahal's post "Looking beyond Try Syntax".

NOTE: Try scheduling might not have try syntax in the future so I will not talk much about trychooser and try syntax. Read ahal's post to understand a bit more.

Benefits of in-tree scheduling:

  • Per-branch scheduling matrix can be done in-tree
    • We can define which platforms and jobs run on each tree
    • TaskCluster tasks already do this
  • Accurate Treeherder job visualization
    • Currently, jobs that run through Buildbot do not necessarily show up properly
    • Jobs run through TaskCluster show up accurately
    • This is due to some issues with how Buildbot jobs are represented in between states and the difficulty of relating them
    • It could be fixed but it is not worth the effort if we're transitioning to TaskCluster
  • Control when non-green jobs are run
    • Currently on try we can't say run all unit test jobs *but* the ones that should not run by default
    • We would save resources (do not run non-green jobs) and confusion for developers (do not have to ask why is this job non-green)
  • The try syntax parser can be done in-tree
    • This allows for improving and extending the try parser
    • Unit tests can be added
    • The parser can be tested with a push
    • try parser changes become atomic (they won't affect all trees at once and can ride the trains)
  • SETA analysis can be done in-tree
    • SETA changes can become atomic (they won't affect all trees at once and can ride the trains)
    • We would not need to wait on Buildbot reconfigurations for new changes to be live.
  • Per push scheduling analysis can be done in-tree
    • We currently only will schedule jobs for a specific change if files for that product are being touched (e.g. Firefox for Android for mobile/* changes)
  • PGO scheduling can be done in-tree
    • PGO scheduling changes become atomic (they won't affect all trees at once and can ride the trains)
  • Environment awareness hooks (new)
    • If the trees are closed, we can teach the scheduling system to not schedule jobs until further notice
    • If we're backlogged, we can teach the scheduling system to not schedule certain platforms or to schedule a reduced set of jobs or to skip certain pushes
  • Help the transition to TaskCluster
    • Without it we would need to transition builds and associated tests to TaskCluster in one shot (not possible for Talos)
  • Deprecate Self-serve/BuildApi
    • Making changes to BuildApi is very difficult due to the lack of testing environments and set-up burden
    • Moving to the BBB will help us move away from this old system
There are various parts that will need to be in place before we can do this. Here are some that I can think of:
  • TaskCluster's big-graph scheduling
    • This is important since it will allow for the concept of coalescing to exist in TaskCluster
  • Task prioritization
    • This is important if we're to have different levels of priority for jobs on TaskCluster
    • On Buildbot we have release repositories with the highest priority and the try repo having the lowest
    • We also currently have the ability to raise/decrease task priorities through self-serve/buildapi. This is used by developers, especially on Try, to allow their jobs to be picked up sooner.
  • Treeherder to support LDAP authentication
    • It is a better security model for scheduling changes
    • If we want to move away from self-serve/buildapi we need this
  • Allow test jobs to find installer and test packages
    • Currently test jobs scheduled through the BBB cannot find the Firefox installer and the test packages
Can you think of other benefits? Can you think of problems with this model? Are you aware of other pieces needed before moving forward to this model? Please let me know!



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/crb0RWau-vI/the-benefits-of-moving-per-push.html


