Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/

This feed is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The rebroadcast was created automatically at the request of readers of this RSS feed.

Anthony Jones: YouTube, MSE and Firefox 37

Monday, March 23, 2015, 04:44
Being invisible is important when you build infrastructure. You don't notice what your browser does for you unless it is doing a poor job. We have been busy making Firefox video playback more robust, more asynchronous, faster and better.

You may have some memory of selecting between high and low quality on YouTube. When you switched it would stop the video and buffer the video at the new quality. Now it defaults to Auto but allows you to manually override. You may have noticed that the Auto mode doesn't stop playing when it changes quality. Nobody really noticed, but a tiny burden was lifted from users. You need to know exactly one less thing to watch videos on YouTube and many other sites.

This Auto mode is otherwise known as DASH, which stands for Dynamic Adaptive Streaming over HTTP. Flash has supported adaptive streaming for some time. In HTML5 video, DASH is supported on top of an API called MSE (Media Source Extensions). MSE allows Javascript to directly control the data going into the video element. This allows DASH to be supported in Javascript, along with some other things that I'm not going to go into.
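The API surface is small. Here is a minimal sketch of feeding data into a video element with MSE (the codec string and segment URL are placeholder examples; a real DASH player would choose each segment's quality based on measured bandwidth):

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // The MIME/codec string must match the segments being appended.
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');
  const response = await fetch('segments/video-720p-001.m4s'); // hypothetical segment URL
  buffer.appendBuffer(await response.arrayBuffer());
});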


It has taken a surprising amount of work to make this automatic. My team has been working on adding MSE to Firefox for a couple of years now as well as adding MP4 support on a number of platforms. We're finally getting to the point where it is working really well on Windows Vista and later in Firefox 37 beta. I know people will ask, so MSE is coming soon for Mac, and it is coming later for Linux and Windows XP.

Making significant changes isn't without its pain but it is great to finally see the light at the end of the tunnel. Firefox beta, developer edition and nightly users have put up with a number of teething problems. Most of them have been sorted out. I'd like to thank everyone who has submitted a crash report, written feedback or filed bugs. It has all helped us to find problems and make the video experience better.

Robustness goes further than simply fixing bugs. To make something robust it is necessary to keep simplifying the design and to create re-usable abstractions. We've switched to using a thread pool for decoding, which keeps the number of threads down. Threads use a lot of address space, which is at a premium in a 32 bit application.

We've used a promise-like abstraction to make many things non-blocking. It makes chaining asynchronous operations much simpler. These promises are like Javascript promises, except being C++ they also guarantee you get called back on the right thread.
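In JavaScript terms, that extra guarantee looks roughly like this (a toy analogy only — the real abstraction is C++, and the queue object with a dispatch method here is a hypothetical stand-in for a target thread):

// Run the continuation on an explicitly chosen queue/thread,
// not on whichever thread happened to resolve the promise.
function thenOn(promise, queue, callback) {
  return promise.then(value =>
    new Promise(resolve => queue.dispatch(() => resolve(callback(value)))));
}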

We're working towards getting all the complex logic on a single thread, with all the computation done in a thread pool. Putting the video playback machinery on a single thread makes it much clearer which operations are synchronous and which ones are asynchronous. It doesn't hurt performance as long as the state machine thread never blocks. In fact you get a performance win because you avoid locking and cache contention.

We're whitelisting MSE for YouTube at first, but we intend to roll it out to other sites soon. There are a couple of spec compliance issues that we need to resolve before we can remove the whitelist. Meanwhile, YouTube is looking really good in Firefox 37.



http://blog.technicaldebtcollector.com/2015/03/youtube-mse-and-firefox-37.html


Daniel Stenberg: Summing up the birthday festivities

Monday, March 23, 2015, 01:28

I blogged about curl’s 17th birthday on March 20th 2015. I’ve done similar posts in the past and they normally pass by mostly undetected and hardly discussed. This time, something else happened.

Primarily, the blog post quickly became the single most viewed blog entry I’ve ever written – and I’ve been doing this for many, many years. In just the first day it was up, I counted more than 65,000 views.

The blog post got more comments than any other blog post I’ve ever done. They have probably stopped by now, but there are 60 of them, almost every one saying congratulations and/or thanks.

The posting also got discussed on both Hacker News and reddit, totaling more than 260 comments, most of them in a positive spirit.

The initial tweet I made about my blog post is the most retweeted and starred tweet I’ve ever posted. At least 87 retweets and 49 favorites (it might even grow a bit more over time). Others subsequently also tweeted the link hundreds of times. I got numerous replies and friendly call-outs on twitter saying “congrats” and “thanks” in many variations.

Spontaneously (i.e. not initiated or requested by me, but most probably because of a comment on Hacker News), I also suddenly started to get donations from the curl web site’s donation page (to PayPal). Within 24 hours of my post, I had received 35 donations from friendly fans who donated a total sum of 445 USD. A quick count revealed that the total number of donations through the entire history of curl’s lifetime was 43 before this day. In one day we basically got as many as we had gotten in the first 17 years.

Interesting data from this donation “race”: donations varied from 1 USD (yes, one dollar) to 50 USD, and the average donation was 12.7 USD.

Let me end this summary by thanking everyone who in various ways made the curl birthday extra fun by being nice and friendly, and some even donating some of their hard-earned money. I am honestly touched by the attention and all the warmth and positivity. Thank you for proving internet comments can be this good!

http://daniel.haxx.se/blog/2015/03/22/summing-up-the-birthday-festivities/


John O'Duinn: “The Race for Space” by Public Service Broadcasting

Monday, March 23, 2015, 00:57

I was happily surprised to receive this as a gift recently.

For me, the intermixing of original broadcast recordings with newly composed music worked well as an idea. Choosing which broadcasts to include was just as important as composing the right music.

I liked how the composers framed the album around 9 pivotal moments from 1957 (the launch of Sputnik) to 1972 (Apollo 17, the last Apollo departing the moon). Obviously, there were a lot of broadcasts to choose from, and I liked their choices – some of which I’d never heard before. Kennedy’s “We choose to go to the moon” speech, a homage to Valentina Tereshkova (the first woman in space), Apollo 8’s “see you on the flip side” (the earthrise photo taken by Apollo 8 is still one of my favourites), and the tense interactions of all the ground and flight teams in the final seconds of Apollo 11’s descent to landing (including handling the 1202 and 1201 errors!).

All heady stuff and well worth a listen.

http://oduinn.com/blog/2015/03/22/the-race-for-space-by-public-service-broadcasting/


Chris Pearce: Replacing Lenovo optical drive with second hard drive: The Lenovo adapter is disappointing

Sunday, March 22, 2015, 11:38
I recently ordered a Lenovo Serial ATA Hard Drive Bay Adapter III for my Lenovo T510 laptop. This can hold a hard drive, and replaces the DVD/CD-ROM drive in your laptop. This enables your laptop to run a second hard drive.

I've used my optical drive two, maybe three times since getting the laptop, so swapping it for another hard drive seems like a good trade for me.

The Lenovo drive bay itself works fine, but I'm still disappointed in Lenovo's product.

When installed, the drive bay looks like this:
[Photo: Lenovo Serial ATA Hard Drive Bay Adapter installed in a Lenovo T510]
The problem here is that there's a gap of approximately 3mm (~0.12 inches) between the top of the drive bay and the ceiling of the optical disk cavity. This means the drive bay can wobble vertically, so much so that I feel the need to tape it in place to stop it flopping around. This looks ridiculous.

Secondly, in order to install your hard drive inside the Lenovo drive bay, you need a hard drive cover. This is the metal cover that encases the hard drives shipped in Lenovo laptops. The covers normally have rubber bumpers/rails to stop the drive moving around. You need to take the bumpers off to install your drive into the hard drive bay.

The hard drive cover looks like this:

[Photo: Lenovo hard drive cover]

And with a hard drive in it, the hard drive cover looks like this:

[Photo: Lenovo hard drive cover encasing a hard drive]

Note the screws. The drive bay has notches which the screws snap into, holding the drive securely inside the drive bay. The screws are possibly the only important bit here; you probably don't need the drive cover itself to install the drive into the bay, just the screws, since they're what hold the drive in place.

The frustrating thing is that nothing on the Lenovo web site tells you that you need a drive cover to install a drive into the drive bay. My drive bay arrived and I had to loot my old laptop's drive cover in order to install a new drive into my current laptop.

And I also couldn't find the drive covers listed on Lenovo's web site. Presumably if you buy a laptop hard drive from Lenovo they come with this cover, and presumably Lenovo use this as a way to force you to buy all your laptop hard drives directly from Lenovo.

That's the sort of behaviour I'd expect from Apple, not Lenovo.

Thankfully Ann at IT was able to figure out how to order the drive covers separately. Thanks Ann!

Overall, the product was easy to install (once I had a drive cover) and works fine (apart from the wobble), but I'm still disappointed. Next time, I'll try one of newmodeus' Lenovo drive caddies.

Update 22 March 2015: This blog post now has a Russian translation; it is available on softdroid.net: Замена оптического дисковода вторым жестким диском на Lenovo (“Replacing the optical drive with a second hard drive on a Lenovo”).

http://blog.pearce.org.nz/2012/07/replacing-lenovo-optical-drive-with.html


Tantek Celik: Dublin Core Application Profiles — A Brief Dialogue

Saturday, March 21, 2015, 03:41

IndieWebCamp Cambridge 2015 is over. Having finished their ice cream and sorbet while sitting on a couch at Toscanini’s watching it snow, A and T drift from the topics of sameAs, reuse, and general semantics to a mention of Dublin Core Application Profiles.

  1. A:
    Dublin Core Application Profiles could be useful for a conceptual basis for metadata interoperation.
  2. T:
    (Yahoos for dublin core application profiles, clicks first result)
  3. T:
    Dublin Core Application Profile Guidelines (SUPERSEDED, SEE Guidelines for Dublin Core Application Profiles)
  4. T:
    Kind of like how The Judean People’s Front was superseded by The People’s Front of Judea?
  5. A:
    (nervous laugh)
  6. T:
    Guidelines for Dublin Core Application Profiles
  7. T:
    Replaces: http://dublincore.org/documents/2008/11/03/profile-guidelines/
  8. T:
    Hmm. (clicks back)
  9. T:
    Dublin Core Application Profile Guidelines
  10. T:
    Is Replaced By: Not applicable, wait, isn’t that supposed to be an inverse relationship?
  11. A:
    I’m used to this shit.
  12. T:
    (nods, clicks forward, starts scrolling, reading)
  13. T:
    We decide that the Library of Congress Subject Headings (LCSH) meet our needs. - I’m not sure the rest of the world would agree.
  14. A:
    No surprises there.
  15. T:
    The person has a name, but we want to record the forename and family name separately rather than as a single string. DCMI Metadata Terms has no such properties, so we will take the properties foaf:firstName and foaf:family_name
  16. T:
    Wait what? Not "given-name" and "family-name"? Nor "first-name" and "last-name" but "firstName" and "family_name"?!?
  17. A:
    Clearly it wasn’t proofread.
  18. T:
    But it’s in the following table too. foaf:firstName / foaf:family_name
  19. A:
    At least it’s internally consistent.
  20. A:
    Oh, this is really depressing.
  21. A:
    Did they even read the FOAF spec or did they just hear a rumour?
  22. T:
    (opens text editor)

http://tantek.com/2015/079/b1/dublin-core-application-profiles


Air Mozilla: Webdev Beer and Tell: March 2015

Saturday, March 21, 2015, 00:00

Web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

https://air.mozilla.org/webdev-beer-and-tell-march-2015/


Kim Moir: Scaling Yosemite

Friday, March 20, 2015, 21:50
We migrated most of our Mac OS X 10.8 (Mountain Lion) test machines to 10.10.2 (Yosemite) this quarter.

This project had two major constraints:
1) Use the existing hardware pool (~100 r5 Mac minis).
2) Keep wait times sane [1]. (The machines are constantly running tests most of the day due to the distributed nature of the Mozilla community, and this had to continue during the migration.)

So basically upgrade all the machines without letting people notice what you're doing!

[Photo: Yosemite Valley - Tunnel View Sunrise by ©jeffkrause, Creative Commons by-nc-sa 2.0]

Why didn't we just buy more minis and add them to the existing pool of test machines?
  1. We run performance tests and thus need to have all the machines running the same hardware within a pool so performance comparisons are valid.  If we buy new hardware, we need to replace the entire pool at once.  Machines with different hardware specifications = useless performance test comparisons.
  2. We tried to purchase some used machines with the same hardware specs as our existing machines.  However, we couldn't find a source for them.  As Apple stops production of old mini hardware each time they announce a new one, they are difficult and expensive to source.
[Photo: Apple Pi by ©apionid, Creative Commons by-nc-sa 2.0]

Given that Yosemite was released last October, why are we only upgrading our test pool now? We wait until the population of users running the new platform [2] surpasses that of the old one before switching.

Mountain Lion -> Yosemite is an easy upgrade on your laptop.  It's not as simple when you're updating production machines that run tests at scale.

The first step was to pull a few machines out of production and verify the Puppet configuration was working. In Puppet, you can specify commands to run only on certain operating system versions, so we implemented several commands to accommodate changes for Yosemite. For instance, the default scrollbar behaviour had to be changed, new services that interfere with test runs needed to be disabled, debug tests required new Apple security permissions to be configured, etc.
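In outline, that version gating looks like this (a hypothetical Puppet sketch, not our actual manifests — the service name is invented; the version comes from Facter's standard macosx_productversion_major fact):

# Only apply this change on Yosemite machines.
if $macosx_productversion_major == '10.10' {
  service { 'com.example.interfering-service':  # hypothetical service name
    ensure => stopped,
    enable => false,
  }
}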

Once the Puppet configuration was stable, I updated our configs so that people could run tests on Try, and allocated a few machines to this pool. We opened bugs for tests that failed on Yosemite but passed on other platforms. This was a very iterative process: run tests on try, look at failures, file bugs, fix test manifests. Once we had the opt (functional) tests in a green state on try, we could start the migration.

Migration strategy
  • Disable selected Mountain Lion machines from the production pool
  • Reimage as Yosemite, update DNS and let them puppetize
  • Land patches to disable Mountain Lion tests and enable corresponding Yosemite tests on selected branches
  • Enable Yosemite machines to take production jobs
  • Reconfig so the buildbot master enables new Yosemite builders and schedules jobs appropriately
  • Repeat this process in batches
    • Enable Yosemite opt and performance tests on trunk (gecko >= 39) (50 machines)
    • Enable Yosemite debug (25 more machines)
    • Enable Yosemite on mozilla-aurora (15 more machines)
We currently have 14 machines left on Mountain Lion for mozilla-beta and mozilla-release branches.

As I mentioned earlier, the two constraints with this project were to use the existing hardware pool that constantly runs tests in production and to keep the existing wait times sane. We encountered two major problems that impeded that goal.

It's a compliment when people say things like "I didn't realize that you updated a platform," because it means the upgrade did not cause large-scale fires for all to see. So it was nice to hear that from one of my colleagues this week.

Thanks to philor, RyanVM and jmaher for opening bugs with respect to failing tests and greening them up.  Thanks to coop for many code reviews. Thanks dividehex for reimaging all the machines in batches and to arr for her valiant attempts to source new-to-us minis!

References
[1] Wait times represent the time from when a job is added to the scheduler database until it actually starts running. We usually try to keep this under 15 minutes, but it really varies with how many machines we have in the pool.
[2] We run tests for our products on a matrix of operating systems and operating system versions. The terminology for operating system x version in many release engineering shops is a platform. To add to this, the list of platforms we support varies across branches. For instance, if we're going to deprecate a platform, we'll let this change ride the trains to release.

Further reading
Bug 1121175: [Tracking] Fix failing tests on Mac OSX 10.10 
Bug 1121199: Green up 10.10 tests currently failing on try 
Bug 1126493: rollout 10.10 tests in a way that doesn't impact wait times
Bug 1144206: investigate what is causing frequent talos failures on 10.10
Bug 1125998: Debug tests initially took 1.5-2x longer to complete on Yosemite


Why don't you just run these tests in the cloud?
  1. The Apple EULA severely restricts virtualization on Mac hardware. 
  2. I don't know of any major cloud vendors that offer the Mac as a platform.  Those that claim they do are actually renting racks of Macs on a dedicated per host basis.  This does not have the inherent scaling and associated cost saving of cloud computing.  In addition, the APIs to manage the machines at scale aren't there.
  3. We manage ~350 Mac minis.  We have more experience scaling Apple hardware than many vendors. Not many places run CI at Mozilla scale :-) Hopefully this will change and we'll be able to scale testing on Mac products like we do for Android and Linux in a cloud.

http://relengofthenerds.blogspot.com/2015/03/scaling-yosemite.html


Emma Irwin: P2PU Course in a Box & Mozilla Community Education

Friday, March 20, 2015, 21:14

Last year I created my first course on the P2PU platform, titled ‘Hacking Open Source Participation’, and through that fantastic experience stumbled across a newer P2PU project called Course in a Box. Built on the Jekyll blogging software, Course in a Box makes it easy to create online educational content powered by GitHub Pages.

As awesome as this project is, there were a number of challenges I needed to solve before adopting it for Mozilla’s Community Education Platform:

 Hierarchy

Jekyll is a blog-aware, static site generator. It uses template and layout files + markdown + CSS to display posts. Course in a Box comes with a top-level category for content called modules, and within those modules is the content – which works beautifully for a single-course purpose.

The challenge is that we need to write education and training materials on a regular basis, and creating multiple Course in a Box(es) would be a maintenance nightmare. What I really needed was a way to build multiple courses under one or more topics versus the ‘one course’ model. To do that, we needed to build out a hierarchy of content.

What I did

Visualized the menu moving from a list of course modules

[screenshot: original single-course menu of modules]

to a list of course topics.

[screenshot: new menu organized by course topics]

So Marketpulse and DevRel (for example) are course topics. Topics contain courses, which in turn contain modules.

On the technical side, I added a new variable called submodules to the courses.yml data file.

[screenshot: the new submodules variable in courses.yml]
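In outline, the data file gains roughly this shape (a hypothetical reconstruction from the description here — the exact keys Course in a Box uses may differ, and marketpulse_intro is an invented example):

# _data/courses.yml
- topic: reps
  submodules:
    - reps_mentor_training   # topic-prefixed module, as described below
- topic: marketpulse
  submodules:
    - marketpulse_intro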

Submodules are prefixed with the topic they belong ‘under’, for example: reps_mentor_training is a module in the topic reps.  This is also how module folders are named:

[screenshot: module folders named with topic prefixes, e.g. reps_mentor_training]

Using this method of prefixing modules with topics, it was super-simple to create a dropdown menu.

[screenshot: the resulting dropdown menu]

As far as Jekyll is concerned, these are all still ‘modules’, which means that even top-level topics can have associated content. This works great for a ‘landing page’ type of introduction to a topic.

Curriculum Modularity

As mentioned, Jekyll is a blogging platform, so there’s no depth or usability designed into its content architecture, and this is a problem for our goal of writing modular curriculum. I wanted to make it possible to reuse curriculum not only across our instance of Course in a Box, but across other instances at Mozilla as well.

What I did

I created a separate repository for community curriculum and made this a git submodule  in the _includes folder of Course in a Box.

[screenshot: the community curriculum repository as a git submodule in _includes]

With this submodule & Jekyll’s include() function, I was able to easily reference our modular content from a post:

{% include community_curriculum/market_pulse/FFOS/en/introduction.md %}

The only drawback is that Jekyll expects all content referenced with include() to be in a specific folder – and so having content in with design files is – gah!  But I can live with it.

And of course we can do this for multiple repositories if we need. By using a submodule, we can stick to certain versions/releases of curriculum if needed. Additionally, this makes it easier for contributors to focus on ‘just the content’ (and not get lost in Jekyll code) when they are forking and helping improve curriculum.

Finally

I’m thinking about the bigger picture of curriculum-sharing, in large part thanks to conversations with the amazing Laura Hilliger about how we can both share and remix curriculum across more than one instance of Course in a Box. The challenge is with remixed curriculum, which is essentially a new version – and whether it should ‘live’ in a different place than the original repository fork.

My current thinking is that each Course in a Box instance should have its own curriculum repository, included as a git submodule, along with any other submodules it needs that aren’t unique to the platform. This repo will contain all curriculum unique to that instance, including remixed versions of content from other repositories. (IMHO) Remixed content should not live in the original fork, as you risk becoming increasingly out of sync with the original.

So that’s where I am right now, welcoming feedback & suggestions on our Mozilla Community Education platform (with gratitude to P2PU for making it possible).


http://tiptoes.ca/p2pu-course-in-a-box-mozilla-community-education/


Air Mozilla: Webmaker Demos March 20 2015

Friday, March 20, 2015, 20:00

Doug Belshaw: Web Literacy Map v1.5

Friday, March 20, 2015, 19:31

I’m delighted to announce that, as a result of a process that started back in late August 2014, the Mozilla community has defined the skills and competencies that make up v1.5 of the Web Literacy Map.

[GIF: DiCaprio “cheers”]

Visual design work will be forthcoming with the launch of teach.webmaker.org, but I wanted to share the list of skills and competencies as soon as possible:


EXPLORING

Reading the Web

Navigation

Using software tools to browse the web

  • Accessing the web using the common features of a browser
  • Using hyperlinks to access a range of resources on the web
  • Reading, evaluating, and manipulating URLs
  • Recognizing the common visual cues in web services
  • Exploring browser add-ons and extensions to provide additional functionality

Web Mechanics

Understanding the web ecosystem and Internet stack

  • Using and understanding the differences between URLs, IP addresses and search terms
  • Identifying where data is in the network of devices that makes up the Internet
  • Exporting, moving, and backing up data from web services
  • Explaining the role algorithms play in creating and managing content on the web
  • Creating or modifying an algorithm to serve content from around the web

Search

Locating information, people and resources via the web

  • Developing questions to aid a search
  • Using and revising keywords to make web searches more efficient
  • Evaluating search results to determine if the information is relevant
  • Finding real-time or time-sensitive information using a range of search techniques
  • Discovering information and resources by asking people within social networks

Credibility

Critically evaluating information found on the web

  • Comparing and contrasting information from a number of sources
  • Making judgments based on technical and design characteristics
  • Discriminating between ‘original’ and derivative web content
  • Identifying and investigating the author or publisher of web resources
  • Evaluating how purpose and perspectives shape web resources

Security

Keeping systems, identities, and content safe

  • Recommending how to avoid online scams and 'phishing’
  • Managing and maintaining account security
  • Encrypting data and communications using software and add-ons
  • Changing the default behavior of websites, add-ons and extensions to make web browsing more secure

BUILDING

Writing the web

Composing for the web

Creating and curating content for the web

  • Inserting hyperlinks into a web page
  • Identifying and using HTML tags
  • Embedding multimedia content into a web page
  • Creating web resources in ways appropriate to the medium/genre
  • Setting up and controlling a space to publish on the Web

Remixing

Modifying existing web resources to create something new

  • Identifying remixable content
  • Combining multimedia resources to create something new on the web
  • Shifting context and meaning by creating derivative content
  • Citing and referencing original content

Designing for the web

Enhancing visual aesthetics and user experiences

  • Using CSS properties to change the style and layout of a Web page
  • Demonstrating the difference between inline, embedded and external CSS
  • Improving user experiences through feedback and iteration
  • Creating device-agnostic web resources

Coding / Scripting

Creating interactive experiences on the web

  • Reading and explaining the structure of code
  • Identifying and applying common coding patterns and concepts
  • Adding comments to code for clarification and attribution
  • Applying a script framework
  • Querying a web service using an API

Accessibility

Communicating in a universally-recognisable way

  • Using empathy and awareness to inform the design of web content that is accessible to all users
  • Designing for different cultures which may have different interpretations of design elements
  • Comparing and exploring how different interfaces impact diverse users
  • Improving the accessibility of a web page through the design of its color scheme, structure/hierarchy and markup

CONNECTING

Participating on the web

Sharing

Providing access to web resources

  • Creating and using a system to distribute web resources to others
  • Contributing and finding content for the benefit of others
  • Creating, curating, and circulating web resources to elicit peer feedback
  • Understanding the needs of audiences in order to make relevant contributions to a community
  • Identifying when it is safe to contribute content in a variety of situations on the web

Collaborating

Creating web resources with others

  • Choosing a Web tool to use for a particular contribution/collaboration
  • Co-creating Web resources
  • Configuring notifications to keep up-to-date with community spaces and interactions
  • Working towards a shared goal using synchronous and asynchronous tools
  • Developing and communicating a set of shared expectations and outcomes

Community Participation

Getting involved in web communities and understanding their practices

  • Engaging in web communities at varying levels of activity
  • Respecting community norms when expressing opinions in web discussions
  • Making sense of different terminology used within online communities
  • Participating in both synchronous and asynchronous discussions

Privacy

Examining the consequences of sharing data online

  • Debating privacy as a value and right in a networked world
  • Explaining ways in which unsolicited third parties can track users across the web
  • Controlling (meta)data shared with online services
  • Identifying rights retained and removed through user agreements
  • Managing and shaping online identities

Open Practices

Helping to keep the web democratic and universally accessible

  • Distinguishing between open and closed licensing
  • Making web resources available under an open license
  • Contributing to an Open Source project
  • Advocating for an open web

Thanks go to the dedicated Mozilla contributors who steadfastly worked on this over the last few months. They’re listed here. We salute you!

Any glaring errors? Typos? Let us know! You can file an issue on GitHub.


Questions? Comments? Try and put them in the GitHub repo, but you can also grab me on Twitter (@dajbelshaw) or by email (doug@mozillafoundation.org).

http://literaci.es/web-literacy-map-v1-5


Michael Kaply: CCK2 2.0.21 released

Friday, March 20, 2015, 18:04

I've released a new version of the CCK2. New features include:

  • Setting a lightweight theme
  • Clearing preferences
  • Setting user preference values (versus default or locking)
  • More control over the CA trust string
  • Security devices are loaded at startup and fail gracefully (so multiple platforms can be specified)
  • Redesign of security devices dialog
  • Distribution info on about dialog is no longer bolded
  • Proxy information can be set in the preference page (if you want user values, not default/locked)
  • Better migration of bookmarks between versions
  • Better errors for cert download failures

Bugs fixed include:

  • International characters not working properly
  • CA trust string not being used
  • Unable to set the plugin.disable_full_page_plugin_for_types preference
  • Bookmarks not deleted when migrating from CCK Wizard

If you find bugs, please report them at cck2.freshdesk.com.

Priority support is given to folks with support subscriptions. If the CCK2 is important to your company, please consider purchasing one.

https://mike.kaply.com/2015/03/20/cck2-2-0-21-released/


Mozilla Reps Community: Reps Weekly Call – March 19th 2015

Friday, March 20, 2015, 15:26

Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

[photo: reps-ivorycoast]

Summary

  • FOSSASIA 2015 Updates
  • Maker Party Jaipur
  • Update on Council + Peers meetup
  • Education

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

https://blog.mozilla.org/mozillareps/2015/03/20/reps-weekly-call-march-19th-2015/


Gregory Szorc: New High Scores for hg.mozilla.org

Friday, March 20, 2015, 12:55

It's been a rough week.

The very short summary of events this week is that both the Firefox and Firefox OS release automation have been performing a denial of service attack against hg.mozilla.org.

On the face of it, this is nothing new. The release automation is by far the top consumer of hg.mozilla.org data, requesting several terabytes per day via several million HTTP requests from thousands of machines in multiple data centers. The very nature of their existence makes them a significant denial of service threat.

Lots of things went wrong this week. While a post mortem will shed light on them, many fall under the umbrella of “release automation was making more requests than it should have and was doing so in a way that both increased the chances of an outage occurring and increased the chances of a prolonged outage”. This resulted in the hg.mozilla.org servers working harder than they ever have. As a result, we have some new high scores to share.

  • On UTC day March 19, hg.mozilla.org transferred 7.4 TB of data. This is a significant increase from the ~4 TB we expect on a typical weekday. (Even more significant when you consider that most load is generated during peak hours.)

  • During the 1300 UTC hour of March 17, the cluster received 1,363,628 HTTP requests. No HTTP 503 Service Not Available errors were encountered in that window! 300,000 to 400,000 requests per hour is typical.

  • During the 0800 UTC hour of March 19, the cluster transferred 776 GB of repository data. That comes out to at least 1.725 Gbps on average (I didn't calculate TCP and other overhead). Anything greater than 250 GB per hour is not very common. No HTTP 503 errors were served from the origin servers during this hour!
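For the bandwidth figure, the arithmetic is simply the hour's payload spread over 3600 seconds (decimal gigabytes, ignoring TCP and other overhead as the post notes):

776 GB x 8 bits/byte = 6.208 x 10^12 bits; 6.208 x 10^12 bits / 3600 s ≈ 1.72 x 10^9 bits/s ≈ 1.72 Gbps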

We encountered many periods where hg.mozilla.org was operating at more than twice its normal and expected operating capacity and it was able to handle the load just fine. As a server operator, I'm proud of this. The servers were provisioned beyond what is normally needed of them and it took a truly exceptional event (or two) to bring the service down. This is generally a good way to do hosted services (you rarely want to be barely provisioned because you fall over at the slightest change, and you don't want to be grossly over-provisioned because you are wasting money on idle resources).

Unfortunately, the hg.mozilla.org service did fall over. Multiple times, in fact. There is room to improve. As proud as I am that the service operated well beyond its expected limits, I can't help but feel ashamed that it did eventually cave in, even under extreme load, and that people are probably making under-informed general assumptions like “Mercurial can't scale.” The simple fact of the matter is that clients cumulatively generated an exceptional amount of traffic to hg.mozilla.org this week. All servers have capacity limits. And this week we encountered the limit for the current configuration of hg.mozilla.org. Cause and effect.

http://gregoryszorc.com/blog/2015/03/19/new-high-scores-for-hg.mozilla.org


Daniel Stenberg: curl, 17 years old today

Friday, March 20, 2015, 10:04

Today we celebrate the fact that it is exactly 17 years since the first public release of curl. I have always been the lead developer and maintainer of the project.

Birthdaycake

When I released that first version in the spring of 1998, we had only a handful of users and a handful of contributors. curl was just a little tool and we were still a few years out before libcurl would become a thing of its own.

The tool we had been working on for a while was still called urlget in the beginning of 1998, but as we had just recently added FTP upload capabilities, that name had become wrong and I decided cURL would be more suitable. I picked ‘cURL’ because the word contains URL and already then the tool worked primarily with URLs, and I thought that it was fun to partly make it a real English word “curl” but also that you could pronounce it “see URL” as the tool would display the contents of a URL.

Much later, someone (I forget who) came up with the “backronym” Curl URL Request Library which of course is totally awesome.

17 years are 6209 days. During this time we’ve done more than 150 public releases containing more than 2600 bug fixes!

We started out GPL licensed, switched to MPL and then landed in MIT. We started out using RCS for version control, switched to CVS and then git. But it has stayed written in good old C the entire time.

The term “Open Source” was coined in 1998 when the Open Source Initiative was started, just the month before curl was born – preceded by only a few days by the announcement from Netscape that they would free their browser code and make an open browser.

We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same so we better just move with the rest of the world. These days we’re on github a lot. Who knows how long that will last…

We have grown to support a ridiculous amount of protocols and curl can be built to run on virtually every modern operating system and CPU architecture.

The list of helpful souls who have contributed to make curl into what it is now has grown at a steady pace all through the years, and it now holds more than 1200 names.

Employments

In 1998, I was employed by a company named Frontec Tekniksystem. I would later leave that company, and today there’s nothing left in Sweden using that name, as it was sold and most employees later fled to other places. After Frontec I joined Contactor for many years, until I started working for my own company, Haxx (which we had started on the side many years before that), during 2009. Today, I am employed by my fourth company during curl’s lifetime: Mozilla. All through this project’s lifetime, I’ve kept my work situation separate and I believe I haven’t allowed it to disturb our project too much. Mozilla is however the first one that actually allows me to spend part of my time on curl and still get paid for it!

The Netscape announcement which was made 2 months before curl was born later became Mozilla and the Firefox browser. Where I work now…

Future

I’m not one of those who spend time glazing toward the horizon dreaming of future grandness and making up plans on how to go there. I work on stuff right now to work tomorrow. I have no idea what we’ll do and work on a year from now. I know a bunch of things I want to work on next, but I’m not sure I’ll ever get to them or whether they will actually ship or if they perhaps will be replaced by other things in that list before I get to them.

The world, the Internet and transfers are all constantly changing and we’re adapting. No long-term dreams other than sticking to the very simple and single plan: we do file-oriented internet transfers using application layer protocols.

Rough estimates say we may have a billion users already. Chances are, if things don’t change too drastically without us being able to keep up, that we will have even more in the future.

1000 million users

It has to feel good, right?

I will of course point out that I did not take curl to this point on my own, but that aside, the ego-boost this level of success brings is beyond imagination. Thinking about how my code has ended up in so many places, and is driving so many little pieces of modern network technology, is truly mind-boggling – when I specifically sit down or get a reason to think about it, at least.

Most of the days, however, I tear my hair when fixing bugs, or I try to rephrase my emails to not sound old and bitter (even though I can very well be that) when I once again try to explain things to users who can be extremely unfriendly and whiny. I spend late evenings on curl when my wife and kids are asleep. I escape my family and rob them of my company to improve curl even on weekends and vacations. Alone in the dark (mostly) with my text editor and debugger.

There’s no glory and there’s no eternal bright light shining down on me. I have not climbed up onto a level where I have a special status. I’m still the same old me, hacking away on code for the project I like and that I want to be as good as possible. Obviously I love working on curl so much I’ve been doing it for over seventeen years already and I don’t plan on stopping.

Celebrations!

Yeps. I’ll get myself an extra drink tonight and I hope you’ll join me. But only one, we’ll get back to work again afterward. There are bugs to fix, tests to write and features to add. Join in the fun! My backlog is only growing…

http://daniel.haxx.se/blog/2015/03/20/curl-17-years-old-today/




Avi Halachmi: Firefox e10s Performance on Talos

Thursday, March 19, 2015, 21:57

TL;DR: Talos runs performance tests on e10s Firefox on mozilla-central, but not yet on try-server. OS X still doesn’t work. e10s results are similar overall, with a notable scroll performance improvement on Windows and Linux, and a notable WebGL regression on Windows.

Electrolysis, or e10s, is a Firefox project whose goal is to spread the work of browsing the web over multiple processes. The main initial goal is to separate the UI from web content and reduce negative effects one could have over the other.

e10s is already enabled by default on Firefox Nightly builds, and tabs which run in a different process than the UI are marked with an underline on the tab’s title.

While currently the e10s team’s main focus is correctness more than performance (one bug list and another), we can start collecting performance data and understand roughly where we stand.

jmaher, wlach and I worked to make Talos run well in e10s Firefox and provide meaningful results. The Talos harness and tests now run well on Windows and Linux, while OS X should be handled shortly (bug 1124728). Session restore tests are still not working with e10s (bug 1098357).

Talos e10s tests run by default on m-c pushes, though Treeherder still hides the e10s results (they can be unhidden from the top right corner of the Treeherder job page).

To compare e10s Talos results with non-e10s we use compare.py, a script which is available in the Talos repository. We’ve improved it recently to make such comparisons more useful. It’s also possible to use the compare-talos web tool.

Here are some numbers on Windows 7 and Ubuntu 32 comparing e10s to non-e10s Talos results of a recent build using compare.py (the output below has been made more readable but the numbers have not been modified).

At the beginning of each line:

  • A plus + means that e10s is better.
  • A minus - means that e10s is worse.

The change % value simply compares the numbers on both sides. For most tests, raw numbers are lower-is-better, and therefore a negative percentage means that e10s is better. Tests where higher-is-better are marked with an asterisk * near the percentage value (and for these values a positive percentage means that e10s is better).

Descriptions of all Talos tests and what their numbers mean.

$ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Win7 --master-revision 42afc7ef5ccb

Windows 7          [ non-e10s ]             [  e10s   ]
                   [ results  ]   change %  [ results ]

-   tresize               15.1   [  +1.7%]      15.4
-   kraken              1529.3   [  +3.9%]    1589.3
+   v8_7               17798.4   [  +1.6%]*  18080.1
+   dromaeo_css         5815.2   [  +3.7%]*   6033.2
-   dromaeo_dom         1310.6   [  -0.5%]*   1304.5
+   a11yr                178.7   [  -0.2%]     178.5
++  ts_paint             797.7   [ -47.8%]     416.3
+   tpaint               155.3   [  -4.2%]     148.8
++  tsvgr_opacity        228.2   [ -56.5%]      99.2
-   tp5o                 225.4   [  +5.3%]     237.3
+   tart                   8.6   [  -1.0%]       8.5
+   tcanvasmark         5696.9   [  +0.6%]*   5732.0
++  tsvgx                199.1   [ -24.7%]     149.8
+   tscrollx               3.0   [  -0.2%]       3.0
--- glterrain              5.1   [+268.9%]      18.9
+   cart                  53.5   [  -1.2%]      52.8
++  tp5o_scroll            3.4   [ -13.0%]       3.0


$ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Linux --master-revision 42afc7ef5ccb

Ubuntu 32          [ non-e10s ]             [  e10s   ]
                   [ results  ]    change   [ results ]

++  tresize               17.2   [ -25.1%]      12.9
-   kraken              1571.8   [  +2.2%]    1606.6
+   v8_7               19309.3   [  +0.5%]*  19399.8
+   dromaeo_css         5646.3   [  +3.9%]*   5866.8
+   dromaeo_dom         1129.1   [  +3.9%]*   1173.0
-   a11yr                241.5   [  +5.0%]     253.5
++  ts_paint             876.3   [ -50.6%]     432.6
-   tpaint               197.4   [  +5.2%]     207.6
++  tsvgr_opacity        218.3   [ -60.6%]      86.0
--  tp5o                 269.2   [ +21.8%]     328.0
--  tart                   6.2   [ +13.9%]       7.1
--  tcanvasmark         8153.4   [ -15.6%]*   6877.7
--  tsvgx                580.8   [ +10.2%]     639.7
++  tscrollx               9.1   [ -16.5%]       7.6
+   glterrain             22.6   [  -1.4%]      22.3
-   cart                  42.0   [  +6.5%]      44.7
++  tp5o_scroll            8.8   [ -12.4%]       7.7

For the most part, the Talos scores are comparable with a few improvements and a few regressions - most of them relatively small. Windows e10s results fare a bit better than Linux results.

Overall, that’s a great starting point for e10s!

A noticeable improvement on both platforms is tp5o-scroll. This test scrolls the top-50 Alexa pages and measures how fast it can iterate with vsync disabled (ASAP mode).

A noticeable regression on Windows is WebGL (glterrain) - Firefox with e10s performs roughly 3x slower than non-e10s Firefox - bug 1028859 (bug 1144906 should also help for Windows).

A supposedly notable improvement is the tsvg-opacity test; however, this test is sometimes overly sensitive to underlying platform changes (regardless of e10s), and we should probably keep an eye on it (yet again, e.g. bug 1027481).

We don’t have bugs filed yet for most Talos e10s regressions since we don’t have systems in place to alert us of them, and it’s still not trivial for developers to obtain e10s test results (e10s doesn’t run on try-server yet, and on m-c it also doesn’t run on every batch of pushes). See bug 1144120.

Snappiness is something that both the performance team and the e10s team care deeply about, and so we’ll be working closely together when it comes time to focus on making multi-process Firefox zippy.

Thanks to vladan and mconley for their valuable comments.

http://avih.github.com/blog/2015/03/19/firefox-e10s-performance-on-talos/


Air Mozilla: Reps weekly

Thursday, March 19, 2015, 19:00

Mike Conley: The Joy of Coding (Episode 6): Plugins!

Thursday, March 19, 2015, 18:13

In this episode, I took the feedback of my audience, and did a bit of code review, but also a little bit of work on a bug. Specifically, I was figuring out the relationship between NPAPI plugins and Gecko Media Plugins, and how to crash the latter type (which is necessary for me in order to work on the crash report submission UI).

A minor goof – for the first few minutes, I forgot to switch my camera to my desktop, so you get prolonged exposure to my mug as I figure out how I’m going to review a patch. I eventually figured it out though. Phew!

Episode Agenda

References:
Bug 1134222 – [e10s] “Save Link As…”/”Bookmark This Link” in remote browser causes unsafe CPOW usage warning

Bug 1110887 – With e10s, plugin crash submit UI is broken

http://mikeconley.ca/blog/2015/03/19/the-joy-of-coding-episode-6-plugins/


Monica Chew: How do I turn on Tracking Protection? Let me count the ways.

Thursday, March 19, 2015, 17:58

I get this question a lot from various people, so it deserves its own post. Here's how to turn on Tracking Protection in Firefox to avoid connecting to known tracking domains from Disconnect's blocklist:
  1. Visit about:config and turn on privacy.trackingprotection.enabled (this can also go in a user.js file; see the sketch after this list). Because this works in Firefox 35 or later, this is my favorite method. In Firefox 37 and later, it also works on Fennec.
  2. On Fennec Nightly, visit Settings > Privacy and select the checkbox "Tracking Protection".
  3. Install Lightbeam and toggle the "Tracking Protection" button in the top-right corner. Check out the difference in visiting only 2 sites with Tracking Protection on and off!
  4. On Firefox Nightly, visit about:config and turn on browser.polaris.enabled. This will enable privacy.trackingprotection.enabled and also show the checkbox for it in about:preferences#privacy, similar to the Fennec setting above. Because this only works in Nightly and also requires visiting about:config, it's my least favorite option.
  5. Do any of the above and sign into Firefox Sync. Tracking Protection will be enabled on all of your desktop profiles!
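For the about:config methods (1 and 4), the prefs can also be set from a user.js file in your profile directory so they're applied at every startup. A minimal sketch (both pref names come straight from the list above):

// user.js in the Firefox profile folder
user_pref("privacy.trackingprotection.enabled", true);
// Nightly only: also surfaces the checkbox in about:preferences#privacy
user_pref("browser.polaris.enabled", true);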

http://monica-at-mozilla.blogspot.com/2015/03/how-do-i-turn-on-tracking-protection.html


Cameron Kaiser: IonPower now beyond "doesn't suck" stage

Thursday, March 19, 2015, 06:45
Very pleased! IonPower (in Baseline mode) made it through the entire JavaScript JIT test suite tonight (which includes SunSpider and V8 but a lot else besides) without crashing or asserting. It doesn't pass everything yet, so we're not quite to phase 4, but the failures appear to group around some similar areas of code which suggest a common bug, and one of the failures is actually due to that particular test monkeying with JIT options we don't yet support (but will). Getting closer!

TenFourFox 31.6 is on schedule for March 31.

http://tenfourfox.blogspot.com/2015/03/ionpower-now-beyond-doesnt-suck-stage.html


