Anthony Jones: YouTube, MSE and Firefox 37 |
http://blog.technicaldebtcollector.com/2015/03/youtube-mse-and-firefox-37.html
|
Daniel Stenberg: Summing up the birthday festivities |
I blogged about curl’s 17th birthday on March 20th 2015. I’ve done similar posts in the past and they normally pass by mostly undetected and hardly discussed. This time, something else happened.
Primarily, the blog post quickly became the single most viewed blog entry I’ve ever written – and I’ve been doing it for many many years. Already in the first day it was up, I counted more than 65,000 views.
The blog post got more comments than any other blog post I’ve ever done. They have probably stopped coming by now, but there are 60 of them, almost every one of them saying congratulations and/or thanks.
The posting also got discussed on both Hacker News and Reddit, totaling more than 260 comments, most of them in a positive spirit.
The initial tweet I made about my blog post is the most retweeted and starred tweet I’ve ever posted. At least 87 retweets and 49 favorites (and it might grow a bit more over time). Others subsequently also tweeted the link hundreds of times. I got numerous replies and friendly call-outs on Twitter saying “congrats” and “thanks” in many variations.
Spontaneously (i.e. not initiated or requested by me, but most probably because of a comment on Hacker News), I also suddenly started to get donations through the curl web site’s donation page (to PayPal). Within 24 hours of my post, I had received 35 donations from friendly fans who donated a total sum of 445 USD. A quick count revealed that the total number of donations through the entire history of curl’s lifetime was 43 before this day. In one day we basically got as many as we had gotten in the first 17 years.
Interesting data from this donation “race”: the donations varied from 1 USD (yes, one dollar) to 50 USD, and the average donation was 12.7 USD.
Let me end this summary by thanking everyone who in various ways made the curl birthday extra fun by being nice and friendly, and some even by donating some of their hard-earned money. I am honestly touched by the attention and all the warmth and positivity. Thank you for proving internet comments can be this good!
http://daniel.haxx.se/blog/2015/03/22/summing-up-the-birthday-festivities/
|
John O'Duinn: “The Race for Space” by Public Service Broadcasting |
I was happily surprised to receive this as a gift recently.
For me, the intermixing of old original broadcasts with original composition music worked well as an idea. Choosing which broadcasts to include was just as important as composing the right music.
I liked how the composers framed the album around 9 pivotal events from 1957 (the launch of Sputnik) to 1972 (Apollo 17, the last Apollo departing the moon). Obviously, there were a lot of broadcasts to choose from, and I liked their choices – some of which I’d never heard before: Kennedy’s “We choose to go to the moon” speech, a homage to Valentina Tereshkova (the first woman in space), Apollo 8’s “see you on the flip side” (the earthrise photo taken by Apollo 8 is still one of my favourites), and the tense interactions of the ground and flight teams in the final seconds of Apollo 11’s descent to landing (including handling the 1202 and 1201 errors!).
All heady stuff and well worth a listen.
http://oduinn.com/blog/2015/03/22/the-race-for-space-by-public-service-broadcasting/
|
Chris Pearce: Replacing Lenovo optical drive with second hard drive: The Lenovo adapter is disappointing |
[Image: Lenovo Serial ATA Hard Drive Bay Adapter installed in a Lenovo T510]
[Image: Lenovo hard drive cover]
[Image: Lenovo hard drive cover encasing a hard drive]
http://blog.pearce.org.nz/2012/07/replacing-lenovo-optical-drive-with.html
|
Tantek Celik: Dublin Core Application Profiles — A Brief Dialogue |
IndieWebCamp Cambridge 2015 is over. Having finished their ice cream and sorbet while sitting on a couch at Toscanini’s watching it snow, the topics of sameAs, reuse, and general semantics lead to a mention of Dublin Core Application Profiles.
Dublin Core Application Profiles could be useful for a conceptual basis for metadata interoperation.
(Yahoos for dublin core application profiles, clicks first result)
Dublin Core Application Profile Guidelines (SUPERSEDED, SEE Guidelines for Dublin Core Application Profiles)
Kind of like how The Judean People’s Front was superseded by The People’s Front of Judea?
(nervous laugh)
Guidelines for Dublin Core Application Profiles
Replaces: http://dublincore.org/documents/2008/11/03/profile-guidelines/
Hmm. (clicks back)
Dublin Core Application Profile Guidelines
Is Replaced By: Not applicable, wait, isn’t that supposed to be an inverse relationship?
I’m used to this shit.
(nods, clicks forward, starts scrolling, reading)
We decide that the Library of Congress Subject Headings (LCSH) meet our needs. - I’m not sure the rest of the world would agree.
No surprises there.
The person has a name, but we want to record the forename and family name separately rather than as a single string. DCMI Metadata Terms has no such properties, so we will take the properties foaf:firstName and foaf:family_name
Wait what? Not "given-name" and "family-name"? Nor "first-name" and "last-name" but "firstName" and "family_name"?!?
Clearly it wasn’t proofread.
But it’s in the following table too. foaf:firstName / foaf:family_name
At least it’s internally consistent.
Oh, this is really depressing.
Did they even read the FOAF spec or did they just hear a rumour?
(opens text editor)
http://tantek.com/2015/079/b1/dublin-core-application-profiles
|
Air Mozilla: Webdev Beer and Tell: March 2015 |
Web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.
|
Kim Moir: Scaling Yosemite |
[Image: Yosemite Valley - Tunnel View Sunrise by ©jeffkrause, Creative Commons by-nc-sa 2.0]
[Image: Apple Pi by ©apionid, Creative Commons by-nc-sa 2.0]
http://relengofthenerds.blogspot.com/2015/03/scaling-yosemite.html
|
Emma Irwin: P2PU Course in a Box & Mozilla Community Education |
Last year I created my first course on the P2PU platform titled ‘Hacking Open Source Participation’, and through that fantastic experience stumbled across a newer P2PU project called Course in a Box. Built on the Jekyll blogging software, Course in a Box makes it easy to create online educational content powered by GitHub Pages.
As awesome as this project is, there were a number of challenges I needed to solve before adopting it for Mozilla’s Community Education Platform:
Jekyll is a blog-aware, static site generator. It uses template and layout files + markdown + CSS to display posts. Course in a Box comes with a top-level category for content called modules, and within those modules is the content – which works beautifully for a single-course purpose.
The challenge is that we need to write education and training materials on a regular basis, and creating multiple Course in a Box(es) would be a maintenance nightmare. What I really needed was a way to build multiple courses under one or more topics vs. the ‘one course’ model. To do that, we needed to build out a hierarchy of content.
Visualize the menu moving from a list of course modules to a list of course topics.
So Marketpulse, DevRel (for example) are course topics. Topics are followed by courses, which then contain modules.
On the technical side, I added a new variable called submodules to the courses.yml data file.
Submodules are prefixed with the topic they belong ‘under’, for example: reps_mentor_training is a module in the topic reps. This is also how module folders are named:
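As a rough sketch (the keys other than submodules, and the module names other than reps_mentor_training, are hypothetical illustrations rather than copies of the actual courses.yml), the data file and matching folders could look something like this:

# courses.yml (illustrative sketch)
modules:
  - reps                      # a topic; can also carry landing-page content
  - marketpulse
submodules:
  - reps_mentor_training      # a course module under the "reps" topic
  - marketpulse_ffos_basics   # hypothetical course module under "marketpulse"

# module folders are named the same way
reps/
reps_mentor_training/
marketpulse/
marketpulse_ffos_basics/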
Using this method of prefixing modules with topics, it was super-simple to create a dropdown menu.
As far as Jekyll is concerned, these are all still ‘modules’, which means that even top level topics can have content associated. This works great for a ‘landing page’ type of introduction to a topic.
As mentioned, Jekyll is a blogging platform, so there’s no depth or usability designed into the content architecture, and this is a problem for our goal of writing modular curriculum. I wanted to make it possible to reuse curriculum across not only our instance of Course in a Box, but other instances across Mozilla as well.
I created a separate repository for community curriculum and made this a git submodule in the _includes folder of Course in a Box.
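For reference, wiring a curriculum repository in as a submodule is roughly the following (the repository URL here is a placeholder, not the actual Mozilla one):

# run from the root of the Course in a Box checkout
git submodule add https://github.com/example/community-curriculum.git _includes/community_curriculum
git commit -m "Add community curriculum as a submodule in _includes"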
With this submodule & Jekyll’s include() function, I was easily able to reference our modular content from a post:
{% include community_curriculum/market_pulse/FFOS/en/introduction.md %}
The only drawback is that Jekyll expects all content referenced with include() to be in a specific folder – and so having content in with design files is – gah! But I can live with it.
And of course we can do this for multiple repositories if we need. By using a submodule we can stick to certain versions/releases of curriculum if needed, as sketched below. Additionally, this makes it easier for contributors to focus on ‘just the content’ (and not get lost in Jekyll code) when they are forking and helping improve curriculum.
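Pinning to a specific curriculum release would then just be a matter of checking out that version inside the submodule and committing the updated pointer (the tag name below is hypothetical):

cd _includes/community_curriculum
git checkout v1.0    # hypothetical curriculum release tag
cd ../..
git add _includes/community_curriculum
git commit -m "Pin community curriculum to v1.0"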
I’m thinking about the bigger picture of curriculum-sharing, in large part thanks to conversations with the amazing Laura Hilliger, about how we can both share and remix curriculum across more than one instance of Course in a Box. The challenge is with remixed curriculum, which is essentially a new version – and whether it should ‘live’ in a different place than the original repository fork.
My current thinking is that each Course in a Box instance should have its own curriculum repository, included as a git submodule, AND any other submodules needed but not unique to the platform. This repo will contain all curriculum unique to that instance, including remixed versions of content from other repositories. (IMHO) Remixed content should not live in the original fork, as you risk becoming increasingly out of sync with the original.
So that’s where I am right now, welcoming feedback & suggestions on our Mozilla Community Education platform (with gratitude to P2PU for making it possible).
http://tiptoes.ca/p2pu-course-in-a-box-mozilla-community-education/
|
Air Mozilla: Webmaker Demos March 20 2015 |
Webmaker Demos March 20 2015
|
Doug Belshaw: Web Literacy Map v1.5 |
I’m delighted to announce that, as a result of a process that started back in late August 2014, the Mozilla community has defined the skills and competencies that make up v1.5 of the Web Literacy Map.
Visual design work will be forthcoming with the launch of teach.webmaker.org, but I wanted to share the list of skills and competencies as soon as possible:
Reading the web
Using software tools to browse the web
Understanding the web ecosystem and Internet stack
Locating information, people and resources via the web
Critically evaluating information found on the web
Keeping systems, identities, and content safe
Writing the web
Creating and curating content for the web
Modifying existing web resources to create something new
Enhancing visual aesthetics and user experiences
Creating interactive experiences on the web
Communicating in a universally-recognisable way
Participating on the web
Providing access to web resources
Creating web resources with others
Getting involved in web communities and understanding their practices
Examining the consequences of sharing data online
Helping to keep the web democratic and universally accessible
Thanks goes to the dedicated Mozilla contributors who steadfastly worked on this over the last few months. They’re listed here. We salute you!
Any glaring errors? Typos? Let us know! You can file an issue on GitHub.
Questions? Comments? Try and put them in the GitHub repo, but you can also grab me on Twitter (@dajbelshaw) or by email (doug@mozillafoundation.org).
|
Michael Kaply: CCK2 2.0.21 released |
I've released a new version of the CCK2. New features include:
Bugs fixed include:
If you find bugs, please report them at cck2.freshdesk.com.
|
Mozilla Reps Community: Reps Weekly Call – March 19th 2015 |
Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.
Don’t forget to comment about this call on Discourse and we hope to see you next week!
https://blog.mozilla.org/mozillareps/2015/03/20/reps-weekly-call-march-19th-2015/
|
Gregory Szorc: New High Scores for hg.mozilla.org |
It's been a rough week.
The very short summary of events this week is that the release automation for both Firefox and Firefox OS has been performing a denial of service attack against hg.mozilla.org.
On the face of it, this is nothing new. The release automation is by far the top consumer of hg.mozilla.org data, requesting several terabytes per day via several million HTTP requests from thousands of machines in multiple data centers. The very nature of their existence makes them a significant denial of service threat.
Lots of things went wrong this week. While a post mortem will shed light on them, many fall under the umbrella of “release automation was making more requests than it should have, and was doing so in a way that both increased the chances of an outage occurring and increased the chances of a prolonged outage.” This resulted in the hg.mozilla.org servers working harder than they ever have. As a result, we have some new high scores to share.
On UTC day March 19, hg.mozilla.org transferred 7.4 TB of data. This is a significant increase from the ~4 TB we expect on a typical weekday. (Even more significant when you consider that most load is generated during peak hours.)
During the 1300 UTC hour of March 17, the cluster received 1,363,628 HTTP requests. No HTTP 503 Service Not Available errors were encountered in that window! 300,000 to 400,000 requests per hour is typical.
During the 0800 UTC hour of March 19, the cluster transferred 776 GB of repository data. That comes out to at least 1.725 Gbps on average (I didn't calculate TCP and other overhead). Anything greater than 250 GB per hour is not very common. No HTTP 503 errors were served from the origin servers during this hour!
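As a rough back-of-the-envelope check (treating GB as decimal gigabytes and ignoring TCP and HTTP overhead):

776 GB/hour × 8 bits/byte ÷ 3600 s/hour ≈ 1.72 Gbps
1,363,628 requests/hour ÷ 3600 s/hour ≈ 379 requests/second during the request peak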
We encountered many periods where hg.mozilla.org was operating at more than twice its normal and expected operating capacity and it was able to handle the load just fine. As a server operator, I'm proud of this. The servers were provisioned beyond what is normally needed of them and it took a truly exceptional event (or two) to bring the service down. This is generally a good way to do hosted services (you rarely want to be barely provisioned because you fall over at the slightest change, and you don't want to be grossly over-provisioned because you are wasting money on idle resources).
Unfortunately, the hg.mozilla.org service did fall over. Multiple times, in fact. There is room to improve. As proud as I am that the service operated well beyond its expected limits, I can't help but feel ashamed that it did eventually cave in under such extreme load and that people are probably making under-informed general assumptions like “Mercurial can't scale.” The simple fact of the matter is that clients cumulatively generated an exceptional amount of traffic to hg.mozilla.org this week. All servers have capacity limits. And this week we encountered the limit for the current configuration of hg.mozilla.org. Cause and effect.
http://gregoryszorc.com/blog/2015/03/19/new-high-scores-for-hg.mozilla.org
|
Daniel Stenberg: curl, 17 years old today |
Today we celebrate the fact that it is exactly 17 years since the first public release of curl. I have always been the lead developer and maintainer of the project.
When I released that first version in the spring of 1998, we had only a handful of users and a handful of contributors. curl was just a little tool and we were still a few years out before libcurl would become a thing of its own.
The tool we had been working on for a while was still called urlget in the beginning of 1998, but as we had just recently added FTP upload capabilities that name no longer fit, and I decided cURL would be more suitable. I picked ‘cURL’ because the word contains URL and already then the tool worked primarily with URLs, and I thought that it was fun to partly make it a real English word “curl” but also that you could pronounce it “see URL” as the tool would display the contents of a URL.
Much later, someone (I forget who) came up with the “backronym” Curl URL Request Library which of course is totally awesome.
17 years are 6209 days. During this time we’ve done more than 150 public releases containing more than 2600 bug fixes!
We started out GPL licensed, switched to MPL and then landed in MIT. We started out using RCS for version control, switched to CVS and then git. But it has stayed written in good old C the entire time.
The term “Open Source” was coined in 1998 when the Open Source Initiative was started just the month before curl was born, preceded by only a few days by the announcement from Netscape that they would free their browser code and make an open browser.
We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same so we better just move with the rest of the world. These days we’re on github a lot. Who knows how long that will last…
We have grown to support a ridiculous amount of protocols and curl can be built to run on virtually every modern operating system and CPU architecture.
The list of helpful souls who have contributed to making curl into what it is now has grown at a steady pace all through the years, and it now holds more than 1200 names.
Employments
In 1998, I was employed by a company named Frontec Tekniksystem. I would later leave that company, and today there’s nothing left in Sweden using that name, as it was sold and most employees later fled to other places. After Frontec I joined Contactor for many years until I started working for my own company, Haxx (which we had started on the side many years before that), during 2009. Today, I am employed by my fourth company during curl’s lifetime: Mozilla. All through this project’s lifetime, I’ve kept my work situation separate and I believe I haven’t allowed it to disturb our project too much. Mozilla is however the first one that actually allows me to spend a part of my time on curl and still get paid for it!
The Netscape announcement which was made 2 months before curl was born later became Mozilla and the Firefox browser. Where I work now…
Future
I’m not one of those who spend time gazing toward the horizon dreaming of future grandness and making up plans on how to get there. I work on stuff right now to make things work tomorrow. I have no idea what we’ll do and work on a year from now. I know a bunch of things I want to work on next, but I’m not sure I’ll ever get to them, or whether they will actually ship, or if they perhaps will be replaced by other things in that list before I get to them.
The world, the Internet and transfers are all constantly changing and we’re adapting. No long-term dreams other than sticking to the very simple and single plan: we do file-oriented internet transfers using application layer protocols.
Rough estimates say we may have a billion users already. Chances are, if things don’t change too drastically without us being able to keep up, that we will have even more in the future.
It has to feel good, right?
I will of course point out that I did not take curl to this point on my own, but that aside, the ego boost this level of success brings is beyond imagination. Thinking about how my code has ended up in so many places, and is driving so many little pieces of modern network technology, is truly mind-boggling. When I specifically sit down or get a reason to think about it, at least.
Most of the days, however, I tear my hair when fixing bugs, or I try to rephrase my emails to not sound old and bitter (even though I can very well be that) when I once again try to explain things to users who can be extremely unfriendly and whiny. I spend late evenings on curl when my wife and kids are asleep. I escape my family and rob them of my company to improve curl even on weekends and vacations. Alone in the dark (mostly) with my text editor and debugger.
There’s no glory and there’s no eternal bright light shining down on me. I have not climbed up onto a level where I have a special status. I’m still the same old me, hacking away on code for the project I like and that I want to be as good as possible. Obviously I love working on curl so much I’ve been doing it for over seventeen years already and I don’t plan on stopping.
Celebrations!
Yeps. I’ll get myself an extra drink tonight and I hope you’ll join me. But only one, we’ll get back to work again afterward. There are bugs to fix, tests to write and features to add. Join in the fun! My backlog is only growing…
http://daniel.haxx.se/blog/2015/03/20/curl-17-years-old-today/
|
Avi Halachmi: Firefox e10s Performance on Talos |
TL;DR Talos runs performance tests on Firefox e10s on mozilla-central, but not yet on try-server. OS X still doesn’t work. e10s results are similar overall, with a notable scroll performance improvement on Windows and Linux, and a notable WebGL regression on Windows.
Electrolysis, or e10s, is a Firefox project whose goal is to spread the work of browsing the web over multiple processes. The main initial goal is to separate the UI from web content and reduce negative effects one could have over the other.
e10s is already enabled by default on Firefox Nightly builds, and tabs which run on a different process than the UI are marked with an underline at the tab’s title.
While currently the e10s team’s main focus is correctness more than performance (one bug list and another), we can start collecting performance data and understand roughly where we stand.
jmaher, wlach and myself worked to make Talos run well in e10s Firefox and provide meaningful results. The Talos harness and tests now run well on Windows and Linux, while OS X should be handled shortly (bug 1124728). Session restore tests are still not working with e10s (bug 1098357).
Talos e10s tests run by default on m-c pushes, though Treeherder still hides the e10s results (they can be unhidden from the top right corner of the Treeherder job page).
To compare e10s Talos results with non-e10s we use compare.py, a script which is available in the Talos repository. We’ve improved it recently to make such comparisons more useful. It’s also possible to use the compare-talos web tool.
Here are some numbers on Windows 7 and Ubuntu 32 comparing e10s to non-e10s Talos results of a recent build using compare.py (the output below has been made more readable but the numbers have not been modified).
At the beginning of each line:
+ means that e10s is better.
- means that e10s is worse.
The change % value simply compares the numbers on both sides. For most tests, raw numbers are lower-is-better and therefore a negative percentage means that e10s is better. Tests where higher-is-better are marked with an asterisk * near the percentage value (and for these values a positive percentage means that e10s is better).
Descriptions of all Talos tests and what their numbers mean.
$ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Win7 --master-revision 42afc7ef5ccb
Windows 7 [ non-e10s ] [ e10s ]
[ results ] change % [ results ]
- tresize 15.1 [ +1.7%] 15.4
- kraken 1529.3 [ +3.9%] 1589.3
+ v8_7 17798.4 [ +1.6%]* 18080.1
+ dromaeo_css 5815.2 [ +3.7%]* 6033.2
- dromaeo_dom 1310.6 [ -0.5%]* 1304.5
+ a11yr 178.7 [ -0.2%] 178.5
++ ts_paint 797.7 [ -47.8%] 416.3
+ tpaint 155.3 [ -4.2%] 148.8
++ tsvgr_opacity 228.2 [ -56.5%] 99.2
- tp5o 225.4 [ +5.3%] 237.3
+ tart 8.6 [ -1.0%] 8.5
+ tcanvasmark 5696.9 [ +0.6%]* 5732.0
++ tsvgx 199.1 [ -24.7%] 149.8
+ tscrollx 3.0 [ -0.2%] 3.0
--- glterrain 5.1 [+268.9%] 18.9
+ cart 53.5 [ -1.2%] 52.8
++ tp5o_scroll 3.4 [ -13.0%] 3.0
$ python compare.py --compare-e10s --rev 42afc7ef5ccb --pgo --verbose --branch Firefox --platform Linux --master-revision 42afc7ef5ccb
Ubuntu 32 [ non-e10s ] [ e10s ]
[ results ] change [ results ]
++ tresize 17.2 [ -25.1%] 12.9
- kraken 1571.8 [ +2.2%] 1606.6
+ v8_7 19309.3 [ +0.5%]* 19399.8
+ dromaeo_css 5646.3 [ +3.9%]* 5866.8
+ dromaeo_dom 1129.1 [ +3.9%]* 1173.0
- a11yr 241.5 [ +5.0%] 253.5
++ ts_paint 876.3 [ -50.6%] 432.6
- tpaint 197.4 [ +5.2%] 207.6
++ tsvgr_opacity 218.3 [ -60.6%] 86.0
-- tp5o 269.2 [ +21.8%] 328.0
-- tart 6.2 [ +13.9%] 7.1
-- tcanvasmark 8153.4 [ -15.6%]* 6877.7
-- tsvgx 580.8 [ +10.2%] 639.7
++ tscrollx 9.1 [ -16.5%] 7.6
+ glterrain 22.6 [ -1.4%] 22.3
- cart 42.0 [ +6.5%] 44.7
++ tp5o_scroll 8.8 [ -12.4%] 7.7
For the most part, the Talos scores are comparable with a few improvements and a few regressions - most of them relatively small. Windows e10s results fare a bit better than Linux results.
Overall, that’s a great starting point for e10s!
A noticeable improvement on both platforms is tp5o-scroll. This test scrolls the top-50 Alexa pages and measures how fast it can iterate with vsync disabled (ASAP mode).
A noticeable regression on Windows is WebGL (glterrain) - Firefox with e10s performs roughly 3x slower than non-e10s Firefox - bug 1028859 (bug 1144906 should also help for Windows).
A supposedly notable improvement is the tsvg-opacity test; however, this test is sometimes too sensitive to underlying platform changes (regardless of e10s), and we should probably keep an eye on it (yet again, e.g. bug 1027481).
We don’t have bugs filed yet for most Talos e10s regressions since we don’t have systems in place to alert us of them, and it’s still not trivial for developers to obtain e10s test results (e10s doesn’t run on try-server yet, and on m-c it also doesn’t run on every batch of pushes). See bug 1144120.
Snappiness is something that both the performance team and the e10s team care deeply about, and so we’ll be working closely together when it comes time to focus on making multi-process Firefox zippy.
Thanks to vladan and mconley for their valuable comments.
http://avih.github.com/blog/2015/03/19/firefox-e10s-performance-on-talos/
|
Mike Conley: The Joy of Coding (Episode 6): Plugins! |
In this episode, I took the feedback of my audience, and did a bit of code review, but also a little bit of work on a bug. Specifically, I was figuring out the relationship between NPAPI plugins and Gecko Media Plugins, and how to crash the latter type (which is necessary for me in order to work on the crash report submission UI).
A minor goof – for the first few minutes, I forgot to switch my camera to my desktop, so you get prolonged exposure to my mug as I figure out how I’m going to review a patch. I eventually figured it out though. Phew!
References:
Bug 1134222 – [e10s] “Save Link As…”/”Bookmark This Link” in remote browser causes unsafe CPOW usage warning
Bug 1110887 – With e10s, plugin crash submit UI is broken – Notes
http://mikeconley.ca/blog/2015/03/19/the-joy-of-coding-episode-6-plugins/
|
Monica Chew: How do I turn on Tracking Protection? Let me count the ways. |
http://monica-at-mozilla.blogspot.com/2015/03/how-do-i-turn-on-tracking-protection.html
|
Cameron Kaiser: IonPower now beyond "doesn't suck" stage |
TenFourFox 31.6 is on schedule for March 31.
http://tenfourfox.blogspot.com/2015/03/ionpower-now-beyond-doesnt-suck-stage.html
|