Gregory Szorc: Append I/O Performance on Windows |
A few weeks ago, some coworkers were complaining about the relative performance of Mercurial cloning on Windows. I investigated on my brand new i7-6700K Windows 10 desktop machine and sure enough they were correct: cloning times on Windows were several minutes slower than Linux on the same hardware. What gives?
I performed a few clones with Python under a profiler. It pointed to a potential slowdown in file I/O. I wanted more details so I fired up Sysinternals Process Monitor (strace for Windows) and captured data for a clone.
As I was looking at the raw system calls related to I/O, something immediately popped out: CloseFile() operations were frequently taking 1-5 milliseconds whereas other operations like opening, reading, and writing files only took 1-5 microseconds. That's a 1000x difference!
I wrote a custom Python script to analyze an export of Process Monitor's data. Sure enough, it said we were spending hundreds of seconds in CloseFile() operations (it was being called a few hundred thousand times). I posted the findings to some mailing lists. Follow-ups on Mozilla's dev-platform list pointed me to an old MSDN blog post that documents behavior similar to what I was seeing.
Long story short, closing file handles that have been appended to is slow on Windows. This is apparently due to an implementation detail of NTFS. Writing to a file in place is fine and only takes microseconds for the open, write, and close. But if you append a file, closing the associated file handle is going to take a few milliseconds. Even if you are using Overlapped I/O (async I/O on Windows), the CloseHandle() call to close the file handle blocks the calling thread! Seriously.
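A minimal Python sketch of the kind of measurement that exposes this behavior (the temp-file path, buffer size, and run count are arbitrary; on Linux or OS X the close will be cheap, while on NTFS the close after an append is where the milliseconds go):

```python
import os
import tempfile
import time

def time_close_after_append(path, data=b"x" * 4096, runs=100):
    """Append to a file `runs` times and return the average time
    spent purely in os.close() after each append."""
    total = 0.0
    for _ in range(runs):
        fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT)
        os.write(fd, data)
        start = time.perf_counter()
        os.close(fd)  # on NTFS this is the call that takes milliseconds
        total += time.perf_counter() - start
    return total / runs

path = os.path.join(tempfile.gettempdir(), "append-close-test.bin")
avg = time_close_after_append(path)
print("average close time: %.6f seconds" % avg)
os.remove(path)
```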
This behavior is in stark contrast to Linux and OS X, where system I/O functions take microseconds (assuming your I/O subsystem can keep up).
There are two ways to work around this issue: reduce the number of file close operations after appending (for example by reusing file handles), or move the closes to separate threads so they don't block the main thread.
Armed with this knowledge, I dug into the guts of Mercurial and proceeded to write a number of patches that drastically reduced the amount of file I/O system calls during clone and pull operations. While I intend to write a blog post with the full details, cloning the Firefox repository with Mercurial 3.6 on Windows is now several minutes faster. Pretty much all of this is due to reducing the number of file close operations by aggressively reusing file handles.
I also experimented with moving file close operations to a separate thread on Windows. While this change didn't make it into Mercurial 3.6, the results were very promising. Even on Python (which doesn't have real asynchronous threads due to the GIL), moving file closing to a background thread freed up the main thread to do the CPU heavy work of processing data. This made clones several minutes faster. (Python does release the GIL when performing an I/O system call.) Furthermore, simply creating a dedicated thread for closing file handles made Mercurial faster than 7-zip at writing tens of thousands of files from an uncompressed tar archive. (I'm not going to post the time for tar on Windows because it is embarrassing.) That's a Python process on Windows faster than a native executable that is lauded for its speed (7-zip). Just by offloading file closing to a single separate thread. Crazy.
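This isn't Mercurial's actual patch, but the idea can be sketched in a few lines of Python: a daemon thread drains a queue of file objects and performs the blocking close() calls off the main thread (the class and method names here are made up for illustration):

```python
import queue
import threading

class BackgroundCloser:
    """Hand file objects to a dedicated thread so the (potentially
    slow, blocking) close() calls happen off the main thread."""

    _SENTINEL = object()

    def __init__(self):
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            f = self._queue.get()
            if f is self._SENTINEL:
                break
            f.close()  # the expensive part, now overlapped with main-thread work

    def close(self, f):
        # Returns immediately; the actual close happens in the background.
        self._queue.put(f)

    def join(self):
        # Flush all pending closes and stop the worker.
        self._queue.put(self._SENTINEL)
        self._thread.join()
```

While the worker eats the millisecond-scale closes, the main thread is free to keep parsing and decompressing data — which is exactly why this helps even under the GIL, since close() releases it.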
I can optimize file closing in Mercurial all I want. However, Mercurial's storage model relies on several files. For the Firefox repository, we have to write ~225,000 files during clone. Assuming 1ms per file close (which is generous), that's 225s (or 3:45) wall time performing file closes. That's not going to scale. I've already started experimenting with alternative storage modes that initially use 1-6 files. This should enable Mercurial clones to run at over 100 MB/s (yes, Python and Windows can do I/O that quickly if you are smart about things).
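The back-of-the-envelope arithmetic above is worth writing down, because it shows why no amount of micro-optimization saves a many-files storage model:

```python
def close_overhead_seconds(num_files, close_ms=1.0):
    """Wall time spent purely in file closes, assuming a fixed
    per-close cost (1 ms is the generous figure from above)."""
    return num_files * close_ms / 1000.0

# ~225,000 files written during a Firefox clone
print(close_overhead_seconds(225_000))  # 225.0 seconds, i.e. 3:45
```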
My primary takeaway is that creating/appending to thousands of files is slow on Windows and should be addressed at the architecture level by not requiring thousands of files and at the implementation level by minimizing the number of file close operations after write. If you absolutely must create/append to thousands of files, use multiple threads for at least closing file handles.
My secondary takeaway is that Sysinternals Process Monitor is amazing. I used it against Firefox and immediately found performance concerns. It can be extremely eye opening to see how your higher-level code is translated into function calls into your operating system and where the performance hot spots are or aren't at the OS level.
http://gregoryszorc.com/blog/2015/10/22/append-i/o-performance-on-windows
|
Karl Dubost: Interface Mockup and User Reality |
I was reading a thread on the Firefox OS developer mailing list. A mail from Przemek Abratowski showed a mockup of a possible interface for the back button in Firefox OS (don't look for the mail in the mail archive; it is missing, and I have already reported the issue to Mozilla). On the left side is what was posted on the list; on the right side, my uglification of the content.
It's cool to show beautiful things to make a good impression, but if we want to be closer to real user interaction, we should probably test UIs in different contexts: sites which are not mobile friendly, sites which are ugly, sites with Web compatibility issues. I have the feeling that would help us design UIs that are not only beautiful but also more resilient when the user is in a frustrated or uncomfortable mood.
Just a quick thought.
Otsukare!
|
Jonathan Griffin: Engineering Productivity Update, Oct 21, 2015 |
It’s Q4, and at Mozilla that means it’s planning season. There’s a lot of work happening to define a Vision, Strategy and Roadmap for all of the projects that Engineering Productivity is working on; I’ll share progress on that over the next couple of updates.
Build System: Work is starting on a comprehensive revamp of the build system, which should make it modern, fast, and flexible. A few bits of this are underway (like migration of remaining Makefiles to moz.build); more substantial progress is being planned for Q1 and the rest of 2016.
Bugzilla: Duo 2FA support is coming soon! The necessary Bugzilla changes have landed; we're just waiting for some licensing details to be sorted out.
Treeherder: Improvements have been made to the way that sheriffs can backfill jobs in order to bisect a regression. Meanwhile, lots of work continues on backend and frontend support for automatic starring.
Perfherder and Performance Testing: Some optimizations were made to Perfherder which have made it more performant – no one wants a slow performance monitoring dashboard! jmaher and bc are getting close to being able to run Talos on real devices via Autophone; some experimental runs are already showing up on Treeherder.
MozReview and Autoland: It’s no longer necessary to have an LDAP account in order to push commits to MozReview; all that’s needed is a Bugzilla account. This opens the door to contributors using the system. Testing of Autoland is underway on MozReview’s dev instance – expect it to be available in production soon.
TaskCluster Migration: OSX cross-compiled builds are now running in TaskCluster and appearing in Treeherder as Tier-2 jobs, for debug and static checking. The TC static checking build will likely become the official build soon (and the buildbot build retired); the debug build won't become official until work is done to enable existing test jobs to consume the TC build.
Work is progressing on enabling TaskCluster test jobs for linux64-debug; our goal is to have these all running side-by-side the buildbot jobs this quarter, so we can compare failure rates before turning off the corresponding buildbot jobs in Q1. Moving these jobs to TaskCluster enables us to chunk them to a much greater degree, which will offer some additional flexibility in automation and improve end-to-end times for these tests significantly.
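Chunking here just means splitting one long test job into N shorter ones that run in parallel. A round-robin split like the following (a hypothetical helper, not the actual TaskCluster code) keeps the chunks balanced:

```python
def chunkify(tests, total_chunks, this_chunk):
    """Return the tests belonging to chunk `this_chunk` (1-based)
    out of `total_chunks`, distributed round-robin."""
    if not 1 <= this_chunk <= total_chunks:
        raise ValueError("chunk index out of range")
    return tests[this_chunk - 1::total_chunks]

tests = ["test_%d" % i for i in range(10)]
# The union of all chunks covers every test exactly once.
chunks = [chunkify(tests, 3, i) for i in (1, 2, 3)]
```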
Mobile Automation: All Android test suites that show in Treeherder can now be run easily using mach.
Dev Workflow: It’s now easier to create new web-platform-tests, thanks to a new |mach web-platform-tests-create| command.
e10s Support: web-platform-tests are now running in e10s mode on linux and OSX platforms. We want to turn these and other tests in e10s mode on for Windows, but have hardware capacity problems. Discussions are underway on how to resolve this in the short-term; longer-term plans include an increase in hardware capacity.
Test Harnesses: run-by-dir is now applied to all mochitest jobs on desktop. This improves test isolation and paves the way for chunking changes which we will use to improve end-to-end times and make bisection turnaround faster. Structured logging has been rolled out to Android reftests; Firefox OS reftests still to come.
ActiveData: Work is in progress to build out a model of our test jobs running in CI, so that we can identify pieces of job setup and teardown which are too slow and targets of possible optimization, and so that we can begin to predict the effects of changes to jobs and hardware capacities.
hg.mozilla.org: Mercurial 3.6 will have built-in support for seeding clones from pre-generated bundle files, and will have improved performance for cloning, especially on Windows.
Marionette and WebDriver: Message sequencing is being added to Marionette; this will help prevent synchronization issues where the client mixes up responses. Client-side work is being done in both Python and node.js. ato wrote an article making a case against visibility checks in WebDriver.
https://jagriffin.wordpress.com/2015/10/21/engineering-productivity-update-oct-21-2015/
|
Daniel Pocock: A mission statement for free real-time communications |
At FOSDEM 2013, the RTC panel in the main track used the tag line "Can we finally replace Skype, Viber, Twitter and Facebook?"
Does replacing something else have the right ring to it though? Or does such a goal create a negative impression? Even worse, does the term Skype replacement fall short of being a mission statement that gives people direction and sets expectations of what we would actually like to replace it with?
Let's consider what a positive statement might look like:
Making it as easy as possible to make calls to other people and to receive calls from other people for somebody who chooses only to use genuinely Free Software, open standards, a free choice of service providers and a credible standard of privacy.
If you agree with this or if you feel you can propose a more precise statement, please come and share your thoughts on the Free RTC email list.
The value of a mission statement should not be underestimated. With the right mission statement, it should be possible to envision what the future will look like if we succeed and also if we don't. With the vision of success in mind, it should be easier for developers and the wider free software community to identify the steps that must be taken to make progress.
|
Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 31 |
Watch mconley livehack on Firefox Desktop bugs!
https://air.mozilla.org/the-joy-of-coding-mconley-livehacks-on-firefox-episode-31/
|
Yunier José Sosa Vázquez: Alba González: "In the community there is a lot to learn and share" |
Once again we have the opportunity to learn more about the members of our community, and on this occasion the guest is a young Spanish woman: Alba González, or @Antiparticule as we all know her.
Question: What is your name and what do you do?
My name is Alba González Fuentes (although I usually sign as Alba G. Fuentes) and… I multitask. The job that takes up most of my day is teaching music (violin, piano, music theory, history…) at a school in Gavà, Barcelona, but I am also a translator and English teacher, and I devote time to musicological research.
Q: Why did you decide to contribute to Mozilla?
A: I decided to contribute to Mozilla for three main reasons. One, I learned that it was possible to contribute by translating and localizing, and being a translator, that appealed to me a lot. Also, I have always liked technology and saw this as an opportunity to learn. Finally, I believe the Web is a very powerful tool. It has wonderful things and horrible things, but it is in our hands to strengthen the former and to make sure everyone has access to them.
Q: What role do you currently play in the community?
A: Right now I co-coordinate the localization area, together with the great Daniel Áñez. I am also responsible or co-responsible for several localization projects, essentially those linked to the Community Engagement area (Firefox+Tú, social media, Firefox Friends…). And I am still a localizer and reviewer ;-).
Q: What do you value most, or what is the most positive thing, about Mozilla and the community?
A: What I value most is the people. Having the opportunity to meet people with whom to share, contrast, and discuss ideas, and with whom to try to contribute to something, to build something that can benefit us all.
Q: What does Mozilla / the community give you? What do you think about women within Mozilla?
A: Mozilla has given me many things. Professionally, because I have had the opportunity to work on something related to my profession and to get in touch with excellent people from my own field; and personally, because of everything it means to collaborate with other people from around the world and contribute to something you believe can be good for everyone.
The truth is that I don't have a deep knowledge of the situation of women at Mozilla, but I think it is an excellent opportunity to continue (and I say continue because I believe we have been doing it since the beginning of time) demonstrating our worth in so many diverse fields.
Q: What do you think Mozilla will be like in the future?
A: Honestly, I don't usually think much about how things will turn out. I am one of those people who believe things can take a sharp turn from one moment to the next. What I do know is that I hope it keeps following its line of Web openness and freedom, and keeps pursuing goals such as web literacy and the fight against the control users suffer from their governments and from big companies.
Q: You are a very busy person and your hobby/work takes up a lot of your time. How do you organize yourself between music and Mozilla?
A: The truth (I have to admit it) is that I don't plan; I am quite chaotic. I set myself goals I want to meet (I am also stubborn, so if I don't meet them I let myself down) and I let myself be carried along a bit by what I feel like doing at each moment. I have always been a person who likes to be permanently active, so I have become used to distributing my time naturally.
Music is my life, really. It is always around me and I devote hours and hours to it every day. But when I was finishing my violin degree I played five or six hours a day, and a part of me burned out from all that. From the environment, from the people, from the way things are done at conservatories. As I said, music is still my life, but I have learned (out of necessity) to approach it all differently.
Q: A few words for people who want to join the community.
A: Joining the community is really worth it: the people are wonderful, the mission and the goals are inspiring, and, on top of that, there is a lot to learn and to share.
Many thanks to Alba for agreeing to the interview.
Source: Mozilla Hispano
http://firefoxmania.uci.cu/alba-gonzalez-en-la-comunidad-hay-mucho-que-aprender-y-compartir/
|
Nick Thomas: Try Server – please use up-to-date code to avoid upload failures |
Today we started serving an important set of directories on ftp.mozilla.org using Amazon S3, more details on that over in the newsgroups. Some configuration changes landed in the tree to make that happen.
Please rebase your try pushes to use revision 0ee21e8d5ca6 or later, currently on mozilla-inbound. Otherwise your builds will fail to upload, which means they won’t run any tests. No fun for anyone.
|
Mozilla Release Management Team: Firefox 42 beta7 to beta8 |
Extension | Occurrences |
java | 9 |
cpp | 7 |
js | 6 |
h | 5 |
c | 4 |
ini | 2 |
in | 2 |
xml | 1 |
txt | 1 |
jsm | 1 |
cfg | 1 |
Module | Occurrences |
mobile | 10 |
nsprpub | 8 |
browser | 7 |
security | 5 |
netwerk | 3 |
gfx | 3 |
dom | 2 |
widget | 1 |
modules | 1 |
media | 1 |
List of changesets:
Mark Finkle | Bug 1214234 - Be explicit about LOAD_URL telemetry from Home Panels. r=liuche, a=sylvestre - 3e94d092cb56 |
Michael Comella | Bug 1201770 - Update google search engine icon. r=margaret, a=sylvestre - df59ca3a3a0a |
Bas Schouten | Bug 1211615: Upload the full texture on the first upload for component alpha textures. r=nical a=sylvestre - 4014e85aec87 |
Wes Kocher | Backed out changeset 4014e85aec87 (Bug 1211615) for build bustage a=backout - d7c8d0af1b08 |
Kai Engert | Bug 1211586, NSPR_4_10_10_RC1 and NSS_3_19_4_RC0, a=sledru - 170d29280d87 |
Carsten Book | Bug 1213979 - h2 paket formats. r=hurley, a=al - ed67ac61d1c0 |
Jan Varga | Bug 1185223 - crash at [@ mozilla::dom::quota::QuotaObject::Release() ]; r=khuey, a=sylvestre - 1153ec762010 |
Jan Varga | Bug 1185223 - Followup build fix for Bug 1185223; r=buildbustage, a=sylvestre - 2d497358081c |
Matthew Noorenberghe | Bug 1209140 - Open a second firstrun tab for Tracking Protection promotion. r=jaws a=sylvestre - 968735b8ea8d |
Sebastian Kaspari | Bug 1213921 - Only check application restrictions to determine whether the user is on a restricted profprofile. r=ally, a=sylvestre - 135164c79784 |
Bas Schouten | Bug 1211615: Upload the full texture on the first upload for component alpha textures. r=nical a=sylvestre - 2c138fbc9513 |
Michael Comella | Bug 1208956 - Only open http* scheme in intent fallback uris. r=nalexander, a=al - 2bfd512a01af |
Valentin Gosu | Bug 1211871 - Backout Bug 1142083 r=mcmanus, a=sylvestre - 32de6f21dd48 |
Aaron Klotz | Bug 1211642: Whitelist test plugin for async plugin init; r=jimm, a=sylvestre - f585fae6c50a |
Nils Ohlmeier [:drno] | Bug 1215616: use base address for server rflx ICE candidates r=bwc, a=sylvestre - c3daaf421fe6 |
Jan-Ivar Bruaroey | Bug 1207784 - skip permission hooks in createOffer when called from hiddenWindow (add-ons). r=mt a=sylvestre - dc552539eb77 |
Jeff Muizelaar | Bug 1194335. Disable partial present on Nvidia hardware. r=bas a=sylvestre - 341d7a3d7320 |
Masayuki Nakano | Bug 1213811 Include TavultesoftKeyman 90 and 80 to the whitelist of the fix of Bug 1208043 r=emk a=sylvestre - c5bd26c10432 |
Matthew Noorenberghe | Bug 1203294 - Disable signon.rememberSignons.visibilityToggle. r=dolske a=sylvestre - bab3ced35371 |
Wes Kocher | Bug 1172627 - Disable test_instance_re-parent.html on beta for post-merge permafail. a=test-only - 8e75fa6f65b7 |
Wes Kocher | Backed out changeset f585fae6c50a (Bug 1211642) for various test failures in plugins a=backout - bff8b3d98f72 |
Kai Engert | Bug 1211586, NSPR_4_10_10_RTM and NSS_3_19_4_RTM, bump requirements in configure.in, a=sledru - a01cadb2a94d |
http://release.mozilla.org/statistics/42/2015/10/21/fx-42-b7-to-b8.html
|
Daniel Stenberg: h2 performance at Velocity NYC |
On Tuesday October 13th 2015 I co-presented a talk at the Velocity conference in NYC together with Ragnar Lönn of Loadimpact. Ragnar is a friend of mine and another Swede.
The presentation was split up in two parts, in which I laid out the foundations of HTTP/2 in the first part, and Ragnar then presented the results of his performance study in the second part.
I think an interesting take away from the study is the following.
Existing sites usually have a lot of resources that need to be downloaded. An average site has around one hundred now, and the number is increasing. Those resources often have dependencies or trigger subsequent transfers. For example, an HTML file gets parsed, then a CSS file is downloaded, and once the CSS is downloaded it gets parsed and the images specified in there are downloaded. It easily gets even more “steps” like that when downloading javascript that triggers more javascript that renders parts of the page, causing more resources to get downloaded.
Nothing new there, right? But when switching a site like that over to HTTP/2, the performance gain will be capped at a certain percentage no matter how much latency you have to the site, because what limits such a site's performance is the time it takes to get to the end of the slowest “dependency chain”. It is less of an issue with HTTP/1.1, since if the resources are from the same site, browsers won't do more than 6 requests in parallel anyway (on the 6 separate TCP connections it'll use).
It becomes evident that in order to make such a site really benefit from HTTP/2, the site would have to be modified ever so slightly so that it would deliver its contents with shorter chains and allow the browsers to get more of the resources earlier, in parallel rather than serially.
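A toy model makes the cap concrete (all numbers here are hypothetical: a 100 ms round trip, 100 resources, HTTP/1.1's 6 parallel connections versus HTTP/2's single multiplexed one):

```python
import math

def http1_floor_ms(num_resources, rtt_ms, max_parallel=6):
    """HTTP/1.1 lower bound: resources are fetched in waves of at
    most 6, so each wave costs roughly one round trip (simplified)."""
    return math.ceil(num_resources / max_parallel) * rtt_ms

def http2_floor_ms(chain_depth, rtt_ms):
    """HTTP/2 lower bound: parallelism is effectively unlimited, so
    only the longest dependency chain matters - one round trip per
    serial step (HTML -> CSS -> image is a chain of depth 3)."""
    return chain_depth * rtt_ms

print(http1_floor_ms(100, 100))  # 1700 ms
print(http2_floor_ms(3, 100))    # 300 ms - shortening the chain,
                                 # not the protocol, lowers this floor
```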
Splitting up a presentation in two parts with two speakers is more difficult than doing it yourself. I think we did a decent job, and we ended the presentation early. That enabled us to answer a lot of questions, and we were actually quite bombarded with them – all relevant and well considered – and I think we managed to bring more to the room thanks to them. A lot of the questions were about HTTP/2 and deployments more generally, though, and not all exactly about the performance study in the presentation.
The audience gave us an average score of 3.74 out of 5. Not too shabby. The room seated 360 persons but it wasn’t completely filled up.
http://daniel.haxx.se/blog/2015/10/21/h2-performance-at-velocity-nyc/
|
Air Mozilla: SF WebVR Meetup: Building the VR Metaverse |
A meetup of the SF WebVR user group.
https://air.mozilla.org/sf-webvr-meetup-building-the-vr-metaverse/
|
Soledad Penades: Migrating to a new laptop (or: Apple-inflicted misery, once again) |
Yesterday I got my new laptop and the technician’s idea was to just migrate all my settings and stuff over from the old one for simplicity, using Mac OS X’s built-in migration assistant.
I actually didn’t want to do this because I liked the notion of a clean slate to get rid of old cruft that I didn’t want anymore, but I thought I would give the migration assistant the benefit of the doubt.
TL;DR: it doesn’t seem to be ready for migrating a laptop that has been given intensive usage and has plenty of small files (think: the contents of node_modules) and big files too (think: screencasts).
The new laptop is one of those ultra light MacBooks with a USB-C connector, so it doesn’t have an Ethernet connector to start with unless you add one via the semidock.
The initial attempt was to migrate data using the wireless network. After three hours and the progress barely changing from “29 hours” to “28 hours” I gave up and started reaching for the Thunderbolt to Ethernet adapters. We stopped the process and set up both computers connected to the same switch with ethernet cables. The estimation was now 4 hours, MUCH BETTER.
I calculated that it would be done at about 20h… so I just kept working with my desktop computer. I had a ton of emails to reply to, so it was OK not to use my normal environment—you can’t use the computer while it’s being migrated.
A little bit before 20h I looked at the screen and saw “3 minutes to finish copying your documents” and I got all stupidly excited. Things were going according to plan! Yay! So I started to get ready to leave the office.
Next time I looked at the screen it said something way less encouraging: “Copying Applications… 2 hours, 30 minutes left”
I was definitely not going to wait until 22:30 hours… or even worse, because the estimation kept going up–2 hours, 40 minutes now, 3 hours… I decided to go home, not without wondering if the developer in this classic XKCD cartoon was working at Apple nowadays:
Urgh.
Today I accidentally slept in (thanks, jetlag) and when I arrived into the office, all full of hope and optimism, I found the screen stuck at “359 hours, 44 minutes left”.
I turned around to Francisco and asked him: “hey, how many days is 359 hours?” He opened up the calculator and quickly found out.
About 14 days.
And 44 minutes, of course.
I gave the migration “assistant” some more benefit of the doubt and went for lunch. When I came back it was still stuck, so it was time to disregard this “assistance” and call rsync into action.
I have most of my stuff in a ~/data directory, so migrating between computers should be easy, by just copying that folder.
Whatever wasn’t, I copied manually. For example, the Google Chrome and Chrome Canary profiles, because I didn’t want to set them up from scratch–you can copy them and keep some of the history without having to sign into Google (some of my profiles just don’t have an associated Google ID, thank you very much). Unfortunately things such as cookies are not preserved, so I need to log into websites again. Urgh, passwords.
cd ~/Library/Application\ Support
mkdir -p Google/Chrome
mkdir -p "Google/Chrome Canary"
rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Application\\\ Support/Google/Chrome/ ./Google/Chrome/
rsync -avz --exclude '.DS_Store' sole@oldcomputer.local:/Users/sole/Library/Application\\\ Support/Google/Chrome\\\ Canary/ ./Google/Chrome\ Canary/
I also copied the Thunderbird profiles. They are in ~/Library/Thunderbird. That way I avoided setting up my email accounts, and also my custom local rules.
I logged into my Firefox account in Nightly and it just synced and picked up my history, bookmarks, saved passwords and stuff, so I didn’t even need to bother copying the Firefox profiles. It’s very smooth! You should try it too.
Note that I did all this copying before even downloading and running the apps, to avoid them creating a default profile on their first start.
While things were copying I had a look at the list of apps I had installed and carefully selected which ones I wanted to actually re-install. Some of them I installed using homebrew, others using the always-awkward, iTunesque in spirit and behaviour, App Store. Of note: XCode has spent the whole afternoon installing.
I also took this chance to install nvm instead of just node stable, so I can experiment with various versions of node. Maybe. I guess. We’ll see if it’s more of a mess than not!
In short, it’s now almost midnight but I’m done. I started copying things at 17h, and had a few breaks to do things such as laundry, dishwasher, tidying up my flat, grocery shopping, and preparing and eating dinner, so let’s imagine it just took me 4 hours to actually copy the data I was interested in.
Moral of the story: rsync all the things. Don’t trust Apple to know better than you.
http://soledadpenades.com/2015/10/21/migrating-to-a-new-laptop-or-apple-inflicted-misery-once-again/
|
Henrik Skupin: Firefox Automation report – Q3 2015 |
It’s time for another Firefox Automation report! It’s incredible how fast a quarter passes by without me finding the time to write reports more often. Hopefully that will change soon – news will be posted in a follow-up blog post.
Ok, so what happened last quarter for our projects.
One of my deliverables in Q3 was to create mozharness scripts for our various tests in the firefox-ui-tests repository, so that our custom runner scripts can be replaced. This gives us a way more stable system and additional features like crash report handling, which are necessary to reach the tier 2 level in Treeherder.
After some refactoring of the Firefox UI tests, scripts for the functional and update tests were needed. But before those could be implemented I had to spend some time refactoring some modules of mozharness to make them easier to configure for non-buildbot jobs. All that worked out fine and finally the entry scripts were written. Something special about them is that they have different customers, so extra configuration files had to be put in place. In detail, it’s us who run the tests in Jenkins for nightly builds, and partly for release builds. On the other side, Release Engineering want to run our update tests on their own hardware when releases have to be tested.
By the end of September all work has been finished. If you are interested in more details feel free to check the tracking bug 1192369.
Our Jenkins instance got lots of updates for various new features and necessary changes. All in all I pushed 27 commits which affected 53 files.
Here a list of the major changes:
Refactoring of the test jobs has been started so that those can be used for mozharness driven firefox-ui-tests later in Q4. The work has not been finished and will be continued in Q4. Especially the refactoring for report submission to Treeherder even for aborted builds will be a large change.
A lot of time had to be spent fixing the update tests for all the changes which came in with the Funsize project of Release Engineering. Due to missing properties in the Mozilla Pulse messages, update tests could no longer be triggered for nightly builds. Therefore the handling of Pulse messages has been completely rewritten to allow handling of similar Pulse messages as sent out by TaskCluster. That work was actually not planned and stole quite some time from other projects.
A separation of functional and remote tests didn’t make that much sense, especially because both types are actually functional tests. As a result they have been merged into the functional tests. You can still run only the remote tests by using --tag remote; similarly, for tests with local testcases, use --tag local.
We stopped running tests for mozilla-esr31 builds because Firefox ESR 31 is no longer supported.
To lower the number of machines we have to maintain and to get closer to what’s being run on Buildbot, we stopped running tests on Ubuntu 14.10. That means we only run on Ubuntu LTS releases from now on. We also stopped tests for OS X 10.8; those nodes will be re-used for OS X 10.11 once it is released.
We experienced Java crashes due to low memory conditions on our Jenkins production master again. This was kind of critical because the server does not get restarted automatically. After some investigation I assumed that the problem was due to the 32-bit architecture of the VM. Given that it has 8 GB of memory, a 64-bit version of Ubuntu would have been the better choice. So we replaced the machine, and so far everything looks fine.
To our complete surprise, we had to release one more bugfix release of Mozmill. This time the framework didn’t work at all due to the enforcement of add-on signing, so Mozmill 2.0.10.2 has been released.
http://www.hskupin.info/2015/10/20/firefox-automation-report-q3-2015/
|
Air Mozilla: Building and shipping software at Facebook |
Facebook has over 1 billion monthly active web users and pushes new versions of its website twice a day. At the same time, Facebook also...
|
Mozilla Security Blog: Continuing to Phase Out SHA-1 Certificates |
In our previous blog post about phasing out certificates with SHA-1 based signature algorithms, we said that we planned to take a few actions with regard to SHA-1 certificates:
We have completed the first two of these steps. We added the security warning to the Web Console in Firefox 38. If you open the Web Console and browse to a website with an SSL certificate that is SHA-1 based or is signed by a SHA-1 based intermediate certificate, you will get the following message in the console:
This site makes use of a SHA-1 Certificate; it’s recommended you use certificates with signature algorithms that use hash functions stronger than SHA-1. [Learn More]
In Firefox 43 we plan to show an overridable “Untrusted Connection” error whenever Firefox encounters a SHA-1 based certificate that has ValidFrom after Jan 1, 2016. This includes the web server certificate as well as any intermediate certificates that it chains up to. Root certificates are trusted by virtue of their inclusion in Firefox, so it does not matter how they are signed. However, it does matter what hash algorithm is used in the intermediate signatures, so the rules about phasing out SHA-1 certificates apply to both the web server certificate and the intermediate certificates that sign it.
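To make the policy above concrete, here is a small sketch of the decision in Python. The dictionary shape, field names, and algorithm identifiers are illustrative assumptions for this example, not Firefox's internal representation: the chain is ordered leaf-first, the last entry is the trusted root (whose own signature is ignored), and a certificate triggers the error only if it is both SHA-1 signed and issued after the cutoff.

```python
from datetime import datetime

# Hypothetical identifiers for SHA-1 based signature algorithms.
SHA1_ALGORITHMS = {"sha1WithRSAEncryption", "ecdsa-with-SHA1"}
CUTOFF = datetime(2016, 1, 1)

def should_show_untrusted_error(chain):
    """Decide whether to show the overridable "Untrusted Connection" error.

    `chain` is a leaf-first list of dicts with "sig_alg" and "valid_from"
    keys; the final entry is the root. Roots are trusted by inclusion,
    so their signatures are skipped.
    """
    return any(
        cert["sig_alg"] in SHA1_ALGORITHMS and cert["valid_from"] > CUTOFF
        for cert in chain[:-1]  # leaf and intermediates only, not the root
    )

chain = [
    {"sig_alg": "sha1WithRSAEncryption",   "valid_from": datetime(2016, 3, 1)},
    {"sig_alg": "sha256WithRSAEncryption", "valid_from": datetime(2014, 1, 1)},
    {"sig_alg": "sha1WithRSAEncryption",   "valid_from": datetime(2010, 1, 1)},  # root: ignored
]
print(should_show_untrusted_error(chain))  # True: SHA-1 leaf issued after the cutoff
```

The same leaf certificate issued before Jan 1, 2016 would not trigger the error, which matches the phased approach described here.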
We are re-evaluating when we should start rejecting all SHA-1 SSL certificates (regardless of when they were issued). As we said before, the current plan is to make this change on January 1, 2017. However, in light of recent attacks on SHA-1, we are also considering the feasibility of having a cut-off date as early as July 1, 2016.
We do not currently plan to display an error if an OCSP response is signed by a SHA-1 certificate. According to section 7.1.3 of version 1.3 of the CA/Browser Forum Baseline Requirements: “CAs MAY continue to sign certificates to verify OCSP responses using SHA1 until 1 January 2017.” Additionally, we do not currently plan to throw an error when SHA-1 S/MIME and client authentication certificates are encountered.
Questions about SHA-1 based certificates should be directed to the mozilla.dev.security.policy forum.
https://blog.mozilla.org/security/2015/10/20/continuing-to-phase-out-sha-1-certificates/
|
Emma Irwin: The Journey Begins! Participation @ #Mozfest 2015 |
“Participation doesn’t just happen, it’s built through great design & great leadership”
In three short weeks, the first of our Global Leadership Events, Mozfest 2015, will be upon us, and with it an increasing buzz of activity as our cohort prepares for and travels to London. Mozfest is an opportunity unlike any other to learn, teach, practice and collaborate. We’ll use this journey to bring everyone closer to their personal goals for success at the event itself, and as empowerment for our collective vision for Mozilla leadership in 2016.
This year’s Mozfest is a thoughtfully designed, energized mega-opportunity for learning to lead by – leading. New to Mozfest this year is the addition of Pathways, best explained as a connection of sessions under one or more themes. The most exciting part is not only that we’ve curated three pathways for Participation, but that in many cases they intersect other spaces at the festival for magnified opportunity and outreach. Our three pathways are:
Scaling participatory learning experiences This pathway is for participants with enthusiasm for teaching and facilitating, who want to use that passion to create resources and programs that teach others through participation.
Leading and building community through participation This pathway is for participants who want to deepen their practice of leading or building community, or who want to help people who are doing that.
New technologies for participation — challenge This pathway is for people who want to take a technology lens through MozFest and build new participatory experiences with these technologies.
We’re also excited to be running many of these sessions, workshops, trainings and pop-up activities in our very own Participation Space. Think “learning, leading, making, and building” Participation all weekend – together in an atmosphere that intersects every other space in the building through pathways.
And best of all – all of this is a backdrop for some pretty amazing personal goals we’ll be working with each of our cohort leaders to design through 1:1 coaching. The pathways are only the starting point in designing a Mozfest experience that brings our cohort closer to their vision for success at this event and for 2016, personal goals they’ll be sharing out in Discourse leading up to the event in November. Phew!
You can follow activities for this and other Global events on Discourse, and by using the hashtags #Mozfest and #ParticipationSpace.
Image Credit: Paul Clarke
|
Christian Heilmann: All things open talk: The ES6 Conundrum (slides/screencast/links) |
I just delivered a talk on JavaScript and ES6 at All things Open in Raleigh, North Carolina.
This is just a quick post to give you all the content and links I talked about.
Here’s the slidedeck on Slideshare
And the screencast of the talk on YouTube
Links I mentioned:
|
Cameron Kaiser: Waiting for the MP3 to do the tune thing, MP3, MP3, sing!* |
(*apologies to TMBG, "Dinner Bell" from Apollo 18)
http://tenfourfox.blogspot.com/2015/10/waiting-for-mp3-to-do-tune-thing-mp3.html
|
The Servo Blog: This Week In Servo 38 |
Please welcome Alan Jeffrey, or ajeffrey on IRC, to the Servo team! He will be doubling the size of our “Chicago office.” He is currently looking into our performance on Dromaeo, which is a heavy stress test of our Rust<->JavaScript integration.
In the last week, we landed 82 PRs in the Servo organization’s repositories. Additionally, we passed 5,000 stars on https://github.com/servo/servo!
Among the notable changes, JS pointers were made dereferenceable.
At last week’s meeting, we discussed Mozlando meetings, an update to our official governance, and the building backlog on our review queue.
|
Christian Legnitto: I’m talking at Mozilla tomorrow! |
I’m excited to give a brown bag talk at Mozilla Mountain View tomorrow about how Facebook builds and ships software (with a Release Engineering / tooling bent).
Swing by if you are in the area or catch it on Air Mozilla! More details @ https://air.mozilla.org/release-engineering-at-facebook/.
http://christian.legnitto.com/blog/2015/10/19/im-talking-at-mozilla-tomorrow/
|
Air Mozilla: Genesis: Terraforming a New Firefox Crash Stats Infrastructure |
Firefox receives millions of crash reports a day (sorry!), and the app Mozilla uses to track and correlate all of this data needed a new...
https://air.mozilla.org/genesis-terraforming-a-new-firefox-crash-stats-infrastructure/
|