Air Mozilla: The Joy of Coding - Episode 56 |
mconley livehacks on real Firefox bugs while thinking aloud.
|
Air Mozilla: Weekly SUMO Community Meeting May 4, 2016 |
This is the SUMO weekly call. We meet as a community every Wednesday, 17:00 - 17:30 UTC. The etherpad is here: https://public.etherpad-mozilla.org/p/sumo-2016-05-04
https://air.mozilla.org/weekly-sumo-community-meeting-may-4-2016/
|
Robert Kaiser: Projects Done, Looking For New Ones |
http://home.kairo.at/blog/2016-05/projects_done_looking_for_new_ones
|
David Lawrence: Happy BMO Push Day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
https://dlawrence.wordpress.com/2016/05/04/happy-bmo-push-day-16/
|
Yunier José Sosa Vázquez: Sean White joins Mozilla as Vice President of Technology Strategy |
Sean White, founder and CEO of BrightSky Labs, joined Mozilla in the middle of last month as its new Vice President of Technology Strategy, as Chris Beard reports in a post published on the Mozilla blog. In this role, Sean will guide strategic projects at the organization and, as an initial focus, will work on new emerging technologies in the areas of Virtual and Augmented Reality (VR & AR) and Connected Devices.
Sean White is a high-tech executive, entrepreneur, inventor and musician who has spent his career leading the development of innovative experiences, systems and technologies that enable creative expression, connect us with one another, and improve our understanding of the world around us. He was most recently the founder and CEO of BrightSky Labs, and he currently teaches mixed and augmented reality at Stanford University.
White also holds a bachelor's degree and a master's degree in Computer Science, both from Stanford University, as well as a master's in Mechanical Engineering (Columbia University) and a PhD in Computer Science, the latter earned in 2009 at Columbia University.
Welcome to Mozilla, Sean White!
|
Wladimir Palant: Underestimated issue: Hashing passwords without salts |
My Easy Passwords extension is quickly climbing up in popularity, right now it already ranks 9th in my list of password generators (yay!). In other words, it already has 80 users (well, that was anticlimactic). Still, looking at this list I realized that I missed one threat scenario in my security analysis of these extensions, and that I probably rated UniquePasswordBuilder too high.
The problem is that somebody could get hold of a significant number of passwords, either because they are hosting a popular service or because a popular service leaked all their passwords. Now they don’t know of course which of the passwords have been generated with a password generator. However, they don’t need to: they just take a list of most popular passwords. Then they try using each password as a master password and derive a site-specific password for the service in question. Look it up in the database: has this password ever been used? If there is a match — great, now they know who is using a password generator and what their master password is.
This approach is easiest with password generators using a weak hashing algorithm like MD5 or SHA1, lots of hashes can be calculated quickly and within a month pretty much every password will be cracked. However, even with UniquePasswordBuilder that uses a comparably strong hash this approach allows saving lots of time. The attacker doesn’t need to bruteforce each password individually, they can rather crack all of them in parallel. Somebody is bound to use a weak master password, and they don’t even need to know in advance who that is.
How does one protect himself against this attack? Easy: the generated password shouldn’t depend on master password and website only, there should also be a user-specific salt parameter. This makes sure that, even if the attacker can guess the salt, they have to re-run the calculation for each single user — simply because the same master password will result in different generated passwords for different users. Luckily, UniquePasswordBuilder is the only extension where I gave a good security rating despite missing salts. Easy Passwords and CCTOO have user-defined salts, and hash0 even generates truly random salts.
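To illustrate the difference, here is a minimal sketch in Rust (it uses the standard library's non-cryptographic DefaultHasher purely to show the structure; a real password generator would use a strong, slow hash such as PBKDF2 or scrypt, and the site and salt values here are made up). The salted variant forces the attacker to redo all the work for every single user:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Unsalted: the same master password and site name always produce the same
// output, so one table precomputed from popular master passwords covers
// every user of the generator at once.
fn derive_unsalted(master: &str, site: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (master, site).hash(&mut h);
    h.finish()
}

// Salted: a per-user salt makes the output differ between users, so the
// attacker has to redo the whole computation for every single user.
fn derive_salted(master: &str, site: &str, user_salt: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (master, site, user_salt).hash(&mut h);
    h.finish()
}

fn main() {
    println!("{:016x}", derive_unsalted("hunter2", "example.com"));
    println!("{:016x}", derive_salted("hunter2", "example.com", "alice-3f9c"));
}
```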
https://palant.de/2016/05/04/underestimated-issue-hashing-passwords-without-salts
|
David Burns: GeckoDriver (Marionette) Release v0.7.1 |
I have just released a new version of Marionette, well, the executable that you need to download.
The main fix in this release is the ability to send over custom profiles that will be used. To be able to use the custom profile you will need to have the marionette:true capability and pass in a profile when you instantiate your FirefoxDriver.
We have also fixed a number of minor issues like IPv6 support and compiler warnings.
We have also moved the repository where our executable is developed to live under the Mozilla organization. This is now called GeckoDriver. We will be updating the naming of it in Selenium and the documentation over the next few weeks.
Since you are awesome early adopters it would be great if you could raise bugs.
I am not expecting everything to work but below is a quick list that I know doesn't work.
Switching of Frames needs to be done with either a WebElement or an index. Windows can only be switched by window handles.
If in doubt, raise bugs!
Thanks for being an early adopter and thanks for raising bugs as you find them!
http://www.theautomatedtester.co.uk/blog/2016/geckodriver-marionette-release-v0.7.1.html
|
Niko Matsakis: Non-lexical lifetimes based on liveness |
In my previous post I outlined several cases that we would like to improve with Rust’s current borrow checker. This post discusses one possible scheme for solving those. The heart of the post is two key ideas: first, define lifetimes as arbitrary sets of points in the control-flow graph; and second, replace the rule that a variable’s type must outlive the variable’s scope with a rule based on the variable’s live-range.
The rest of this post expounds on these two ideas and shows how they affect the various examples from the previous post.
To see better what these two ideas mean – and why we need both of them – let’s look at the initial example from my previous post. Here we are storing a reference to &mut data[..] into the variable slice:
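In rough outline, the example looks like this (a sketch: the body of the capitalize helper and the exact contents of data are assumptions made for illustration):

```rust
fn capitalize(data: &mut [char]) {
    // hypothetical helper: upper-case each character in place
    for c in data.iter_mut() {
        *c = c.to_ascii_uppercase();
    }
}

fn foo() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // the borrow of `data` starts at this `let`...
    capitalize(slice);
    data.push('d'); // ERROR under today's lexical borrow checker: `data` is still borrowed
    data.push('e'); // ERROR
    data.push('f'); // ERROR
}                   // ...and only ends here, at the closing `}`
```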
As shown, the lifetime of this reference today winds up being the subset of the block that starts at the let and stretches until the ending }. This results in compilation errors when we attempt to push to data. The reason is that a borrow like &mut data[..] effectively locks the data[..] for the lifetime of the borrow, meaning that data becomes off limits and can’t be used (this locking is just a metaphor for the type system rules; there is of course nothing happening at runtime).
What we would like is to observe that slice is dead – which is compiler-speak for “it won’t ever be used again” – after the call to capitalize. Therefore, if we had a more flexible lifetime system, we might compute the lifetime of the slice reference to something that ends right after the call to capitalize, like so:
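Sketched with comments marking where the shorter lifetime would end (reusing the hypothetical capitalize helper from the sketch above):

```rust
fn foo() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // borrow of `data` starts here...
    capitalize(slice);         // ...and `slice` is never used again,
                               //    so the borrow can end right after this call
    data.push('d'); // now OK
    data.push('e'); // OK
    data.push('f'); // OK
}
```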
If we had this shorter lifetime, then the calls to data.push would be legal, since the lock is effectively released early.
At first it might seem like all we have to do to achieve this result is to adjust the definition of what a lifetime can be to make it more flexible. In particular, today, once a lifetime must extend beyond the boundaries of a single statement (e.g., beyond the let statement here), it must extend all the way till the end of the enclosing block. So, by adopting a definition of lifetimes that is just a set of points in the control-flow graph, we lift this constraint, and we can now express the idea of a lifetime that starts at the &mut data[..] borrow and ends after the call to capitalize, which we couldn’t even express before.
But it turns out that is not quite enough. There is another rule in the type system today that causes us a problem. This rule states that the type of a variable must outlive the variable’s scope. In other words, if a variable contains a reference, that reference must be valid for the entire scope of the variable. So, in our example above, the reference created by the &mut data[..] borrow winds up being stored in the variable slice. This means that the lifetime of that reference must include the scope of slice – which stretches from the let until the closing }. In other words, even if we adopt more flexible lifetimes, if we change nothing else, we wind up with the same lifetime as before.
You might think we could just remove the rule altogether, and say that the lifetime of a reference must include all the points where the reference is used, with no special treatment for references stored into variables. In this particular example we’ve been looking at, that would do the right thing: the lifetime of slice would only have to outlive the call to capitalize. But it starts to go wrong if the control flow gets more complicated:
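A variation along these lines (again a sketch reusing the hypothetical capitalize helper) uses slice on every iteration of a loop:

```rust
fn foo() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..]; // borrow of `data` starts here
    loop {
        capitalize(slice);  // `slice` may be used again on the next iteration,
                            // via the loop backedge, so it is still live here
        data.push('d');     // this should still be an error
    }
}
```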
Here again the reference slice is still only required to live until after the call to capitalize, since that is the only place it is used. However, in this variation, that is not the correct behavior: the reference slice is in fact still live after the call to capitalize, since it will be used again in the next iteration of the loop. The problem here is that we are exiting the lifetime (after the call to capitalize) and then re-entering it (on the loop backedge) but without reinitializing slice.
One way to address this problem would be to modify the definition of a lifetime. The definition I gave earlier was very flexible and allowed any set of points in the control-flow to be included. Perhaps we want some special rules to ensure that control flow is continuous? This is the approach that RFC 396 took, for example. I initially explored this approach but found that it caused problems with more advanced cases, such as a variation on problem case 3 we will examine in a later post.
(EDITED: The paragraph above incorrectly suggested that RFC 396 had special rules around backedges. Edited to clarify.)
Instead, I have opted to weaken – but not entirely remove – the original rule. The original rule was something like this (expressed as an inference rule):
scope(x) = 's
T: 's
------------------
let x: T OK
In other words, it’s ok to declare a variable x with type T, as long as T outlives the scope 's of that variable. My new version is more like this:
live-range(x) = 's
T: 's
------------------
let x: T OK
Here I have substituted live-range for scope. By live-range I mean the set of points in the CFG where x may be later used, effectively. If we apply this to our two variations, we will see that, in the first example, the variable slice is dead after the call to capitalize: it will never be used again. But in the second variation, the one with a loop, slice is live, because it may be used in the next iteration. This accounts for the different behavior:
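Side by side, the two variations look roughly like this sketch (still assuming the capitalize helper), with comments marking where each live-range ends:

```rust
// Variation 1: `slice` is dead after the call to capitalize.
fn variation1() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..];
    capitalize(slice);   // last use of `slice`: its live-range ends here
    data.push('d');      // OK: the borrow of `data` is no longer live
}

// Variation 2: `slice` is still live after the call to capitalize,
// because the loop backedge may lead to another use.
fn variation2() {
    let mut data = vec!['a', 'b', 'c'];
    let slice = &mut data[..];
    loop {
        capitalize(slice);  // `slice` may be used again on the next iteration
        data.push('d');     // still an error: the borrow is live here
    }
}
```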
One problem with the analysis as I presented it thus far is that it is based on liveness of individual variables. This implies that we lose precision when references are moved into structs or tuples. So, for example, while this bit of code will type-check:
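Something along these lines (a sketch: the exact listing and data are assumed), where each reference lives in its own variable and therefore gets its own live-range; the comments describe how the proposed liveness-based analysis would treat it:

```rust
fn separate_variables() {
    let mut data1 = vec![1, 2, 3];
    let mut data2 = vec![4, 5, 6];
    let r1 = &mut data1;
    let r2 = &mut data2;
    r1.push(0);     // last use of `r1`: its live-range ends here
    data1.push(7);  // OK: the borrow of `data1` is no longer live
    r2.push(0);     // `r2` has its own, independent live-range
    data2.push(8);  // OK as well
}
```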
It would cause errors if we move those two references into a tuple:
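For instance, in a sketch like the following, the variable tuple has a single live-range covering both stored borrows:

```rust
fn in_a_tuple() {
    let mut data1 = vec![1, 2, 3];
    let mut data2 = vec![4, 5, 6];
    let tuple = (&mut data1, &mut data2);
    tuple.0.push(0);
    data1.push(7);    // ERROR: `tuple` is live until its last use below,
                      // so the borrow of `data1` stored in `tuple.0` is still live
    tuple.1.push(0);  // last use of `tuple`
    data2.push(8);    // fine: `tuple` is dead by this point
}
```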
This is because the variable tuple is live until after the last field access. However, the dynamic drop analysis is already computing a set of fragments, which are basically minimal paths that it needs to retain full resolution around which subparts of a struct or tuple have been moved. We could probably use similar logic to determine that we ought to compute the liveness of tuple.0 and tuple.1 independently, which would make this example type-check. (If we did so, then any use of tuple would be considered a gen of both tuple.0 and tuple.1, and any write to tuple would be considered a kill of both.) This would probably subsume and be compatible with the fragment logic used for dynamic drop, so it could be a net simplification.
One further wrinkle that I did not discuss is that any struct with a destructor encounters special rules. This is because the destructor may access the references in the struct. These rules were specified in RFC 1238 but are colloquially called dropck. They basically state that when we create some variable x whose type T has a destructor, then T must outlive the parent scope of x. That is, the references in x don’t have to just be valid for the scope of x, they have to be valid for longer than the scope of x.
In some sense, the dropck rules remain unchanged by all I’ve discussed here. But in another sense dropck may stop being a special case. The reason is that, in MIR, all drops are made explicit in the control-flow graph, and hence if a variable x has a destructor, that should show up as just another use of x, and thus cause the lifetime of any references within to be naturally extended to cover that destructor. I admit I haven’t had time to dig into a lot of examples here: destructors are historically a very subtle case.
Those of you familiar with the compiler will realize that there is a bit of a chicken-and-egg problem with what I have presented here. Today, the compiler computes the lifetimes of all references in the typeck pass, which is basically the main type-checking pass that computes the types of all expressions. We then use the output of this pass to construct MIR. But in this proposal I am defining lifetimes as a set of points in the MIR control-flow graph. What gives?
To make this work, we have to change how the compiler works internally. The rough idea is that the typeck pass will no longer concern itself with regions: it will erase all regions, just as trans does. This has a number of ancillary benefits, though it also carries a few complications we have to resolve (maybe a good topic for another blog post!). We’ll then build MIR from this, and hence the initially constructed MIR will also have no lifetime information (just erased lifetimes).
Then, looking at each function in the program in turn, we’ll do a safety analysis. We’ll start by computing lifetimes – at this point, we have the MIR CFG in hand, so we can easily base them on the CFG. We’ll then run the borrowck. When we are done, we can just forget about the lifetimes entirely, since all later passes are just doing optimization and code generation, and they don’t care about lifetimes.
Another interesting question is how to represent lifetimes in the compiler. The most obvious representation is just to use a bit-set, but since these lifetimes would require one bit for every statement within a function, they could grow quite big. There are a number of ways we could optimize the representation: for example, we could track the mutual dominator, even promoting it upwards to the innermost enclosing loop, and only store bits for that subportion of the graph. This would require fewer bits but it’d be a lot more accounting. I’m sure there are other far more clever options as well. The first step I think would be to gather some statistics about the size of functions, the number of inference variables per fn, and so forth.
In any case, a key observation is that, since we only need to store lifetimes for one function at a time, and only until the end of borrowck, the precise size is not nearly as important as it would be today.
Here I presented the key ideas of my current thoughts around non-lexical lifetimes: using flexible lifetimes coupled with liveness. I motivated this by examining problem case #1 from my introduction. I also covered some of the implementation complications. In future posts, I plan to examine problem cases #2 and #3 – and in particular to describe how to extend the system to cover named lifetime parameters, which I’ve completely ignored here. (Spoiler alert: problem cases #2 and #3 are also no longer problems under this system.)
I also do want to emphasize that this plan is a work-in-progress. Part of my hope in posting it is that people will point out flaws or opportunities for improvement. So I wouldn’t be surprised if the final system we wind up with winds up looking quite different.
(As is my wont lately, I am disabling comments on this post. If you’d like to discuss the ideas in here, please do so in this internals thread instead.)
http://smallcultfollowing.com/babysteps/blog/2016/05/04/non-lexical-lifetimes-based-on-liveness/
|
Daniel Stenberg: A book status update |
— How’s Daniel’s curl book going?
I can hear absolutely nobody asking. I’ll just go ahead and tell you anyway since I had a plan to get a first version “done” by “the summer” (of 2016). I’m not sure I believe in that time frame anymore.
I’m now north of 40,000 words with a bunch of new chapters and sections added recently and I’m now generating an index that looks okay. The PDF version is exactly 200 pages now.
The index part is mainly interesting since the platform I use to write the book on, gitbook.com, doesn’t offer any index functionality of its own, so I had to hack one up and add it. That’s just one additional beauty of having the book made entirely in markdown.
Based on what I’ve written so far and know I still have outstanding, I am about 70% done, indicating there are about 17,000 words left for me, at this particular point in time. The word count tends to grow over time: the more I write (while the completion level stays sort of stuck), the more I think of new sections that I should add and haven’t yet written…
On this page you can get the latest book stats, right off the git repo.
https://daniel.haxx.se/blog/2016/05/04/a-book-status-update/
|
Daniel Stenberg: No more heartbleeds please |
As a reaction to the whole Heartbleed thing two years ago, The Linux Foundation started its Core Infrastructure Initiative (CII for short) with the intention of helping to track down well-used but still poorly maintained projects, or at least detecting which projects might need help: where the next Heartbleed might occur.
A bunch of companies putting in money to improve projects that need help. Sounds almost like a fairy tale to me!
In order to identify which projects to help, they run their Census Project: “The Census represents CII’s current view of the open source ecosystem and which projects are at risk.”
The Census automatically extracts a lot of different meta data about open source projects in order to deduce a “Risk Index” for each project. Once you’ve assembled such a great data trove for a busload of projects, you can sort them all based on that risk index number and then you basically end up with a list of projects in a priority order that you can go through and throw code at. Or however they deem the help should be offered.
The old blog post How you know your Free or Open Source Software Project is doomed to FAIL provides such a way, but it isn’t that easy to follow programmatically. The foundation has its own 88 page white paper detailing its methods and algorithm.
All combined, this grades projects’ “risk” between 0 and 15.
Assuming that a larger number of CVEs means anything bad is just wrong. Even the most careful and active projects can potentially have large amounts of CVEs. It means they disclose what they find and that people are actually reviewing code, finding problems and are reporting problems. All good things.
Sure, security problems are not good but the absence of CVEs in a project doesn’t say that the project is one bit more secure. It could just mean that nobody ever looked closely enough or that the project doesn’t deal with responsible disclosure of the problems.
When I look through the projects they have right now, I get the feeling the resolution (0-15) is too low and that they’ve shied away from more aggressively handing out penalties based on factors we all recognize in abandoned/dead projects (some of which are decently specified in Tom Calloway’s blog post mentioned above).
The result being that the projects get a score that is mostly based on what kind of project it is.
But this said, they have several improvements to their algorithm already suggested in their issue tracker. I firmly believe this will improve over time.
The top three projects, the only ones that score 13 right now, are expat, procmail and unzip. All of them are really small projects (source code wise) that have been around for a very long time.
curl, being the project I of course look out for, scores a 9: many CVEs (3), written in C (2), network exposure (2), 5+ apps depend on it (2). Seriously, based on these factors, how would you say the project is situated?
In the sorted list with a little over 400 projects, curl is rated #73 (at the time of this writing at least). Just after reportbug but before libattr1. [curl summary – which is mentioning a very old curl release]
But the list of projects mysteriously lacks many projects. For example, I couldn’t find either c-ares or libssh2. They may not be super big, but they’re used by a bunch of smaller and bigger projects at least, including curl itself.
The full list of projects, their meta-data and scores are hosted in their repository on github.
I can see how projects in my own backyard have gotten some good out of this effort.
I’ve received some really great bug reports and gotten handed security problems in curl by an individual who did his digging funded by this project.
I’ve seen how the foundation sponsored a test suite for c-ares since the project lacked one. Now it doesn’t anymore!
In addition to that, the Linux Foundation has also just launched the CII Best Practices Badge Program, to allow open source projects to fill in a bunch of questions and if meeting enough requirements, they will get a “badge” to boast to the world as a “well run project” that meets current open source project best practices.
I’ve joined their mailing list and provided some of my thoughts on the current set of questions, as I consider a few of them to be, well, let’s call them “less than optimal”. But then again, which project doesn’t have bugs? We can fix them!
curl is just now marked as “100% compliance” with all the best practices listed. I hope to be able to keep it like that even with future and more best practices added.
https://daniel.haxx.se/blog/2016/05/04/no-more-heartbleeds-please/
|
Allen Wirfs-Brock: How to Invent the Future |
Alan Kay famously said “The best way to predict the future is to invent it.” But how do we go about inventing a future that isn’t a simple linear extrapolation of the present?
Kay and his colleagues at Xerox PARC did exactly that over the course of the 1970s and early 1980s. They invented and prototyped the key concepts of the Personal Computing Era. Concepts that were then realized in commercial products over the subsequent two decades.
So, how was PARC so successful at “inventing the future”? Can that success be duplicated or perhaps applied at a smaller scale? I think it can. To see how, I decided to try to sketch out what happened at Xerox PARC as a pattern language.
Look Twenty Years Into the Future
If your time horizon is short you are doing product development or incremental research. That’s all right; it’s probably what most of us should be doing. But if you want to intentionally “invent the future” you need to choose a future sufficiently distant to allow time for your inventions to actually have an impact.
Extrapolate Technologies
What technologies will be available to us in twenty years? Start with the current and emerging technologies that already exist today. Which relevant technologies are likely to see exponential improvement over the next twenty years? What will they be like as they mature over that period? Assume that as the technical foundation for your future.
Focus on People
Think about how those technologies may affect people. What new human activities do they enable? Is there a human problem they may help solve? What role might those technologies have in everyday life? What could be the impact upon society as a whole?
Create a Vision
Based upon your technology and social extrapolations, create a clearly articulated vision of your desired future. It should be radically different from the present in some respects. If it isn’t, then invention won’t be required to achieve it.
A Team of Dreamers and Doers
Inventing a future requires a team with a mixture of skills. You need dreamers who are able to use their imagination to create and refine the core future vision. You also need doers who are able to take ill-defined dreams and turn them into realities using available technologies. You must have both and they must work closely together.
Prototype the Vision
Try to create a high fidelity functional approximation of your vision of the future. Use the best of today’s technology as stand-ins for your technology extrapolations. Remember that what is expensive and bulky today may be cheap and tiny in your future. If the exact technical combination you need doesn’t exist today, build it.
Live Within the Prototype
It’s not enough to just build a prototype of your envisioned future. You have to use the prototype as the means for experiencing that future. What works? What doesn’t? Use your experience with the prototype to iteratively refine the vision and the prototypes.
Make It Useful to You
You’re a person who hopes to live in this future, so prototype things that will be useful to you. You will know you are on to something when your prototype becomes an indispensable part of your life. If it isn’t there yet, keep iterating until it is.
Amaze the World
If you are successful in applying these patterns you will invent things that are truly amazing. Show those inventions to the world. Demonstrate that your vision of the future is both compelling and achievable. Inspire other people to work towards that same future. Build products and businesses if that is your inclination, but remember that inventing the future takes more than a single organization or project. The ultimate measure of your success will be your actual impact on the future.
|
Armen Zambrano: Replay Pulse messages |
http://feedproxy.google.com/~r/armenzg_mozilla/~3/uDh_F950spU/replay-pulse-messages.html
|
Air Mozilla: Cloud Services QA Team Sync, 03 May 2016 |
Weekly sync-up, volunteer, round-robin style, on what folks are working on, having challenges with, etc.
https://air.mozilla.org/cloud-services-qa-team-sync-20160503/
|
Air Mozilla: Connected Devices Weekly Program Update, 03 May 2016 |
Weekly project updates from the Mozilla Connected Devices team.
https://air.mozilla.org/connected-devices-weekly-program-update-20160503/
|
Air Mozilla: Webdev Extravaganza: May 2016 |
Once a month web developers from across Mozilla get together to share news about the things we've shipped, news about open source libraries we maintain...
|
Chris H-C: Mailing-List Mush: End of Life for Firefox on OSX 10.6-8, ICU dropping Windows XP Support |
Apparently I’m now Windows XP Firefox Blogging Guy. Ah well, everyone’s gotta have a hobby.
The Firefox Future Releases Blog announced the end of support for Mac OSX 10.6-10.8 for Firefox. This might be our first look at how Windows XP’s end of life might be handled. I like the use of language:
All three of these versions are no longer supported by Apple. Mozilla strongly encourages our users to upgrade to a version of OS X currently supported by Apple. Unsupported operating systems receive no security updates, have known exploits, and are dangerous for you to use.
You could apply that just as easily and even more acutely to Windows XP.
But, then, why isn’t Mozilla ending support for XP in a similar announcement? Essentially it is because Windows XP is still too popular amongst Firefox users. The Windows XP Firefox population still outnumbers the Mac OSX (all versions) and Linux populations combined.
My best guess is that we’ll be able to place the remaining Windows XP Firefox users on ESR 52 which should keep the last stragglers supported into 2018. That is, if the numbers don’t suddenly decrease enough that we’re able to drop support completely before then, shuffling the users onto ESR 45 instead.
What’s nice is the positive-sounding emails at the end of the thread announcing the gains in testing infrastructure and the near-term removal of code that supported now-unused build configurations. The cost of supporting these platforms is non-zero, and gains can be realized immediately after dropping support.
A key internationalization library in use by Firefox, ICU, is looking to drop Windows XP support in their next version. The long-form discussion is on dev-platform (you might want to skim the unfortunate acrimony over Firefox for Android (Fennec) present in that thread) but it boils down to: do we continue shipping old software to support Windows XP? For how long? Is this the straw that will finally break WinXP support’s back?
:milan made an excellent point on how the Windows XP support decision is likely to be made:
Dropping the XP support is *completely* not an engineering decision. It isn’t even a community decision. It is completely, 100% MoCo driven Firefox product management decision, as long as the numbers of users are where they are.
On the plus side, ICU seems to be amenable to keep Windows XP support for a little longer if we need it… but we really ought to have a firm end-of-life date for the platform if we’re to make that argument in a compelling fashion. At present we don’t have (or at least haven’t communicated) such a date. ICU may just march on without us if we don’t decide on one.
For now I will just keep an eye on the numbers. Expect a post when the Windows XP numbers finally dip below the Linux+OSX as that will be a huge psychological barrier broken.
But don’t expect that post for a few months, at least.
:chutten
|
Yunier José Sosa Vázquez: Firefox integrates GTK3 on Linux and improves the security of the JavaScript (JIT) compiler |
On April 26, Mozilla released a new version of Firefox that features GTK3 integration on Linux, security improvements in the just-in-time JS compiler, changes to WebRTC, and new functionality for Android and iOS. As @Pochy announced yesterday, you can get this Firefox update from our Downloads area.
After several months of testing and development, GTK3 has finally been included in the Linux version. This will reduce the dependency on old versions of the X11 server, improve HiDPI support and, above all, provide better integration with themes.
The new browser also improves the security of SpiderMonkey's Just-in-Time (JIT) JavaScript compiler. The idea is to rein in RWX (read-write-execute) code, which at times poses a risk. It essentially represents an exception to the operating system's rules, in particular the storage of data in a memory area where it can be executed (and read) but not written.
To remedy this problem, Mozilla has adopted a mechanism called W^X. Its purpose is to forbid JavaScript from writing to memory areas that contain JIT code. This change comes at the cost of a slight drop in performance, which the vendor estimates at between 1 and 4%. In addition, the authors of some extensions are invited to test the compatibility of their code with this mechanism.
The performance and reliability of WebRTC connections have also been improved, and the content decryption module can now be used for H.264 and AAC content where possible. Meanwhile, developers get new tools that they can now use in their work; you can learn more in this article published on the Labs blog.
If you prefer to see the full list of changes, you can go to the release notes (in English).
A clarification about the mobile version.
In the downloads you can find three versions for Android. The file containing i386 is for devices with Intel architecture. Of the ones named arm, the one that says api11 works with Honeycomb (3.0) or higher, and the api9 one is for Gingerbread (2.3).
You can get this version from our Downloads area in Spanish for Linux, Mac, Windows and Android. If you liked it, please share this news with your friends on social networks. Don't hesitate to leave us a comment ;-).
|
Chris Cooper: RelEng & RelOps Weekly highlights - May 2, 2016 |
Modernize infrastructure:
Kendall and Greg have deployed new hg web nodes! They’re bigger, better, faster! The four new nodes have more processing power than the old ten nodes combined. In addition, all of the web and ssh nodes have been upgraded to CentOS 7, giving us a modern operating system and better security.
Relops and jmaher certified Windows 7 in the cloud for 40% of tests. We’re now prepping to move those tests. The rest should follow soon. From a capacity standpoint, moving any Windows testing volume into the cloud is huge.
Mark deployed new versions of hg and git to the Windows testing infrastructure.
Rob’s new mechanisms for building TaskCluster Windows workers give us transparency on what goes into a builder (single page manifests) and have now been used to successfully build Firefox under mozharness for TaskCluster with an up-to-date toolchain (mozilla-build 2.2, hg 3.7.3, python 2.7.11, vs2015 on win 2012) in ec2.
Improve Release Pipeline:
Firefox 46.0 Release Candidates (RCs) were all done with our new Release Promotion process. All that work in the beta cycle for 46.0 paid off.
Varun began work on improving Balrog’s backend to make multifile responses (such as GMP) easier to understand and configure. Historically it has been hard for releng to enlist much help from the community due to the access restrictions inherent in our systems. Kudos to Ben for finding suitable community projects in the Balrog space, and then more importantly, finding the time to mentor Varun and others through the work.
Improve CI Pipeline:
Aki’s async code has landed in taskcluster-client.py! Version 0.3.0 is now on pypi, allowing us to async all the python TaskCluster things.
Nick’s finished up his work to enable running localized (l10n) desktop builds on Try. We’ve wanted to be able to verify changes against l10n builds for a long time…this particular bug is 3 years old. There are instructions in the wiki: https://wiki.mozilla.org/ReleaseEngineering/TryServer#Desktop_l10n_jobs
With build promotion well sorted for the Firefox 46 release, releng is switching gears and jumping into the TaskCluster migration with both feet this month. Kim and Mihai will be working full-time on migration efforts, and many others within releng have smaller roles. There is still a lot of work to do just to migrate all existing Linux workloads into TaskCluster, and that will be our focus for the next 3 months.
Operational:
Vlad and Amy landed patches to decommission the old b2g bumper service and its infrastructure.
Alin created a dedicated server to run buildduty tools. This is part of an ongoing effort to separate services and tools that had previously been piggybacking on other hosts.
Amy and Jake beefed up our AWS puppetmasters and tweaked some time out values to handle the additional load of switching to puppet aspects. This will ensure that our servers stay up to date and in sync.
What’s better than handing stuff off? Turning stuff off. Hal started disabling no-longer-needed vcs-sync jobs.
Release:
Shipped Firefox 46.0RC1 and RC2, Fennec 46.0b12, Firefox and Fennec 46.0, ESR 45.1.0 and 38.8.0, Firefox and Fennec 47.0beta1, and Thunderbird 45.0b1. The week before, we shipped Firefox and Fennec 45.0.2 and 46.0b10, Firefox 45.0.2esr and Thunderbird 45.0.
For further details, check out the release notes here:
See you next week!
|
Hannes Verschore: Tracelogger gui updates |
Tracelogger is one of the tools JIT devs (especially me) use to look into performance issues and to improve the JS engine of Firefox performance-wise. It traces which functions are executing, together with extra information like which engine is running, how long compilation took, how many times we are GC’ing, and whether we are calling VM functions…
I made the GUI a bit more powerful. First of all, I moved the computation of the overview to a web worker. This should help the usability of the tool. Next to that, I made it possible to toggle the categories on and off. That might make it easier to understand the graphs. I also introduced a settings popup. In the settings popup you can now choose to see absolute (CPU ticks) or relative (%) timings.
There are still a lot of improvements that are possible. Eventually it should be possible to zoom on graphs, toggle scripts on/off, see full times of scripts (instead of self time only) and maybe make it possible to show another graph (like a flame chart). Hopefully one day.
This is of course open source and available at:
https://github.com/h4writer/tracelogger/tree/master/website
|
Mozilla Open Policy & Advocacy Blog: This is what a rightsholder looks like in 2016 |
In today’s policy discussions around intellectual property, the term ‘rightsholder’ is often misconstrued as someone who supports maximalist protection and enforcement of intellectual property, instead of someone who simply holds the rights to intellectual property. This false assumption can at times create a kind of myopia, in which the breadth and variety of actors, interests, and viewpoints in the internet ecosystem – all of whom are rightsholders to one degree or another – are lost.
This is not merely a process issue – it undermines constructive dialogues aimed at achieving a balanced policy. Copyright law is, ostensibly, designed and intended to advance a range of beneficial goals, such as promoting the arts, growing the economy, and making progress in scientific endeavour. But maximalist protection policies and draconian enforcement benefit the few and not the many, hindering rather than helping these policy goals. For copyright law to enhance creativity, innovation, and competition, and ultimately to benefit the public good, we must all recognise the plurality and complexity of actors in the digital ecosystem, who can be at once IP rightsholders, creators, and consumers.
Mozilla is an example of this complex rightsholder stakeholder. As a technology company, a non-profit foundation, and a global community, we hold copyrights, trademarks, and other exclusive rights. Yet, in the pursuit of our mission, we’ve also championed open licenses to share our works with others. Through this, we see an opportunity to harness intellectual property to promote openness, competition and participation in the internet economy.
We are a rightsholder, but we are far from maximalists. Much of the code produced by Mozilla, including much of Firefox, is licensed using a free and open source software licence called the Mozilla Public License (MPL), developed and maintained by the Mozilla Foundation. We developed the MPL to strike a real balance between the interests of proprietary and open source developers in an effort to promote innovation, creativity and economic growth to benefit the public good.
Similarly, in recognition of the challenges the patent system raises for open source software development, we’re pioneering an innovative approach to patent licensing with our Mozilla Open Software Patent License (MOSPL). Today, the patent system can be used to hinder innovation by other creators. Our solution is to create patents that expressly permit everyone to innovate openly. You can read more in our terms of license here.
While these are just two initiatives from Mozilla amongst many more in the open source community, we need more innovative ideas in order to fully harness intellectual property rights to foster innovation, creation and competition. And we need policy makers to be open (pun intended) to such ideas, and to understand the place they have in the intellectual property ecosystem.
More than just our world of software development, the concept of a rightsholder is in reality broad and nuanced. In practice, we’re all rightsholders – we become rightsholders by creating for ourselves, whether we’re writing, singing, playing, drawing, or coding. And as rightsholders, we all have a stake in this rich and diverse ecosystem, and in the future of intellectual property law and policy that shapes it.
Here is some of our most recent work on IP reform:
https://blog.mozilla.org/netpolicy/2016/05/02/this-is-what-a-rightsholder-looks-like-in-2016/
|