Mozilla Release Management Team: Firefox 36.0.2 to 36.0.4 |
Last Friday and Saturday, we released two minor releases to fix the issues found during the Pwn2Own hacking contest.
Extension | Occurrences |
txt | 2 |
py | 2 |
sh | 1 |
json | 1 |
h | 1 |
cpp | 1 |
Module | Occurrences |
testing | 3 |
docshell | 2 |
mobile | 1 |
config | 1 |
browser | 1 |
List of changesets:
Steve Fink | Bug 1145255. r=luke, a=lmandel - 2b1ecc5fad12 |
Jordan Lund | Bug 1113460 - Bump mozharness.json to revision 75c435ef19ca. a=test-only - 3d681d747053 |
Jordan Lund | Bug 1142743 - Increase chunks for Android 2.3 mochitest-gl, in-tree cfg total chunk fix. r=kmoir, a=test-only - 7d23a45099ee |
Geoff Brown | Bug 1142552 - Update robocop chunking to fix rc10. r=ahal, a=test-only - e2ba5973e4bb |
Olli Pettay | Bug 1144988 - Don't let other pages to load while doing scroll-to-anchor. r=bz, a=lmandel - d5a003cc284a |
Kyle Huey | Bug 1145870. r=bz a=lmandel - 57cc76236bd7 |
http://release.mozilla.org/statistics/36/2015/03/25/fx-36.0.2-to-36.0.4.html
|
Mozilla Release Management Team: Firefox 37 beta6 to beta7 |
In this beta release, we continued to land some patches for MSE. We disabled MSE under Mac OS X for now.
We also took the fixes from the Pwn2Own hacking contest in this release.
Extension | Occurrences |
cpp | 54 |
h | 39 |
cc | 3 |
py | 2 |
js | 2 |
html | 2 |
json | 1 |
jsm | 1 |
ipdlh | 1 |
ipdl | 1 |
inc | 1 |
c | 1 |
build | 1 |
Module | Occurrences |
dom | 26 |
gfx | 18 |
layout | 15 |
media | 14 |
widget | 9 |
toolkit | 5 |
image | 5 |
ipc | 4 |
testing | 3 |
security | 2 |
js | 2 |
docshell | 2 |
xpfe | 1 |
modules | 1 |
caps | 1 |
browser | 1 |
List of changesets:
Matt Woodrow | Bug 1131638 - Discard video frames that fail to sync. r=cpearce, a=lmandel - 297e2e626fe9 |
Matt Woodrow | Bug 1131638 - Record invalid frames as dropped for video playback stats. r=ajones, a=lmandel - f88fcb8ccc27 |
Matt Woodrow | Bug 1131638 - Disable hardware decoding if too many frames are invalid. r=cpearce, a=lmandel - edb24ca59d13 |
Mike Hommey | Backout the part of changeset 8044e5199fe2 (Bug 1080319) that removed -remote. a=sledru - 29eac8276b62 |
Matt Woodrow | Bug 1139503 - Backlist ATI driver version for DXVA on windows 10 since it's causing crashes. r=cpearce, a=sledru - 5a8085d3a0fe |
Bill McCloskey | Back out Bug 1103036 to resolve shutdown hangs a=backout - 2cc99febbda0 |
Ryan VanderMeulen | No bug - Bump mozharness.json to revision fde96e1730cc. a=NPOTB - d16fe93d2755 |
Jordan Lund | Bug 1142743 - Increase chunks for Android 2.3 mochitest-gl, in-tree cfg total chunk fix. r=kmoir, a=test-only - 2d55d8220616 |
Geoff Brown | Bug 1142552 - Update robocop chunking to fix rc10. r=ahal, a=test-only - 0e0204877015 |
Ralph Giles | Bug 1141349 - Pref off MSE on Mac. r=ajones, a=lmandel - c8f377118985 |
Jan Varga | Bug 1067568 - Fix intermittent "ASSERTION: We don't know anyting about this file handle?!: 'Error', file dom/filehandle/FileService.cpp, line 234". r=bent, a=lsblakk - 199e30cb18f3 |
Margaret Leibovic | Bug 1141550 - Register an AsyncShutdown blocker to persist download changes. r=paolo, a=lsblakk - 3eeb35bbafd2 |
Jean-Yves Avenard | Bug 1139271 - Part 1: Add logging when encountering invalid atoms. r=k17e, a=lsblakk - 202177831c59 |
Jean-Yves Avenard | Bug 1139271 - Part 2: Ignore partial moof. r=k17e, a=lsblakk - 21384861c447 |
Jean-Yves Avenard | Bug 1139271 - Part 3: Only consider a Box to be available if entire content is available. r=k17e, a=lsblakk - f4c0cec35772 |
Paul Adenot | Bug 1141781 - Grip the VideoFrameContainer when queing a call to invalidate in the MediaStreamGraph. r=roc, a=lsblakk - 6a4e68222995 |
Matthew Gregan | Bug 1142746 - Make unexpected SL_PLAYEVENT_HEADATMARKER notification non-fatal. r=brsun, a=lsblakk - 067f83e99f66 |
Ryan VanderMeulen | Backed out changeset 6a4e68222995 (Bug 1141781) for mochitest crashes/asserts. - 6a9120be7216 |
Ethan Hugg | Bug 1144157 - Add ciscospark.com to the screenshare default whitelist r=jesup a=lmandel - bd028b4c3b95 |
Simon Montagu | Bug 1114239 patch 1: Backout Bug 1105137, r=jfkthame, a=lmandel - 0837b7d1188b |
Simon Montagu | Bug 1114239 patch 2: Backout Bug 1079139, r=jfkthame, a=lmandel - 8fca3694654a |
Simon Montagu | Bug 1114239 patch 3: Backout Bug 1062963 patch 3, r=jfkthame, a=lmandel - 470cd8c739c5 |
Olli Pettay | backout Bug 1121406 which enabled WebSocket in Workers in beta, a=abillings - f0a0d5d2d525 |
David Major | Bug 1138794: Use an alternate crash report server on Windows XP SP2. r=ted a=lmandel - caf324dbb13f |
Matthew Gregan | Bug 1124542 - WebrtcGmpVideoDecoder shouldn't crash when GMP completion callbacks are received. r=rjesup, a=lmandel - c54687cb7086 |
Ethan Hugg | Bug 1125047 - GMP should catch decoder failures. r=jesup, a=lmandel - 5598a289b442 |
Chris Pearce | Bug 1140797 - Make gmp-clearkey buildable outside of mozilla-central. r=edwin, a=lmandel - a49b40d229df |
Chris Pearce | Bug 1140797 - Prevent fatal assert when doing base64 decode in gmp-clearkey. r=edwin, a=lmandel - 29333933d6d6 |
Jordan Lund | Bug 1113460 - Bump mozharness.json to revision 75c435ef19ca. a=test-only - 938177ece421 |
Cameron McCormack | Bug 1143953 - Fix typo in test_font_loading_api.html where it incorrectly returns document.fonts.read. r=jdaggett, a=test-only - 37be317efc7a |
Cameron McCormack | Bug 1143995 - Remove unnecessary layout flushes from test_font_loading_api.html. r=jdaggett, a=test-only - e84f65c3a6aa |
Cameron McCormack | Bug 1144507 - Fix incorrect Promise usage in test_font_loading_api.html. r=jdaggett, a=test-only - 9fc579f7bf3a |
Tim Taubert | Bug 1124409 - Fix intermittent browser_bug1015721.js failures by ensuring the EventStateManager has a document before trying to dispatch ZoomChangeUsingMouseWheel. r=smaug, a=test-only - 46cfbcfb58c5 |
Tim Taubert | Bug 1124409 - Fix test_bug659071.html to properly reset page zoom before finishing. r=smaug, a=test-only - e4f1cc6f63a3 |
Nicolas B. Pierron | Bug 1137624 - Disable Array.join optimization. r=jandem, a=abillings - 968fa2b32612 |
Aaron Klotz | Bug 1141081 - Ensure nsPluginInstanceOwner::Destroy is called before returning from failed plugin instantiation. r=jimm, a=lmandel - 2710769c40a5 |
Aaron Klotz | Bug 1128064 - Check for null mContent in nsPluginInstanceOwner::GetDocument. r=jimm, a=abillings - e92558fa59eb |
Byron Campen [:bwc] | Bug 1141749 - Prevent collisions in local SSRCs. r=mt, a=abillings - d76c709556bb |
Nicolas Silva | Bug 1125848 - Reduce the likelyhood of a CompositorParent being destroyed without the proper shutdown sequence. r=sotaro a=lmandel - 45897d27ef82 |
Avi Halachmi | Bug 1142079 - Disable refresh driver telemetry on Android. r=froydnj, a=lmandel - 17adc07baf56 |
Matt Woodrow | Bug 1138967 - Part 1: Remove ISharedImage. r=nical, a=lmandel - c1356c27fa1b |
Matt Woodrow | Bug 1138967 - Part 2: Create IMFYCbCrImage so that image data copying happens off the decoder thread. r=nical, r=cpearce, a=lmandel - 07e266d45703 |
Matt Woodrow | Bug 1138967 - Part 3: Add D3D11 YCbCr texture clients and upload on the client side. r=nical, a=lmandel - 0c23dcbc6bf7 |
Masatoshi Kimura | Bug 1133187 - Update fallback whitelist. r=keeler, a=lmandel - 02b9c74353ad |
Seth Fowler | Bug 1142849 - Upliftable fix for imgRequest TSan violations. r=tn, a=lmandel - 9b7aa96d0e11 |
Karsten Düsterloh | Bug 1116952 - Treelines fragments after Bug 1105104. r=jwatt, a=lmandel - 5bd29483f85e |
Jeff Muizelaar | Bug 1130978 - Fix VisitEdges. r=kats, a=lmandel - fb9ae74a783a |
Seth Fowler | Bug 1137058 - Increment RasterImage::mLockCount to ensure that non-discardable images don't eventually become unlocked. r=tn, a=lmandel - 52b55d9c1d61 |
Matt Woodrow | Bug 1145029 - Disable DXVA for 4k videos on AMD hardware since it performs poorly. r=jya a=lmandel - 2445fcfe99d4 |
Steve Fink | Bug 1145255. r=luke, a=lmandel - aabde7671ac0 |
Jed Davis | Bug 1111079 - Backport some IPC message/channel fixes. r=bent, a=lmandel - 5bb1bb65cc28 |
Jed Davis | Bug 1111065 - Backport some upstream IPC serialization fixes. r=bent, a=lmandel - a2295cc0de06 |
Boris Zbarsky | Bug 1144991 - Be a bit more restrictive about when a URI_IS_UI_RESOURCE source is allowed to link to a URI_IS_UI_RESOURCE URI that doesn't have the same scheme. r=bholley, a=lmandel - 2e6977da201e |
Olli Pettay | Bug 1144988 - Don't let other pages to load while doing scroll-to-anchor. r=bz, a=lmandel - 9b93e6033d5d |
http://release.mozilla.org/statistics/37/2015/03/25/fx-37-b6-to-b7.html
|
Mozilla Open Policy & Advocacy Blog: Information sharing debates continuing in problematic directions |
Recently, the U.S. Senate Select Committee on Intelligence held a closed-door hearing to markup the Cybersecurity Information Sharing Act (CISA). Mozilla has previously opposed CISA and its predecessor CISPA, and these changes do not alleviate our concerns. Simultaneously, in neighboring Canada, an aggressive counterterrorism bill would introduce similarly problematic surveillance provisions, among other harms.
But first, CISA. While the newly marked up version includes some improvements over the discussion draft circulated earlier this year, the substantive dangers remain. In particular, the bill:
But the flaws of CISA are more than just the sum of its problematic provisions. The underlying paradigm of information sharing as a means to “detect and respond” or “detect and prevent” cybersecurity attacks lends itself more to advancing surveillance than to improving the security of the Web or its users. The primary threat we face is not a dearth of information shared with or by the government, but rather is often a lack of proactive, common sense security measures.
Moreover, data collected is data at risk, from the government’s failures to secure its own systems to the abuses revealed by the Snowden revelations. Putting more and more information into the hands of the government puts more user data in danger. Nevertheless, after passing the Senate Select Committee on Intelligence 14-1, CISA is scheduled to move to the full Senate floor imminently. This is a bad step forward for the future of the open Web.
Meanwhile in Canada, the Canadian Parliament is considering an even more concerning bill, C-51, the Anti-Terrorism Act of 2015. C-51 is sweeping in scope, including granting Canadian intelligence agencies CSIS and CSE new authority for offensive online attacks, as well as allowing these agencies to obtain significant amounts of information held by the Canadian government. The open-ended internal information-sharing exceptions contained in the bill erode the relationship between individuals and their government by removing the compartmentalization that allows Canadians to provide the government some of their most private information (for census, tax compliance, health services, and a range of other purposes) and trust that that information will be used for only its original purposes. This compartmentalization, currently a requirement of the Privacy Act, will not exist after Bill C-51 comes into force.
The Bill further empowers CSIS to take unspecified and open-ended “measures,” which may include the overt takedown of websites, attacks on Internet infrastructure, introduction of malware, and more all without any judicial oversight. These kinds of attacks on the integrity and availability of the Web make us all less secure.
We hope that both the Canadian Parliament and the U.S. Congress will take the time to hear from users and experts before pushing any further with C-51 and CISA respectively. Both of these bills emphasize nearly unlimited information sharing, without adequate privacy safeguards, and alarmingly provide support for cyberattacks. This is an approach to cybersecurity that only serves to undermine user trust, threaten the openness of the Web, and reduce the security of the Internet and its users. For these reasons, we strongly oppose both C-51 and CISA.
|
Carsten Book: First overview from the sheriff survey! |
Hi,
thanks for all the replies we got to the Sheriff Survey! If you haven’t already taken part, it’s still online and you can still take part in the survey!
We will close the survey in a few days and I will of course provide a comprehensive overview then, but I felt I could already give a quick overview of what we have so far.
One big takeaway is how important checkin-needed requests are and how many people depend on them. We are very sorry if there are delays with picking up checkin-needed requests, but since it’s a human task it depends on how much is going on with the trees, etc.
But there is work being done on Autoland, like on https://wiki.mozilla.org/Auto-tools/Projects/Autoland
Also, to follow up on two concrete things (which you might already know, or maybe not):
Question: How do I find out why the tree is closed (when we have a tree closure) on Treeherder?
Answer: Just hover over the repo name in Treeherder (for example mozilla-inbound) or click the info button right next to the repo name.
Question: When I land something on, say, mozilla-inbound, it’s a mess to manually copy and paste the hg changeset URL into the bug.
Answer: We have a tool called mcmerge; it’s right next to every push in the drop-down arrow action menu, and despite its name it’s not just for marking merges. During the survey we found out that the name is misleading, so we are trying to find a new one – https://bugzilla.mozilla.org/show_bug.cgi?id=1145836
Thanks,
– Tomcat
https://blog.mozilla.org/tomcat/2015/03/24/first-overview-from-the-sheriff-survey/
|
Jim Chen: Back from leave |
Back in January, I left on a two-month-long leave from Mozilla, in order to do some traveling in China and Japan. Now I'm finally back! I was in China for 1.5 months and in Japan for 2 weeks, and it was amazing! I made a short video highlighting parts of my trip:
Being a mobile developer, I naturally paid some attention to mobile phone usage in China, and how it's different from what I'm used to in the U.S. The cellular infrastructure was impressive. It was fairly cheap, and I was getting full 3G/4G service in small villages and along high-speed rail routes. It seemed like everyone had a smartphone, too. I would see grandmas standing on the side of the road checking their phones.
I never use QR codes in the U.S., but I actually used them quite often in China. For example, you would scan another person's QR code to add them as friends on Wechat. In some places, you could scan a merchant's QR code to pay that merchant using Alipay, a wallet app. Many types of tickets like train tickets and movie tickets also use QR codes over there.
Everyone used Wechat, a messaging app that's “way better than anything else in the U.S.” according to my American friend living in China. It's more than just a messaging app though – you have a “friend circle” that you can post to, a la Facebook; you can also follow “public accounts”, a la Twitter. The app has integrated wallet functionality: I paid for a train ticket and topped up my phone using the app; during Chinese New Year, people were sending each other cash gifts through it.
For some reason, you see a lot of these “all-in-one” apps in China. I used Baidu Maps during my travels, which does maps and navigation. However, you can also call taxis from within the app or hire a “private car”, a la Uber. You can use the app like Yelp to find nearby restaurants by type and reviews. While you're at it, the app lets you find “group buy” discounts to these restaurants, a la Groupon. I have to say it was super convenient. After I came back to the States, I wasn't used to using Google Maps anymore because it didn't do as much.
Of course, on the flip side, these apps probably would be less popular without the Internet censorship that's so prevalent over there. By creating a barrier for foreign companies to enter the Chinese market, it provided opportunities for domestic companies to create and adapt copycat products. I found it amusing that Android is so prevalent in the Chinese smartphone market, but everything Google is blocked. As a result, you have all these third-party markets that may or may not be legitimate. Mobile malware seems to be a much larger issue in China than in the U.S., because people have to find their apps off of random markets/websites. It was strange to see an apps market promising “safe, no malware” with every download link. Also amusingly, every larger app I saw came with its own updater, again because these apps could not count on having a market to provide update service.
Overall, the trip was quite eye-opening, to see China's tremendous development from multiple angles. I loved Japan, too; I felt it was a lot different from both China and the U.S. Maybe I'll write about Japan in another post.
|
David Weir (satdav): Windows Nightly 64-bit test day |
Why not come along to the Windows Nightly 64-bit test day this Saturday, from 9am to 3pm?
PS: we are looking for moderators for the event.
https://etherpad.mozilla.org/testday-20150328
https://satdavmozilla.wordpress.com/2015/03/24/windows-nighly-64-bit-test-day/
|
Smokey Ardisson: What year is it again? |
The other day, my brother asked me to log in to his account on his employer’s “HR system” in order to make him some backup copies of information presented there (his existing copies of which he had needed to provide to his supervisor). On the login screen, I was still slightly shocked to see the following message:
For an optimal experience, we recommend using these browsers: […] Unexpected results may occur when using other browsers.
(If you view the source, you can see that each of the links has an id="ielink_001" attribute—not only incorrect, but perhaps a holdover from the days this particular website “supported” only IE?)
Seriously? It’s 2015 and your website is not only not compatible with any version of Safari, but it is only compatible with versions of Chrome and Firefox that are four versions out-of-date!? (Kudos for supporting versions of IE dating back six years, though!)
I forged ahead, because if the site claimed to work properly in a six-year-old version of Internet Explorer, it surely would work in a current two-year-old version of Safari (the just-released version 6.2.4 on 10.8/Mountain Lion). Nothing I had to look at seemed to look or function incorrectly—until it came time to look for his timesheets. When I clicked on the tab entitled “Timesheets”, a page loaded with no “content” below the row of tabs, except for a link to help me return to the site I was already on. Indeed, unexpected results may occur when using a browser other than the last four versions of IE or versions of Chrome and Firefox four versions out-of-date! Eventually, I realized that the problem was that loading the page was triggering a pop-up window(!?) with the website for the company’s scheduling system, and Safari was (silently) blocking said pop-up.
Allowing pop-ups and forging ahead again, I looked at the scheduling system’s website, and it reminded me of a poor knockoff of the web as rendered by Firebird 0.6 or 0.7 more than a decade ago (eerie, that poorly-rendered, overly-fat Helvetica—perhaps it’s Verdana or Tahoma?—and […]s).
These are websites/systems that are created and installed to be used by every employee of this company, from the convenience of each employee’s personal computing device, not systems that are to be used solely by the HR department on company computers where IT can mandate a certain browser and software combination. This is software whose purpose is to be used by everyone; why is it not designed to be used by everyone—compatible with current versions of the major rendering engines, avoiding unfriendly and abused technologies like pop-ups, and so on?
If the software is intended to be used by everyone (or, generally, people beyond those whose computer configuration you can dictate by supplying said computer) and it’s web-based software (or has a web front-end), then the company (or the company’s software vendor) needs to continually test the software/web front-end with new versions of major rendering engines, making changes (or reporting bugs in the rendering engine) in the unlikely event something breaks, so that they aren’t requiring employees to use six-month-old versions of browsers in order for the corporate software to work properly.
As for the integration between the main HR system and the scheduling system, if the two can’t talk to each other directly behind the scenes, then why not embed the scheduling system into the “Timesheets” tab with an iframe (iframes are already present in some of the other tabs)? If an iframe won’t work for some technical or security reasons, why not include a button on the “Timesheets” tab that the user can click to trigger the pop-up window with the scheduling system, thus escaping the pop-up blocker? It’s not as elegant in some ways as automatically launching, but pop-ups are already not as elegant as showing the data inline (and pop-ups are arguably not elegant at all), and manually-triggered pop-ups are more friendly since the human involved knows he or she is triggering some action and isn’t annoyed by blocked pop-up notifications. You also then get Safari compatibility “for free” without requiring users to change settings (and without having to tell them how to do so). If there are still legitimate reasons not to use a button or link or similar element, at the very least some explanatory text in the “content” section of the “Timesheets” tab is far more useful to anyone than a link to return to the very site you’re already viewing.
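To sketch that suggestion: pop-up blockers generally allow window.open() calls made directly in response to a user gesture, so a manually-triggered button along these lines (the markup and URL here are hypothetical) would open the scheduling system without being silently blocked:
<!-- hypothetical markup inside the “Timesheets” tab -->
<button id="open-scheduling">Open the scheduling system</button>
<script>
// Pop-up blockers permit window.open() calls that run directly
// in response to a user gesture such as this click.
document.getElementById("open-scheduling").addEventListener("click", function () {
  window.open("https://scheduling.example.com/", "scheduling");
});
</script>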
When I encounter software like this, I often wonder how it was built. Was there a user experience or human interface designer as part of the team? Was there any testing? Any quality assurance team involved? Or did some product manager just throw a spec sheet from marketing at the software engineers and tell them, “Not only do you have to write the code to make it do these things, but you have to determine how it’s going to do these things, too.” Or did management decide to ship as-is, perhaps over the objections of team members, in order to meet some deadline?
Design is how things work. Not everyone is a good designer, just like not everyone is a good programmer or tester (they’re not necessarily mutually exclusive, but many times excelling in one field means not learning as much about another), but every good piece of software needs all three skillsets, working in concert, whether in one body or more. Too often, “corporate software” like this seems to be missing one or more of the three, and that’s a shame, because with a little more effort, every interaction with the software could be improved. Then the vendor sells better software, the employees who use the software have a faster, easier experience and can get back to doing what they love and are good at, and the company installing the software can have happier employees. Everyone wins.
http://www.ardisson.org/afkar/2015/03/24/what-year-is-it-again/
|
Michael Kaply: Firefox ESR Only Changes |
There are a few changes that are coming for Firefox that will be major headaches for enterprise, educational, government and other institutional deployments. These include the removal of the distribution/bundles directory as well as the requirement for all add-ons to be signed by Mozilla.
Given that these two changes are not needed for enterprise, there has been some discussion of not putting these changes into the Firefox ESR.
So I'm curious: besides these two changes, what other things do you think should be different between regular Firefox and the Firefox ESR? I'm not talking about creating new features for the ESR, I'm only talking about enabling and/or disabling features.
Put your suggestions in the comments. I'll put mine there as well.
|
Daniel Pocock: The easiest way to run your own OpenID provider? |
A few years ago, I was looking for a quick and easy way to run OpenID on a small web server.
A range of solutions were available but some appeared to be slightly more demanding than what I would like. For example, one solution required a servlet container such as Tomcat and another one required some manual configuration of Python with Apache.
I came across the SimpleID project. As the name implies, it is simple. It is written in PHP and works with the Apache/PHP environment on just about any Linux web server. It allows you to write your own plugin for a user/password database or just use flat files to get up and running quickly with no database at all.
This seemed like the level of simplicity I was hoping for so I created the Debian package of SimpleID. SimpleID is also available in Ubuntu.
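Since the package is in Debian and Ubuntu, trying it out should be a one-line install (assuming the package is named simpleid, matching the project name):
$ sudo apt-get install simpleid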
Thanks to a contribution from Jean-Michel Nirgal Vourgère, I've just whipped up a 0.8.1-14 package that should fix Apache 2.4 support in jessie. I also cleaned up a documentation bug and the control file URLs.
Nonetheless, it may be helpful to get feedback from other members of the community about the future of this package:
One reason I chose SimpleID is because of dynalogin, the two-factor authentication framework. I wanted a quick and easy way to use OTP with OpenID so I created the SimpleID plugin for dynalogin, also available as a package.
I also created the LDAP backend for SimpleID, that is available as a package too.
I tested SimpleID for login to a Drupal account with OpenID support enabled in Drupal, and it worked seamlessly. I've also tested it with a few public web sites that support OpenID.
http://danielpocock.com/the-easiest-way-to-run-your-own-openid
|
Adam Lofting: 2015 Mozilla Foundation Metrics Strategy(ish) & Roadmap(ish) |
I wrote a version of this strategy in January but hadn’t published it as I was trying to remove those ‘ish‘s from the title. But the ‘ish’ is actually a big part of my day-to-day work, so this version embraces the ‘ish’.
These are, ironically, more qualitative than quantitative.
Those are my goals.
In many cases, the ultimate measure of success is when this work is done by the team rather than by me for the team.
Process and culture feed off of and influence each other. Processes must suit the culture being cultivated. A data driven culture can blinker creativity – it doesn’t have to, but it can. And a culture that doesn’t care for data, won’t care for processes related to data. This strategy aims to balance the needs of both.
I tried to write one, but basically this strategy will respond to the roadmaps of each of the MoFo teams.
Plus: supporting teams to implement our data practices, and of course, the unknown unknowns.
…ish
http://feedproxy.google.com/~r/adamlofting/blog/~3/5iKWjibmT5A/
|
Nigel Babu: Dino Cufflinks |
Recently, in a moment of weakness, I placed an order on Etsy for custom cufflinks. I had no idea how they would turn out, so it was a huge leap of faith. I got them the other day and they look gorgeous!
For those of you wondering, I ordered them from LogiCuff. So, when can we get cufflinks on Mozilla Gear? :)
|
Ben Kelly: Service Workers in Firefox Nightly |
I’m pleased to announce that we now recommend normal Nightly builds for testing our implementation of Service Workers. We will not be posting any more custom builds here.
Now that bug 1110814 has landed in mozilla-central, Nightly has roughly the same functionality as the last sw-build. Just enable these preferences in about:config:
Set dom.caches.enabled to true.
Set dom.serviceWorkers.enabled to true.
Please note that on Firefox OS you must enable an additional preference as well. See bug 1125961 for details.
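With those preferences flipped, a quick sanity check from any page is to register a trivial worker and touch the Cache API. A minimal sketch, assuming you serve a (possibly empty) sw.js from your site root; the file name and cache name here are illustrative:
// register a trivial service worker, then store a response in a cache
if ('serviceWorker' in navigator && 'caches' in window) {
  navigator.serviceWorker.register('/sw.js').then(function(registration) {
    console.log('service worker registered, scope:', registration.scope);
    return caches.open('test-cache');
  }).then(function(cache) {
    return cache.put('/hello', new Response('hi'));
  }).catch(function(err) {
    console.error('service worker or cache failure:', err);
  });
} else {
  console.log('Service Worker or Cache API not enabled in this build');
}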
In addition, we’ve decided to move forward with enabling the Service Worker and Cache API preferences by default in non-release builds. We expect the Cache preference to be enabled in the tree today. The Service Worker preference should be enabled within the next week once bug 931249 is complete.
When Nightly merges to Aurora (Developer Edition), these preferences will also be enabled by default there. They will not, however, ride the trains to Beta or Release yet. We feel we need more time stabilizing the implementation before that can occur.
So, unfortunately, I cannot tell you exactly which Firefox release will ship with Service Workers yet. It will definitely not be Firefox 39. It's possible Service Workers will ship in Firefox 40, but it's more likely they will finally be enabled in Firefox 41.
Developer Edition 39, however, will have Cache enabled and will likely also have Service Workers enabled.
Finally, while the code is stabilizing you may see Service Worker registrations and Cache data be deleted when you update the browser. If we find that the data format on disk needs to change, we will simply be resetting the relevant storage area in your profile. Once the decision to ship is made, any future changes will then properly migrate data without any loss. Again, this only affects Service Worker registrations and data stored in Cache.
As always we appreciate your help testing, reporting bugs, and implementing code.
https://blog.wanderview.com/blog/2015/03/24/service-workers-in-firefox-nightly/
|
Gervase Markham: How to Responsibly Publish a Misissued SSL Certificate |
I woke up this morning wanting to write a blog post, then I found that someone else had already written it. Thank you, Andrew.
If you succeed in getting a certificate misissued to you, then that has the opportunity to be a great learning experience for the site, the CA, the CAB Forum, or all three. Testing security is, to my mind, generally a good thing. But publishing the private key turns it from a great learning experience into a browser emergency update situation (at least at the moment, in Firefox, although we are working to make this easier with OneCRL).
Friends don’t publish private keys for certs for friends’ domain names. Don’t be that guy. :-)
http://feedproxy.google.com/~r/HackingForChrist/~3/ojsvaCXTiJo/
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
https://globau.wordpress.com/2015/03/24/happy-bmo-push-day-132/
|
Chris Double: Contributing to Servo |
Servo is a web browser engine written in the Rust programming language. It is being developed by Mozilla. Servo is open source and the project is developed on github.
I was looking for a small project to do some Rust programming, and Servo, being written in Rust, seemed likely to have tasks that were small enough to do in my spare time yet still be useful contributions to the project. This post outlines how I built Servo, found issues to work on, and got them merged.
The Servo README has details on the prerequisites needed. Installing the prerequisites and cloning the repository on Ubuntu was:
$ sudo apt-get install curl freeglut3-dev \
libfreetype6-dev libgl1-mesa-dri libglib2.0-dev xorg-dev \
msttcorefonts gperf g++ cmake python-virtualenv \
libssl-dev libbz2-dev libosmesa6-dev
...
$ git clone https://github.com/servo/servo
The Rust programming language has been fairly volatile in terms of language and library changes. Servo deals with this by requiring a specific git commit of the Rust compiler to build. The Servo source is periodically updated for new Rust versions. The commit id for Rust that is required to build is stored in the rust-snapshot-hash file in the Servo repository.
If the Rust compiler isn’t installed already there are two options for building Servo. The first is to build the required version of Rust yourself, as outlined below. The second is to let the Servo build system, mach, download a binary snapshot and use that. If you wish to do the latter, and it may make things easier when starting out, skip this step to build Rust.
$ cat servo/rust-snapshot-hash
d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf/rustc-1.0.0-dev
$ git clone https://github.com/rust-lang/rust
$ cd rust
$ git checkout -b servo d3c49d2140fc65e8bb7d7cf25bfe74dda6ce5ecf
$ ./configure --prefix=/home/myuser/rust
$ make
$ make install
Note that I configure Rust to be installed in a directory off my home directory. I do this out of preference to enable managing different Rust versions. The build will take a long time and once built you need to add the prefix directories to the PATH:
$ export PATH=$PATH:/home/myuser/rust/bin
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/myuser/rust/lib
There is a configuration file used by the Servo build system to store information on what Rust compiler to use, whether to use a system-wide Cargo (Rust package manager) install, and various paths. This file, .servobuild, should exist in the root of the Servo source that was cloned. There is a sample file that can be used as a template. The values I used were:
[tools]
system-rust = true
system-cargo = false
[build]
android = false
debug-mozjs = false
If you want to use a downloaded binary snapshot of Rust to build Servo, you should set the system-rust setting to false. With it set to true as above, it will expect to find a Rust of the correct version in the path.
Servo uses the mach command line interface that is used to build Firefox. Once the .servobuild is created then Servo can be built with:
$ ./mach build
Servo can be run with:
$ ./mach run http://bluishcoder.co.nz
To run the test suite:
$ ./mach test
The github issue list has three useful labels for finding work. They are:
For my first task I searched for E-easy issues that were not currently assigned (using the C-assigned label). I commented in the issue asking if I could work on it and it was then assigned to me by a Servo maintainer.
Fixing the issue involved:
Raising the pull request runs a couple of automated actions on the Servo repository. The first is an automated response thanking you for the changes followed by a link to the external critic review system.
The Servo project uses the Critic review tool. This will contain data from your pull request and any reviews made by Servo reviewers.
To address reviews I made the required changes and committed them to my local branch as separate commits, using the fixup flag to git commit. This associates the new commit with the original commit that contained the change. It allows easier squashing later.
$ git commit --fixup=<original commit>
The changes are then pushed to the github fork and the previously made pull request is automatically updated. The Critic review tool also automatically picks up the change and will associate the fix with the relevant lines in the review.
With some back and forth the changes get approved and a request might be made to squash the commits. If fixup was used to record the review changes then they will be squashed into the correct commits when you rebase:
$ git fetch origin
$ git rebase --autosquash origin/master
Force pushing this to the fork will result in the pull request being updated. When the reviewer marks this as r+ the merge to master will start automatically, along with a build and test runs. If test failures happen these get added to the pull request and the review process starts again. If tests pass and it merges then it will be closed and the task is done.
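For reference, the force push mentioned above would look something like this; the remote and branch names are illustrative, assuming the github fork is set up as a remote called myfork:
$ git push --force myfork mybranch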
A full overview of the process is available on the github wiki under Github and Critic PR handling 101.
The process overhead of committing to Servo is quite low. There are plenty of small tasks that don’t require a deep knowledge of Rust. The first task I worked on was basically a search/replace. The second was more involved: implementing the view-source protocol and text/plain handling. The latter allows the following to work in Servo:
$ ./mach run view-source:http://bluishcoder.co.nz
$ ./mach run http://cd.pn/plainttext.txt
The main issues I encountered working with Rust and Servo were:
The things I liked:
I hope to contribute more as time permits.
http://bluishcoder.co.nz/2015/03/24/contributing-to-servo.html
|
Air Mozilla: Mozilla Weekly Project Meeting |
The Monday Project Meeting
https://air.mozilla.org/mozilla-weekly-project-meeting-20150323/
|
Dave Townsend: Making communicating with chrome from in-content pages easy |
As Firefox increasingly switches to support running in multiple processes we’ve been finding common problems. Where we can we are designing nice APIs to make solving them easy. One problem is that we often want to run in-content pages like about:newtab and about:home in the child process without privileges, making it safer and less likely to bring down Firefox in the event of a crash. These pages still need to get information from and pass information to the main process though, so we have had to come up with ways to handle that. Often we use custom code in a frame script acting as a middle-man, using things like DOM events to listen for requests from the in-content page and then messaging to the main process.
We recently added a new API to make this problem easier to solve. Instead of needing code in a frame script the RemotePageManager module allows special pages direct access to a message manager to communicate with the main process. This can be useful for any page running in the content area, regardless of whether it needs to be run at low privileges or in the content process since it takes care of listening for documents and hooking up the message listeners for you.
There is a low-level API available but the higher-level API is probably more useful in most cases. If your code wants to interact with a page like about:myaddon, just do this from the main process:
Components.utils.import("resource://gre/modules/RemotePageManager.jsm");
let manager = new RemotePages("about:myaddon");
The manager object is now something resembling a regular process message manager. It has sendAsyncMessage and addMessageListener methods, but unlike the regular e10s message managers it only communicates with about:myaddon pages. Unlike the regular message managers there is no option to send synchronous messages or pass cross-process wrapped objects.
When about:myaddon is loaded it has sendAsyncMessage and addMessageListener functions defined in its global scope for regular JavaScript to call. Anything that can be structured-cloned can be passed between the processes.
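To make the round trip concrete, here is a minimal sketch using the calls described above; the message names and payload are invented for illustration:
// main process: answer requests from matching about:myaddon pages
Components.utils.import("resource://gre/modules/RemotePageManager.jsm");
let manager = new RemotePages("about:myaddon");
manager.addMessageListener("MyAddon:GetSettings", function(message) {
  // reply by sending a message back to the matching page(s)
  manager.sendAsyncMessage("MyAddon:Settings", { enabled: true });
});

// in the about:myaddon page itself: these functions are in the global scope
addMessageListener("MyAddon:Settings", function(message) {
  console.log("enabled:", message.data.enabled);
});
sendAsyncMessage("MyAddon:GetSettings");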
The module documentation has more in-depth examples showing message passing between the page and the main process.
The RemotePageManager module is available in nightlies now and you can see it in action with the simple change I landed to switch about:plugins to run in the content process. For the moment the APIs only support exact URL matching but it would be possible to add support for regular expressions in the future if that turns out to be useful.
http://www.oxymoronical.com/blog/2015/03/Making-communicating-with-chrome-from-in-content-pages-easy
|
Daniel Stenberg: Fixing the Func KB-460 ‘-key |
I use a Func KB-460 keyboard with Nordic layout – that basically means it is a qwerty design with the Nordic keys for “åäö” on the right side as shown on the picture above. (yeah yeah Swedish has those letters fairly prominent in the language, don’t mock me now)
The most annoying part with this keyboard has been that the key repeat on the apostrophe key has been sort of broken. If you pressed it and then another key, it would immediately generate another (or more than one) apostrophe. I’ve sort of learned to work around it with some muscle memory and treating the key with care but it hasn’t been ideal.
Someone told me this problem apparently only happens on Linux (I’ve never used the keyboard on anything else), and what do you know? Here’s how to fix it on a recent Debian machine that happens to run and use systemd, so your mileage will vary if you have something else:
1. Edit the file “/lib/udev/hwdb.d/60-keyboard.hwdb”. It contains keyboard mappings of scan codes to key codes for various keyboards. We will add a special line for a single scan code and for this particular keyboard model only. The line includes the USB vendor and product IDs in uppercase and you can verify that it is correct with lsusb -v and check your own keyboard.
So, add something like this at the end of the file:
# func KB-460
keyboard:usb:v195Dp2030*
KEYBOARD_KEY_70031=reserved
2. Now update the database:
$ udevadm hwdb --update
3. … and finally reload the tweaks:
$ udevadm trigger
4. Now you should have a better working key and life has improved!
With a slightly older Debian without systemd, here are instructions I got that I have not tested myself but include here for the world:
1. Find the relevant input for the device by “cat /proc/bus/input/devices”
2. Make a very simple keymap. Make a file with only a single line like this:
$ cat /lib/udev/keymaps/func
0x70031 reserved
3. Map the key with ‘keymap’:
$ sudo /lib/udev/keymap -i /dev/input/eventX /lib/udev/keymaps/func
where X is the event number you figured out in step 1.
The related kernel issue.
http://daniel.haxx.se/blog/2015/03/23/fixing-the-func-kb-460-key/
|
Pierros Papadeas: Multiple emails on mozillians.org |
|
Cameron Kaiser: Pwn2Own this Power Mac (plus: IonPower's time is running out) |
However, the two holes used for this year's marvelous and somewhat alarming crack are not exploitable in TenFourFox directly: the SVG navigation fault cannot be effectively used to escalate privileges in TenFourFox's default configuration, and we don't even build the code that uses JavaScript bounds checking. The navigation fault may have other weaponizable vectors and we do want to roll that fix, but the good news is 31.6 will come out this weekend, so no urgent chemspill is necessary unless I discover some odd way of busting through it between now and then.
I lost about a week of hacking time to one of my occasional bouts of bronchitis, which is pushing IonPower's timing very close to the wire. We need two cycles for 38 to allow localizers to catch up and people to test, and of course somewhere in that timeframe we also have to finish the move from Eric Schmidt is a Poopypants Land Google Code. Over the weekend I got IonPower to pass the test suite in Baseline mode, which is very encouraging, but some of the same problems that doomed PPCBC's Ion work are starting to rear up again.
The biggest problem that recurred is an old one: Ion's allocator is not endian-safe. I get bad indices off it for stack slots and other in-memory boxed values and all but the simplest scripts either assert deep within Ion's bowels (not our new PowerPC backend) or generate code that is verifiably wrong. Unfortunately, Mozilla doesn't really document Ion's guts anywhere, so I don't know where to start with fixing it, and all the extant Ion backends, even MIPS, are little-endian. Maybe some Mozilla JIT folks are reading and can comment? (See also the post in the JavaScript engine internals group.)
One old problem with bad bailout stack frames, however, is partially solved with IonPower. I say partially because even though the stack frame is sane now, it still crashes, but I have a few ideas about that. However, the acid test is getting Ion code to jump to Baseline, run a bit in Baseline, and then jump back to Ion to finish execution. PPCBC could never manage this without crashing. If IonPower can do no better, there is no point in continuing the porting effort.
Even if this effort runs aground again, that doesn't make IonPower useless. PPCBC may pass the test suite, but some reproducible bugs in Tenderapp indicate that it goes awry in certain extremely-tough-to-debug edge cases, and IonPower (in Baseline mode) does pass the test suite as well now. If I can get IonPower to be as fast or faster than PPCBC even if it can't execute Ion code either, we might ship it anyway as "PPCBC II" in Baseline-only mode to see if it fixes those problems -- I have higher confidence that it will, because it generates much more sane and "correct" output and doesn't rely on the hacks and fragile glue code that PPCBC does in 24 and 31. I have to make this decision sometime mid-April, though, because we're fast approaching EOL for 31.
Also, as of Firefox 38 Mozilla no longer supports gcc 4.6, the compiler which we build with. However, I'm not interested in forcing a compiler change so close to the next ESR, and it appears that we should still be able to get it working on 4.6 with some minor adjustments. That won't be the case for Fx39, if we're even going to bother with that, but fortunately there is a gcc 4.8 in MacPorts and we might even use Sevan's gcc from pkgsrc. Again, the decision to continue will be based on feasibility and how close Electrolysis is to becoming mandatory before 45ESR, which is the next jump after that. For now, TenFourFox 38 is the highest priority.
http://tenfourfox.blogspot.com/2015/03/pwn2own-this-power-mac-plus-ionpowers.html
|