Wladimir Palant: Various RememBear security issues |
Whenever I write about security issues in some password manager, people will ask what I think about their tool of choice. And occasionally I’ll take a closer look at the tool, which is what I did with the RememBear password manager in April. Technically, it is very similar to its competitor 1Password, to the point that the developers are being accused of plagiarism. Security-wise the tool doesn’t appear to be as advanced however, and I quickly found six issues (of varying severity) which have all been fixed since. I also couldn’t help noticing a bogus security mechanism, something that I already wrote about.
Password managers will often give special powers to “their” website. This is generally an issue, because compromising this website (e.g. via an all too common XSS vulnerability) will give attackers access to this functionality. In the case of RememBear, things turned out to be easier however. The following function was responsible for recognizing privileged websites:
isRememBearWebsite() {
    let remembearSites = this.getRememBearWebsites();
    let url = window.getOriginUrl();
    let foundSite = remembearSites.firstOrDefault(allowed => url.indexOf(allowed) === 0, undefined);
    if (foundSite) {
        return true;
    }
    return false;
}
We’ll get back to window.getOriginUrl() later, as it doesn’t actually produce the expected result. But the important detail here: the resulting URL is compared against some whitelisted origins by checking whether it starts with an origin like https://remembear.com. No, I didn’t forget the slash at the end here, there really is none. So this code will accept https://remembear.com.malicious.info/ as a trusted website!
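A stricter check would compare complete origins rather than URL prefixes. Here is a minimal sketch of that idea (my illustration, not RememBear’s actual fix; the trusted origin list is made up):
const TRUSTED_ORIGINS = new Set(["https://remembear.com"]);

function isTrustedWebsite(url) {
  try {
    // URL.origin is "scheme://host[:port]", so a host like
    // remembear.com.malicious.info can no longer match.
    return TRUSTED_ORIGINS.has(new URL(url).origin);
  } catch (e) {
    return false; // not a parseable absolute URL
  }
}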
Luckily, the consequences aren’t as severe as with similar LastPass issues for example. This would only give the attacker’s website access to the RememBear login token. That token will automatically log you into the user’s RememBear account, which however cannot be used to access password data. It will “merely” allow the attacker to manage the user’s subscription, with the most drastic available action being deleting the account along with all password data.
AutoFill functionality is another typical area where password manager security issues are found. RememBear requires a user action to activate AutoFill, which is an important preventive measure. Also, the AutoFill user interface is displayed by the native RememBear application, so websites have no way of messing with it. I found multiple other aspects of this functionality to be exploitable however.
Most importantly, RememBear would not verify that it filled in credentials on the right website (a recent regression according to the developers). Given that considerable time can pass between the user clicking the bear icon to display the AutoFill user interface and the user actually selecting a password to be filled in, one cannot really expect that the browser tab is still displaying the same website. RememBear will happily continue filling in the password however, not recognizing that it doesn’t belong to the current website.
Worse yet, RememBear will try to fill out passwords in all frames of a tab. So if https://malicious.com embeds a frame from https://mybank.com and the user triggers AutoFill on the latter, https://malicious.com will potentially receive the password as well (e.g. via a hidden form). Or even less obvious: if you go to https://shop.com and that site has third-party frames, e.g. for advertising, these frames will be able to intercept any of your filled in passwords.
One point on my list of common AutoFill issues is: the domain name is not “the last two parts of a host name.” At first glance, RememBear appears to have done this correctly by using Mozilla’s Public Suffix List. So it knows in particular that the relevant part of foo.bar.example.co.uk is example.co.uk and not co.uk. On closer inspection, there are considerable issues in the C#-based implementation however.
For example, there is some rather bogus logic in the CheckPublicTLDs() function, and I’m not even sure what this code is trying to accomplish. You will only get into this function for multi-part public suffixes where one of the parts has more than 3 characters – pilot.aero for example. The code will correctly recognize example.pilot.aero as the relevant part of the foo.bar.example.pilot.aero host name, but it will come to the same conclusion for pilot.aeroexample.pilot.aero as well. Since domains are registered under the pilot.aero namespace, the two host names here actually belong to unrelated domains, so this bug allows one of them to steal credentials for the other.
The other issue is that the syntax of the Public Suffix List is processed incorrectly. This results for example in the algorithm assuming that example.asia.np and malicious.asia.np belong to the same domain, so that credentials will be shared between the two. With asia.np being the public suffix here, these host names are unrelated however.
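To make the intended behavior concrete, here is a sketch of the comparison the Public Suffix List is supposed to enable (my illustration; getPublicSuffix() is a hypothetical stand-in for a correct list lookup):
function registrableDomain(host) {
  let suffix = getPublicSuffix(host);               // e.g. "co.uk" or "pilot.aero"
  let prefix = host.slice(0, -(suffix.length + 1)); // host without ".suffix"
  let label = prefix.split(".").pop();              // label directly left of the suffix
  return label ? label + "." + suffix : suffix;
}

// Credentials should only be shared when the registrable domains match exactly:
// registrableDomain("foo.bar.example.co.uk")        -> "example.co.uk"
// registrableDomain("example.pilot.aero")           -> "example.pilot.aero"
// registrableDomain("pilot.aeroexample.pilot.aero") -> "aeroexample.pilot.aero"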
When you enter a password on some site, RememBear will offer to save it – fairly common functionality. However, this will fail spectacularly under some circumstances, and that’s partially due to the already mentioned window.getOriginUrl() function which is implemented as follows:
if (window.location.ancestorOrigins != undefined
        && window.location.ancestorOrigins.length > 0) {
    return window.location.ancestorOrigins[0];
}
else {
    return window.location.href;
}
Don’t know what window.location.ancestorOrigins does? I didn’t know either, it being a barely documented Chrome/Safari feature which undermines referrer policy protection. It contains the list of origins of the parent frames, so this function will return the origin of the parent frame if there is one – the URL of the current document is completely ignored.
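To illustrate the pitfall, with assumed values for a document at https://mybank.com that is embedded in a frame by https://evil.com (in Chrome):
window.location.href;               // "https://mybank.com/login" (the current document)
window.location.ancestorOrigins[0]; // "https://evil.com" (the embedding parent frame)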
While AutoFill doesn’t use window.getOriginUrl(), saving passwords does. So if in Chrome https://evil.com embeds a frame from https://mybank.com and the user logs into the latter, RememBear will offer to save the password. But instead of saving that password for https://mybank.com, it will store it for https://evil.com. And https://evil.com will be able to retrieve the password later if the user triggers AutoFill functionality on their site. But at least there will be some warning flags for the user along the way…
There was one more issue: the hostFromString() function used to extract the host name from a URL when saving passwords was using a custom URL parser. It wouldn’t know how to deal with “unusual” URL schemes, so for data:text/html,foo/example.com:// or about:blank#://example.com it would return example.com as the host name. Luckily for RememBear, its content scripts wouldn’t run on any of these URLs, at least in Chrome. In their old (and already phased out) Safari extension this likely was an issue and would have allowed websites to save passwords under an arbitrary website name. The developers fixed the hostFromString() function regardless.
https://palant.de/2019/07/08/various-remembear-security-issues/
|
Cameron Kaiser: TenFourFox FPR15 available |
Also, we now have Korean and Turkish language packs available for testing. If you want to give these a spin, download them here; the plan is to have them go-live at the same time as FPR15. Thanks again to new contributor Tae-Woong Se and, of course, to Chris Trusch as always for organizing localizations and doing the grunt work of turning them into installers.
Not much work will occur on the browser for the next week or so due to family commitments and a couple out-of-town trips, but I'll be looking at a few new things for FPR16, including some minor potential performance improvements and a font subsystem upgrade. There's still the issue of our outstanding JavaScript deficiencies as well, of course. More about that later.
http://tenfourfox.blogspot.com/2019/07/tenfourfox-fpr15-available.html
|
About:Community: Firefox 68 new contributors |
With the release of Firefox 68, we are pleased to welcome the 55 developers who contributed their first code change to Firefox in this release, 49 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:
https://blog.mozilla.org/community/2019/07/05/firefox-68-new-contributors/
|
The Mozilla Blog: Mozilla’s Latest Research Grants: Prioritizing Research for the Internet |
We are very happy to announce the results of our Mozilla Research Grants for the first half of 2019. This was an extremely competitive process, and we selected proposals which address twelve strategic priorities for the internet and for Mozilla. This includes researching better support for integrating Tor in the browser, improving scientific notebooks, using speech on mobile phones in India, and alternatives to advertising for funding the internet. The Mozilla Research Grants program is part of our commitment to being a world-class example of using inclusive innovation to impact culture, and reflects Mozilla’s commitment to open innovation.
We will open a new round of grants in Fall of 2019. See our Research Grant webpage for more details and to sign up to be notified when applications open.
Lead Researchers | Institution | Project Title
---|---|---
Valentin Churavy | MIT | Bringing Julia to the Browser |
Jessica Outlaw | Concordia University of Portland | Studying the Unique Social and Spatial affordances of Hubs by Mozilla for Remote Participation in Live Events |
Neha Kumar | Georgia Tech | Missing Data: Health on the Internet for Internet Health |
Piotr Sapiezynski, Alan Mislove, & Aleksandra Korolova | Northeastern University & University of Southern California | Understanding the impact of ad preference controls |
Sumandro Chattapadhyay | The Centre for Internet and Society (CIS), India | Making Voices Heard: Privacy, Inclusivity, and Accessibility of Voice Interfaces in India |
Weihang Wang | State University of New York | Designing Access Control Interfaces for Wasmtime |
Bernease Herman | University of Washington | Toward generalizable methods for measuring bias in crowdsourced speech datasets and validation processes |
David Karger | MIT | Tipsy: A Decentralized Open Standard for a Microdonation-Supported Web |
Linhai Song | Pennsylvania State University | Benchmarking Generic Functions in Rust |
Leigh Clark | University College Dublin | Creating a trustworthy model for always-listening voice interfaces |
Steven Wu | University of Minnesota | DP-Fathom: Private, Accurate, and Communication-Efficient |
Nikita Borisov | University of Illinois, Urbana-Champaign | Performance and Anonymity of HTTP/2 and HTTP/3 in Tor |
Many thanks to everyone who submitted a proposal and helped us publicize these grants.
The post Mozilla’s Latest Research Grants: Prioritizing Research for the Internet appeared first on The Mozilla Blog.
|
Mozilla GFX: moz://gfx newsletter #46 |
Hi there! As previously announced WebRender has made it to the stable channel and a couple of million users are now using it without having opted into it manually. With this important milestone behind us, now is a good time to widen the scope of the newsletter and give credit to other projects being worked on by members of the graphics team.
The WebRender newsletter therefore becomes the gfx newsletter. This is still far from an exhaustive list of the work done by the team, just a few highlights in WebRender and graphics in general. I am hoping to keep the pace around a post per month, we’ll see where things go from there.
Botond has been working on desktop zooming.
Jeff Gilbert has been working on power preference options for WebGL.
WebGL has three power preference options available during canvas context creation: “default”, “low-power”, and “high-performance”.
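A page selects one of these hints when it creates its context, roughly like this (a generic WebGL sketch, not Firefox code):
const canvas = document.createElement("canvas");
// Hint that this context does not need the fast, power-hungry GPU.
const gl = canvas.getContext("webgl", { powerPreference: "low-power" });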
The vast majority of web content implicitly requests “default”. Since we don’t want to regress web content performance, we usually treat “default” like “high-performance”. On macOS with multiple GPUs (MacBook Pro), this means activating the power-hungry dedicated GPU for almost all WebGL contexts. While this keeps our performance high, it also means every last one-off or transient WebGL context from ads, trackers, and fingerprinters will keep the high-power GPU running until they are garbage-collected.
In bug 1562812, Jeff added a ramp-up/ramp-down behavior for “default”: For the first couple seconds after WebGL context creation, things stay on the low-power GPU. After that grace period, if the context continues to render frames for presentation, we migrate to the high-power GPU. Then, if the context stops producing frames for a couple seconds, we release our lock of the high-power GPU, to try to migrate back to the low-power GPU.
What this means is that active WebGL content should fairly quickly end up ramped-up onto the high-power GPU, but inactive and orphaned WebGL contexts won’t keep the browser locked on to the high-power GPU anymore, which means better battery life for users on these machines as they wander the web.
Miko, Matt, Timothy and Dan have worked on improving display list build times
WebRender is a GPU-based 2D rendering engine for the web written in Rust, currently powering Firefox’s rendering engine as well as the research web browser Servo.
Glenn, Jeff Muizelaar and Jeff Gilbert are investigating running WebRender on top of SwiftShader or llvmpipe when the available GPU feature set is too small or too buggy. The hope is that these emulation layers will help us quickly migrate users that can’t have hardware acceleration to WebRender with a minimal amount of specific code (most likely some simple specialized blitting routines in a few hot spots where the emulation layer is unlikely to provide optimal speed).
It’s too early to tell at this point whether this experiment will pan out. We can see some (expected) regressions compared to the non-webrender software backend, but with some amount of optimization it’s probable that performance will get close enough and provide an acceptable user experience.
This investigation is important since it will determine how much code we have to maintain specifically for non-accelerated users, a proportion of users that will decrease over time but will probably never quite get to zero.
Nical spent some time in the last quarter familiarizing himself with pathfinder’s internals and investigating its viability for rendering SVG paths in WebRender/Firefox. He wrote a blog post explaining in detail how pathfinder fills paths on the GPU.
The outcome of this investigation is that we are optimistic about pathfinder’s approach and think it’s a good fit for integration in Firefox. In particular the approach is flexible and robust, and will let us improve or replace parts in the longer run.
Nical is also experimenting with a different tiling algorithm, aiming at reducing the CPU overhead.
The next steps are to prototype an integration of pathfinder 3 in WebRender, start using it for simple primitives such as CSS clip paths and gradually use it with more primitives.
Glenn continues his work on picture caching. At the moment WebRender only triggers the optimization for a single scroll root. Glenn has landed a lot of infrastructure work to be able to benefit from picture caching at a more granular level (for example one per scroll-root), and pave the way for rendering picture cached slices directly into OS compositor surfaces.
The next steps are, roughly:
Thanks to the continuous efforts of Jamie and Kats, WebRender is starting to be pretty solid on Android. GeckoView is required to enable WebRender, so it isn’t possible to enable it now in Firefox for Android, but we plan to make the option available in the Firefox Preview browser (which is powered by GeckoView) once it has nightly builds.
https://mozillagfx.wordpress.com/2019/07/05/moz-gfx-newsletter-46/
|
Mozilla Reps Community: Rep of the Month – June 2019 |
Please join us in congratulating Pranshu Khanna, Rep of the Month for June 2019!
Pranshu is from Surat, Gujarat, India. His journey started with a Connected Devices workshop in 2016, since then he’s been a super active contributor and a proud Mozillian. He joined the Reps Program in March 2019 and has been instrumental ever since.
In addition to that, he’s been one of the most active Reps in his region since he joined the program. He has worked to get his local community, Mozilla Gujarat, to meet very regularly and contribute to Common Voice, BugHunter, Localization, SUMO, A-Frame, Add-ons, and Open Source Contribution. He’s an active contributor and a maintainer of the OpenDesign Repository and frequently contributes to the Mozilla India & Mozilla Gujarat Social Channels.
Congratulations and keep rocking the open web!
https://blog.mozilla.org/mozillareps/2019/07/05/rep-of-the-month-june-2019/
|
Mozilla Localization (L10N): L10n report: July edition |
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
New localizers
Are you a locale leader and want us to include new members in our upcoming reports? Contact us!
As usual, let’s start with some important dates:
Firefox 69 has been a lightweight release in terms of new strings, with most of the changes coming from Developer Tools. Expect a lot more content in Firefox 70, with particular focus on the Password Manager – integrating most of the Lockwise add-on features directly into Firefox – and work around Tracking Protection and Security, with brand new about pages.
Since our last report, we’ve shipped the first release of Firefox Preview (Fenix) in 11 languages (including en-US). The next upcoming step will be to open up the project to more locales. If you are interested, make sure to follow closely the dev.l10n mailing list this week. And congratulations to the teams that helped make this a successful localized first release!
On another note, the Firefox for iOS deadline for l10n work on v18 just passed this week, and we’re adding two new locales: Finnish and Gujarati. Congratulations to the teams!
The current policy per the branding and legal teams is that trademarked names should be left unchanged. Unless indicated in the comments, brand names should be left in English; they cannot be transliterated or declined. If you have any doubt, ask through the available communication channels.
Product names such as Lockwise, Monitor, Send, and Screenshot, to name a few, used alone or with Firefox, should be left unchanged. Pocket is a trademark and must remain as is too. Locale managers, please inform the rest of your communities of this policy, especially new contributors. Search in Pontoon or Transvision for these names and revert them to English.
The recently added new pages contain marketing language to promote all the Firefox products. Localize them and start spreading the word on social media and other platforms. The deadline is indicative.
The Firefox marketing guides are living documents and will be updated as needed.
Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.
https://blog.mozilla.org/l10n/2019/07/04/l10n-report-july-edition/
|
The Rust Programming Language Blog: Announcing Rust 1.36.0 |
The Rust team is happy to announce a new version of Rust, 1.36.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.36.0 is as easy as:
$ rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.36.0 on GitHub.
This release brings many changes, including the stabilization of the Future trait, the alloc crate, the MaybeUninit<T> type, NLL for Rust 2015, a new HashMap<K, V> implementation, and --offline support in Cargo.
Read on for a few highlights, or see the detailed release notes for additional information.
The Future is here!
In Rust 1.36.0 the long awaited Future trait has been stabilized!
With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future.
The alloc crate is stable
Before 1.36.0, the standard library consisted of the crates std, core, and proc_macro. The core crate provided core functionality such as Iterator and Copy and could be used in #![no_std] environments since it did not impose any requirements. Meanwhile, the std crate provided types like Box and OS functionality but required a global allocator and other OS capabilities in return.
Starting with Rust 1.36.0, the parts of std that depend on a global allocator, e.g. Vec<T>, are now available in the alloc crate. The std crate then re-exports these parts. While #![no_std] binaries using alloc still require nightly Rust, #![no_std] library crates can use the alloc crate in stable Rust. Meanwhile, normal binaries, without #![no_std], can depend on such library crates. We hope this will facilitate the development of a #![no_std] compatible ecosystem of libraries prior to stabilizing support for #![no_std] binaries using alloc.
If you are the maintainer of a library that only relies on some allocation primitives to function, consider making your library #![no_std] compatible by using the following at the top of your lib.rs file:
#![no_std]
extern crate alloc;
use alloc::vec::Vec;
MaybeUninit<T> instead of mem::uninitialized
In previous releases of Rust, the mem::uninitialized function has allowed you to bypass Rust's initialization checks by pretending that you've initialized a value at type T without doing anything. One of the main uses of this function has been to lazily allocate arrays.
However, mem::uninitialized is an incredibly dangerous operation that essentially cannot be used correctly as the Rust compiler assumes that values are properly initialized. For example, calling mem::uninitialized::<bool>() causes instantaneous undefined behavior as, from Rust's point of view, the uninitialized bits are neither 0 (for false) nor 1 (for true) - the only two allowed bit patterns for bool.
To remedy this situation, in Rust 1.36.0, the type MaybeUninit<T> has been stabilized. The Rust compiler will understand that it should not assume that a MaybeUninit<T> is a properly initialized T. Therefore, you can do gradual initialization more safely and eventually use .assume_init() once you are certain that maybe_t: MaybeUninit<T> contains an initialized T.
As MaybeUninit<T> is the safer alternative, starting with Rust 1.38, the function mem::uninitialized will be deprecated.
To find out more about uninitialized memory, mem::uninitialized, and MaybeUninit<T>, read Alexis Beingessner's blog post. The standard library also contains extensive documentation about MaybeUninit<T>.
In the announcement for Rust 1.31.0, we told you about NLL (Non-Lexical Lifetimes), an improvement to the language that makes the borrow checker smarter and more user friendly. For example, you may now write:
fn main() {
let mut x = 5;
let y = &x;
let z = &mut x; // This was not allowed before 1.31.0.
}
In 1.31.0 NLL was stabilized only for Rust 2018, with a promise that we would backport it to Rust 2015 as well. With Rust 1.36.0, we are happy to announce that we have done so! NLL is now available for Rust 2015.
With NLL on both editions, we are closer to removing the old borrow checker. However, the old borrow checker unfortunately accepted some unsound code it should not have. As a result, NLL is currently in a "migration mode" wherein we will emit warnings instead of errors if the NLL borrow checker rejects code the old AST borrow checker would accept. Please see this list of public crates that are affected.
To find out more about NLL, MIR, the story around fixing soundness holes, and what you can do about the warnings if you have them, read Felix Klock's blog post.
A new HashMap<K, V> implementation
In Rust 1.36.0, the HashMap<K, V> implementation has been replaced with the one in the hashbrown crate which is based on the SwissTable design. While the interface is the same, the HashMap<K, V> implementation is now faster on average and has lower memory overhead. Note that unlike the hashbrown crate, the implementation in std still defaults to the SipHash 1-3 hashing algorithm.
--offline support in Cargo
During most builds, Cargo doesn't interact with the network. Sometimes, however, Cargo has to. Such is the case when a dependency is added and the latest compatible version needs to be downloaded. At times, network access is not an option though, for example on an airplane or in isolated build environments.
In Rust 1.36, a new Cargo flag has been stabilized: --offline. The flag alters Cargo's dependency resolution algorithm to only use locally cached dependencies. When the required crates are not available offline, and a network access would be required, Cargo will exit with an error. To prepopulate the local cache in preparation for going offline, use the cargo fetch command, which downloads all the required dependencies for a project.
To find out more about --offline and cargo fetch, read Nick Cameron's blog post.
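For example, a typical workflow combines the two (the build invocation below is just an illustration):
$ cargo fetch           # while online: download and cache all dependencies
$ cargo build --offline # later: build using only the local cache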
For information on other changes to Cargo, see the detailed release notes.
The dbg! macro now supports multiple arguments.
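For example (a small sketch of my own, not from the release notes):
fn main() {
    let a = 2;
    let b = 3;
    // Prints the file, line, and each expression together with its value.
    dbg!(a, a * b);
}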
Additionally, a number of APIs have been made const:
New APIs have become stable, including:
task::Waker
and task::Poll
VecDeque::rotate_left
and VecDeque::rotate_right
Read::read_vectored
and Write::write_vectored
Iterator::copied
BorrowMut<str> for String
str::as_mut_ptr
Other library changes are available in the detailed release notes.
Detailed 1.36.0 release notes are available for Rust, Cargo, and Clippy.
Many people came together to create Rust 1.36.0. We couldn't have done it without all of you. Thanks!
|
Will Kahn-Greene: Crash pings (Telemetry) and crash reports (Socorro/Crash Stats) |
I keep getting asked questions that stem from confusion about crash pings and crash reports, the details of where they come from, differences between the two data sets, what each is currently good for, and possible future directions for work on both. I figured I'd write it all down.
This is a brain dump and sort of a blog post and possibly not a good version of either. I desperately wished it was more formal and mind-blowing like something written by Chutten or Alessio.
It's likely that this is 90% true today but as time goes on, things will change and it may be horribly wrong depending on how far in the future you're reading this. As I find out things are wrong, I'll keep notes. Any errors are my own.
We (Mozilla) have two different data sets for crashes: crash pings in Telemetry and crash reports in Socorro/Crash Stats. When Firefox crashes, the crash reporter collects information about the crash and this results in crash ping and crash report data. From there, the two different data things travel two different paths and end up in two different systems.
This blog post covers these two different journeys, their destinations, and the resulting properties of both data sets.
This blog post specifically talks about Firefox and not other products which have different crash reporting stories.
Firefox crashes. It happens.
The crash reporter kicks in. It uses the Breakpad library to collect data about the crashed process and package it up into a minidump. The minidump has information about the registers, what's in memory, the stack of the crashing thread, stacks of other threads, what modules are in memory, and so on.
Additionally, the crash reporter collects a set of annotations for the crash. Annotations like ProductName, Version, ReleaseChannel, BuildID and others help us group crashes for the same product and build.
The crash reporter assembles the portions of the crash report that don't have personally identifiable information (PII) in them into a crash ping. It uses minidump-analyzer to unwind the stack. The crash ping with this stack is sent via the crash ping sender to Telemetry.
If Telemetry is disabled, then the crash ping will not get sent to Telemetry.
The crash reporter will show a crash report dialog informing the user that Firefox crashed. The crash report dialog includes a comments field for additional data about the crash and an email field. The user can choose to send the crash report or not.
If the user chooses to send the crash report, then the crash report is sent via HTTP POST to the collector for the crash ingestion pipeline. The entire crash ingestion pipeline is called Socorro. The website part is called Crash Stats.
If the user chooses not to send the crash report, then the crash report never goes to Socorro.
If Firefox is unable to send the crash report, it keeps it on disk. It might ask the user to try to send it again later. The user can access about:crashes and send it explicitly.
Before we get too much further, let's talk about symbolication.
minidump-whatever will walk the stack starting with the top-most frame. It uses frame information to find the caller frame and works backwards to produce a list of frames. It also includes a list of modules that are in memory.
For example, part of the crash ping might look like this:
"modules": [ ... { "debug_file": "xul.pdb", "base_addr": "0x7fecca50000", "version": "69.0.0.7091", "debug_id": "4E1555BE725E9E5C4C4C44205044422E1", "filename": "xul.dll", "end_addr": "0x7fed32a9000", "code_id": "5CF2591C6859000" }, ... ], "threads": [ { "frames": [ { "trust": "context", "module_index": 8, "ip": "0x7feccfc3337" }, { "trust": "cfi", "module_index": 8, "ip": "0x7feccfb0c8f" }, { "trust": "cfi", "module_index": 8, "ip": "0x7feccfae0af" }, { "trust": "cfi", "module_index": 8, "ip": "0x7feccfae1be" }, ...
The "ip" is an instruction pointer.
The "module_index" refers to another list of modules that were all in memory at the time.
The "trust" refers to how the stack unwinding figured out how to unwind that frame. Sometimes it doesn't have enough information and it does an educated guess.
Symbolication takes the module name, the module debug id, and the offset and looks it up with the symbols it knows about. So for the first frame above, it would take the instruction pointer 0x7feccfc3337, subtract the base address 0x7fecca50000 of module 8 (xul.dll) to get the offset 0x573337, and look that offset up in the symbols for xul.pdb with debug id 4E1555BE725E9E5C4C4C44205044422E1.
Symbolication does that for every frame and then we end up with a helpful symbolicated stack.
Tecken has a symbolication API which takes the module and stack information in a minimal form and symbolicates using symbols it manages.
It takes a form like this:
{ "memoryMap": [ [ "xul.pdb", "44E4EC8C2F41492B9369D6B9A059577C2" ], [ "wntdll.pdb", "D74F79EB1F8D4A45ABCD2F476CCABACC2" ] ], "stacks": [ [ [0, 11723767], [1, 65802] ] ] }
This has two data structures. The first is a list of (module name, module debug id) tuples. The second is a list of (module id, memory offset) tuples.
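As an illustration, a request of that shape could be sent from Python roughly like this (a sketch: the endpoint URL is my assumption, not something stated in this post):
import json
from urllib.request import Request, urlopen

payload = {
    "memoryMap": [
        ["xul.pdb", "44E4EC8C2F41492B9369D6B9A059577C2"],
        ["wntdll.pdb", "D74F79EB1F8D4A45ABCD2F476CCABACC2"],
    ],
    "stacks": [[[0, 11723767], [1, 65802]]],
}
# Assumed endpoint for Tecken's symbolication API.
req = Request(
    "https://symbols.mozilla.org/symbolicate/v5",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.load(resp))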
Socorro has a signature generator that goes through the stack, normalizes the frames so that frames look the same across platforms, and then uses that to generate a "signature" for the crash that suggests a common cause for all the crash reports with that signature.
It's a fragile and finicky process. It works well for some things and poorly for others. There are other ways to generate signatures. This is the one that Socorro currently uses. We're constantly honing it.
I export Socorro's signature generation system as a Python library called siggen.
For examples of stacks -> signatures, look at crash reports on Crash Stats.
Crash pings are only sent if Telemetry is enabled in Firefox.
The crash ping contains the stack for the crash, but little else about the crashed process. No register data, no memory data, nothing found on the heap.
The stack is unwound by minidump-analyzer in the client on the user's machine. Because of that, driver information can be used during unwinding, so for some kinds of crashes, we may get a better stack than crash reports in Socorro.
Stacks in crash pings are not symbolicated.
There's an error aggregates data set generated from the crash pings which is used by Mission Control.
Socorro does not get crash reports if the user chooses not to send a crash report.
Socorro collector discards crash reports for unsupported products.
Socorro collector throttles incoming crash reports for Firefox release channel--it accepts 10% of those for processing and rejects the other 90%.
The Socorro processor runs minidump-stackwalk on the minidump which unwinds the stack. Then it symbolicates the stack using symbols uploaded during the build process to symbols.mozilla.org.
If we don't have symbols for modules, minidump-stackwalk will guess at the unwinding. This can work poorly for crashes that involve drivers and system libraries we don't have symbols for.
Because of the above, there are big differences in collection of crash data between the two systems and what you can do with it.
Because crash ping data doesn't require explicit consent by users on a crash-by-crash basis and crash pings are sent using the Telemetry infrastructure which is pretty resilient to network issues and other problems, crash ping data in Telemetry is likely more representative of crashes happening for our users.
Crash report data in Socorro is limited to what users explicitly send us. Further, there are cases where Firefox isn't able to run the crash reporter dialog to ask the user.
For example, on Friday, June 28th, 2019 for Firefox release channel:
A crash report can have a different stack in the crash ping than in the crash report.
Crash ping data in Telemetry is unwound in the client. On Windows, minidump-analyzer can access CFI unwinding data, so the stacks can be better especially in cases where the stack contains system libraries and drivers.
We haven't implemented this yet on non-Windows platforms.
Crash report data in Socorro is unwound by the Socorro processor and is heavily dependent on what symbols we have available. It doesn't do a good job with unwinding through drivers and we often don't have symbols for Linux system libraries.
Gabriele says sometimes stacks are unwound better for crashes on MacOS and Linux platforms than what the crash ping contains.
Crash ping data is not symbolicated and we're not generating Socorro-style signatures, so it's hard to bucket them and see change in crash rates for specific crashes.
There's an fx-crash-sig Python library which has code to symbolicate crash ping stacks and generate a Socorro-style signature from that stack. This is helpful for one-off analysis but this is not a long-term solution.
Crash report data in Socorro is symbolicated and has Socorro-style signatures.
The consequence of this is that in Telemetry, we can look at crash rates for builds, but can't look at crash rates for specific kinds of crashes as bucketed by signatures.
The Signature Report and Top Crashers Report in Crash Stats can't be implemented in Telemetry (yet).
Telemetry has better tooling for analyzing crash ping data.
Crash ping data drives Mission Control.
Socorro's tooling is limited to Supersearch web ui and API which is ok at some things and not great at others. I've heard some people really like the Supersearch web ui.
There are parts of the crash report which are not searchable. For example, it's not possible to search for crash reports where a certain module is in the stack. Socorro has a signature report and a topcrashers page which help, but they're not flexible for answering questions outside of what we've explicitly coded them for.
Socorro sends a copy of processed crash reports to Telemetry and this is in the "socorro_crash" dataset.
Telemetry crash ping data does not contain PII. It is not publicly available, except in aggregate via Mission Control.
Socorro crash report data contains PII. A subset of the crash data is available to view and search by anyone. The PII data is restricted to users explicitly granted access to it. PII data includes user email addresses, user-provided comments, CPU register data, what else was in memory, and other things.
Telemetry crash ping data isn't expired, but I think that's changing at some point.
Socorro crash report data is kept for 6 months.
Socorro data is near real-time. Crash reports get collected and processed and are available in searches and reports within a few minutes.
Crash ping data gets to Telemetry almost immediately.
Main ping data has some latency between when it's generated and when it is collected. This affects normalization numbers if you were looking at crash rates from crash ping data.
Derived data sets may have some latency depending on how they're generated.
Socorro is still good for deep dives into specific crash reports since it contains the full minidump and sometimes a user email address and user comments.
Socorro has Socorro-style signatures which make it possible to aggregate crash reports into signature buckets. Signatures are kind of fickle and we adjust how they're generated over time as compilers, symbols extraction, and other things change. We can build Signature Reports and Top Crasher reports and those are ok, but they use total counts and not rates.
I want to tackle switching from Socorro's minidump-stackwalk to minidump-analyzer so we're using the same stack walker in both places. I don't know when that will happen.
Socorro is going to GCP which means there will be different tools available for data analysis. Further, we may switch to BigQuery or some other data store that lets us search the stack. That'd be a big win.
Telemetry crash ping data is more representative of the real world, but stacks are not symbolicated and there's no signature generation, so you can't look at aggregates by cause.
Symbolication and signature generation of crash pings will get figured out at some point.
Work continues on Mission Control 2.0.
Telemetry is going to GCP which means there will be different tools available for data analysis.
At the All Hands, I had a few conversations about fixing tooling for both crash reports and crash pings so the resulting data sets were more similar and you could move from one to the other. For example, if you notice a troubling trend in the crash ping data, can you then go to Crash Stats and find crash reports to deep dive into?
I also had conversations around use cases. Which data set is better for answering certain questions?
We think having a guide that covers which data set is good for what kinds of analysis, tools to use, and how to move between the data sets would be really helpful.
Many thanks to everyone who helped with this: Will Lachance, W Chris Beard, Gabriele Svelto, Nathan Froyd, and Chutten.
Also, many thanks to Chutten and Alessio who write fantastic blog posts about Telemetry things. Those are gold.
https://bluesock.org/~willkg/blog/mozilla/crash_pings_crash_reports.html
|
Mozilla Reps Community: 8 Years of Reps Program, Celebrating Community Successes! |
The idea for the Reps program was started in 2010 by William Quiviger and Pierros Papadeas, and the program officially launched and welcomed volunteers on board as Mozilla Reps in 2011. The Mozilla Reps program aims to empower and support volunteer Mozillians who want to be official representatives of Mozilla in their region/locale/country. The program provides a framework and a specific set of tools to help Mozillians organize and/or attend events, recruit and mentor new contributors, document and share activities, and support their local communities better. The Reps program was created to help communities around the world. Community is the backbone of the Mozilla project. As the Mozilla project grows in scope and scale, the community needs to be strengthened and empowered accordingly. This is the central aim of the Mozilla Reps program: to empower and to help push responsibility to the edges, in order to help the Mozilla contributor base grow. Nowadays, the Reps are taking on a stronger role by becoming Community Coordinators.
You can learn more about the program here.
Success Stories
The Mozilla Reps program has proven successful in helping to identify talent from local communities to contribute to Mozilla projects. Reps also help to run local and international events and help campaigns take place. Some of the campaigns that happened in the past were collaborations with other Mozilla teams. The following are some of the activities and campaigns in which the Reps program collaborated with other Mozilla teams. To date, we have a total of 7,623 events, 261 active Reps, and 51,635 activities reported on the Reps portal.
Historically, the Reps have been supporting major Mozilla products throughout the program's whole existence, from the Firefox OS launch to the latest campaigns. This includes events that helped promote the launch of new versions of Firefox, such as Firefox 4.0, Firefox Quantum, and every major Firefox release update; events that care about users and community, such as Privacy Month, Aadhar, The Dark Funnel, Techspeakers, the Web Compatibility Sprint, and Maker Party; events related to new product releases, such as Firefox Rocket/Lite and Screenshot Go for the South East Asia and India markets; events related to localization, such as the Add-on Localization campaign; and events related to Mozilla products such as Rust, Webmaker, and Firefox OS.
Do You Have More Ideas?
With so many past success stories of engaging local communities and supporting many different campaigns, Mozilla Reps is still looking forward to many different activities and campaigns in the future. So, if you have more ideas for campaigns or for engaging local communities, or want to collaborate with the Mozilla Reps program to get in touch with local communities, let's do it together!
|
Will Kahn-Greene: Socorro Engineering: June 2019 happenings |
Socorro Engineering team covers several projects:
This blog post summarizes our activities in June.
https://bluesock.org/~willkg/blog/mozilla/socorro_2019_06.html
|
The Mozilla Blog: Mozilla joins brief for protection of LGBTQ employees from discrimination |
Last year, we joined the call in support of transgender equality as part of our longstanding commitment to diversity, inclusion and fostering a supportive work environment. Today, we are proud to join over 200 companies, big and small, as friends of the court, in a brief brought to the Supreme Court of the United States.
Proud to reaffirm that everyone deserves to be protected from discrimination, whatever their sexual orientation or gender identity.
Diversity fuels innovation & competition. It's a necessary part of a healthy, supportive workplace and of a healthy internet. https://t.co/T9gXX4kiI5
— Mozilla (@mozilla) July 2, 2019
The brief says, in part:
“Amici support the principle that no one should be passed over for a job, paid less, fired, or subjected to harassment or any other form of discrimination based on their sexual orientation or gender identity. Amici’s commitment to equality is violated when any employee is treated unequally because of their sexual orientation or gender identity. When workplaces are free from discrimination against LGBT employees, everyone can do their best work, with substantial benefits for both employers and employees.”
The post Mozilla joins brief for protection of LGBTQ employees from discrimination appeared first on The Mozilla Blog.
|
Dave Hunt: State of Performance Test Engineering (H1/2019) |
|
Mozilla Open Policy & Advocacy Blog: Building on the UK white paper: How to better protect internet openness and individuals’ rights in the fight against online harms |
In April 2019 the UK government unveiled plans for sweeping new laws aimed at tackling illegal and harmful content and activity online, described by the government as ‘the toughest internet laws in the world’. While the UK government’s proposal contains some interesting avenues of exploration for the next generation of European content regulation laws, it also includes several critical weaknesses and grey areas. We’ve just filed comments with the government that spell out the key areas of concern and provide recommendations on how to address them.
The UK government’s white paper responds to legitimate public policy concerns around how technology companies deal with illegal and harmful content online. We understand that in many respects the current European regulatory paradigm is not fit for purpose, and we support an exploration of what codified content ‘responsibility’ might look like in the UK and at EU-level, while ensuring strong and clear protections for individuals’ free expression and due process rights.
As we have noted previously, we believe that the white paper’s proposed regulatory architecture could have some potential. However, the UK government’s vision to put this model into practice contains serious flaws. Here are some of the changes we believe the UK government must make to its proposal to avoid the practical implementation pitfalls:
We look forward to engaging more with the UK government as it continues its consultation on the Online Harms white paper, and hopefully the recommendations in this filing can help address some of the white paper’s critical shortcomings. As policymakers from Brussels to Delhi contemplate the next generation of online content regulations, the UK government has the opportunity to set a positive standard for the world.
The post Building on the UK white paper: How to better protect internet openness and individuals’ rights in the fight against online harms appeared first on Open Policy & Advocacy.
https://blog.mozilla.org/netpolicy/2019/07/02/building-on-the-uk-online-harms-white-paper/
|
Mike Hommey: Git now faster than Mercurial to clone Mozilla Mercurial repos |
How is that for clickbait?
With the now released git-cinnabar 0.5.2, the cinnabarclone feature is enabled by default, which means it doesn’t need to be enabled manually anymore.
Cinnabarclone is to git-cinnabar what clonebundles is to Mercurial (to some extent). Clonebundles allow Mercurial to download a pre-generated bundle of a repository, which reduces work on the server side. Similarly, Cinnabarclone allows git-cinnabar to download a pre-generated bundle of the git form of a Mercurial repository.
Thanks to Connor Sheehan, who deployed the necessary extension and configuration on the server side, cinnabarclone is now enabled for mozilla-central and mozilla-unified, making git-cinnabar clone faster than ever for these repositories. In fact, under some conditions (mostly depending on network bandwidth), cloning with git-cinnabar is now faster than cloning with Mercurial:
$ time git clone hg::https://hg.mozilla.org/mozilla-unified mozilla-unified_git
Cloning into 'mozilla-unified_git'...
Fetching cinnabar metadata from https://index.taskcluster.net/v1/task/github.glandium.git-cinnabar.bundle.mozilla-unified/artifacts/public/bundle.git
Receiving objects: 100% (12153616/12153616), 2.67 GiB | 41.41 MiB/s, done.
Resolving deltas: 100% (8393939/8393939), done.
Reading 172 changesets
Reading and importing 170 manifests
Reading and importing 758 revisions of 570 files
Importing 172 changesets
It is recommended that you set "remote.origin.prune" or "fetch.prune" to "true".
git config remote.origin.prune true
or
git config fetch.prune true
Run the following command to update tags:
git fetch --tags hg::tags: tag "*"
Checking out files: 100% (279688/279688), done.
real 4m57.837s
user 9m57.373s
sys 0m41.106s
$ time hg clone https://hg.mozilla.org/mozilla-unified
destination directory: mozilla-unified
applying clone bundle from https://hg.cdn.mozilla.net/mozilla-unified/5ebb4441aa24eb6cbe8dad58d232004a3ea11b28.zstd-max.hg
adding changesets
adding manifests
adding file changes
added 537259 changesets with 3275908 changes to 523698 files (+13 heads)
finished applying clone bundle
searching for changes
adding changesets
adding manifests
adding file changes
added 172 changesets with 758 changes to 570 files (-1 heads)
new changesets 8b3c35badb46:468e240bf668
537259 local changesets published
updating to branch default
(warning: large working directory being used without fsmonitor enabled; enable fsmonitor to improve performance; see "hg help -e fsmonitor")
279688 files updated, 0 files merged, 0 files removed, 0 files unresolved
real 21m9.662s
user 21m30.851s
sys 1m31.153s
To be fair, the Mozilla Mercurial repos also have a faster “streaming” clonebundle that they only prioritize automatically if the client is on AWS currently, because they are much larger, and could take longer to download. But you can opt in with the --stream command line argument:
$ time hg clone --stream https://hg.mozilla.org/mozilla-unified mozilla-unified_hg
destination directory: mozilla-unified_hg
applying clone bundle from https://hg.cdn.mozilla.net/mozilla-unified/5ebb4441aa24eb6cbe8dad58d232004a3ea11b28.packed1.hg
525514 files to transfer, 2.95 GB of data
transferred 2.95 GB in 51.5 seconds (58.7 MB/sec)
finished applying clone bundle
searching for changes
adding changesets
adding manifests
adding file changes
added 172 changesets with 758 changes to 570 files (-1 heads)
new changesets 8b3c35badb46:468e240bf668
updating to branch default
(warning: large working directory being used without fsmonitor enabled; enable fsmonitor to improve performance; see "hg help -e fsmonitor")
279688 files updated, 0 files merged, 0 files removed, 0 files unresolved
real 1m49.388s
user 2m52.943s
sys 0m43.779s
If you’re using Mercurial and can download 3GB in less than 20 minutes (in other words, if you can download faster than 2.5MB/s), you’re probably better off with the streaming clone.
The Mercurial streaming clone bundle contains data in a form close to what Mercurial puts on disk in the .hg directory, meaning the size of .hg is close to that of the clone bundle. The Cinnabarclone bundle contains a git pack, meaning the size of .git is close to that of the bundle, plus some more for the pack index file that unbundling creates.
The amazing fact is that, to my own surprise, the git pack, containing the repository contents along with all git-cinnabar needs to recreate Mercurial changesets, manifests and files from the contents, takes less space than the Mercurial streaming clone bundle.
And that translates in local repository size:
$ du -h -s --apparent-size mozilla-unified_hg/.hg
3.3G mozilla-unified_hg/.hg
$ du -h -s --apparent-size mozilla-unified_git/.git
3.1G mozilla-unified_git/.git
And because Mercurial creates so many files (essentially, two per file that ever was in the repository), there is a larger difference in block size used on disk:
$ du -h -s mozilla-unified_hg/.hg
4.7G mozilla-unified_hg/.hg
$ du -h -s mozilla-unified_git/.git
3.1G mozilla-unified_git/.git
It’s even more mind blowing when you consider that Mercurial happily creates delta chains of several thousand revisions, when the git pack’s longest delta chain is 250 (set arbitrarily at pack creation, by which I mean I didn’t pick a larger value because it didn’t make a significant difference). For the casual readers, Git and Mercurial try to store object revisions as a diff/delta from a previous object revision because that takes less space. You get a delta chain when that previous object revision itself is stored as a diff/delta from another object revision itself stored as a diff/delta … etc.
My guess is that the difference is mainly caused by the use of line-based deltas in Mercurial, but some Mercurial developer should probably take a deeper look. The fact that Mercurial cannot delta across file renames is another candidate.
|
Mozilla Security Blog: Fixing Antivirus Errors |
After the release of Firefox 65 in December, we detected a significant increase in a certain type of TLS error that is often triggered by the interaction of antivirus software with the browser. Today, we are announcing the results of our work to eliminate most of these issues, and explaining how we have done so without compromising security.
On Windows, about 60% of Firefox users run antivirus software and most of them have HTTPS scanning features enabled by default. Moreover, CloudFlare publishes statistics showing that a significant portion of TLS browser traffic is intercepted. In order to inspect the contents of encrypted HTTPS connections to websites, the antivirus software intercepts the data before it reaches the browser. TLS is designed to prevent this through the use of certificates issued by trusted Certificate Authorities (CAs). Because of this, Firefox will display an error when TLS connections are intercepted unless the antivirus software anticipates this problem.
Firefox is different than a number of other browsers in that we maintain our own list of trusted CAs, called a root store. In the past we’ve explained how this improves Firefox security. Other browsers often choose to rely on the root store provided by the operating system (OS) (e.g. Windows). This means that antivirus software has to properly reconfigure Firefox in addition to the OS, and if that fails for some reason, Firefox won’t be able to connect to any websites over HTTPS, even when other browsers on the same computer can.
The interception of TLS connections has historically been referred to as a “man-in-the-middle”, or MITM. We’ve developed a mechanism to detect when a Firefox error is caused by a MITM. We also have a mechanism in place that often fixes the problems. The “enterprise roots” preference, when enabled, causes Firefox to import any root CAs that have been added to the OS by the user, an administrator, or a program that has been installed on the computer. This option is available on Windows and MacOS.
We considered adding a “Fix it” button to MITM error pages (see example below) that would allow users to easily enable the “enterprise roots” preference when the error is displayed. However, we realized that enabling this preference is something we actually want users to do, rather than an “override” that lets a user bypass an error at their own risk.
Example of a MitM Error Page in Firefox
Beginning with Firefox 68, whenever a MITM error is detected, Firefox will automatically turn on the “enterprise roots” preference and retry the connection. If it fixes the problem, then the “enterprise roots” preference will remain enabled (unless the user manually sets the “security.enterprise_roots.enabled” preference to false). We’ve tested this change to ensure that it doesn’t create new problems. We are also recommending as a best practice that antivirus vendors enable this preference (by modifying prefs.js) instead of adding their root CA to the Firefox root store. We believe that these actions combined will greatly reduce the issues encountered by Firefox users.
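For illustration, the preference flip that this recommendation refers to amounts to a single line in prefs.js (a sketch; the exact mechanism vendors use may differ):
user_pref("security.enterprise_roots.enabled", true);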
In addition, in Firefox ESR 68, the “enterprise roots” preference will be enabled by default. Because extended support releases are often used in enterprise settings where there is a need for Firefox to recognize the organization’s own internal CA, this change will streamline the process of deploying Firefox for administrators.
Finally, we’ve added an indicator that allows the user to determine when a website is relying on an imported root CA certificate. This notification is on the site information panel accessed by clicking the lock icon in the URL bar.
It might cause some concern for Firefox to automatically trust CAs that haven’t been audited and gone through the rigorous Mozilla process. However, any user or program that has the ability to add a CA to the OS almost certainly also has the ability to add that same CA directly to the Firefox root store. Also, because we only import CAs that are not included with the OS, Mozilla maintains our ability to set and enforce the highest standards in the industry on publicly-trusted CAs that Firefox supports by default. In short, the changes we’re making meet the goal of making Firefox easier to use without sacrificing security.
The post Fixing Antivirus Errors appeared first on Mozilla Security Blog.
https://blog.mozilla.org/security/2019/07/01/fixing-antivirus-errors/
|
Mike Hommey: Announcing git-cinnabar 0.5.2 |
Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.
These release notes are also available on the git-cinnabar wiki.
For details on the cinnabarclone feature (now enabled by default), see README.md and mercurial/cinnabarclone.py.
git cinnabar fsck now does incremental checks against last known good state.
|
Firefox UX: iPad Sketching with GoodNotes 5 |
I’m always making notes and sketching on my iPad and people often ask me what app I’m using. So I thought I’d make a video of how I use GoodNotes 5 in my design practice.
Links:
This video on YouTube
GoodNotes
Templates: blank, storyboard, crazy 8s, iPhone X
Transcript:
Hi, I’m Michael Verdi, and I’m a product designer for Firefox. Today, I wanna talk about how I use sketching on my iPad in my design practice. So, first, sketching on an iPad, what I really like about it is that these apps on the iPad allow me to collect all my stuff in one place. So, I’ve got photos and screenshots, I’ve got handwritten notes, typed up notes, whatever, it’s all together. And I can organize it, and I can search through it and find what I need.
There are really two basic kinds of apps that you can use for this. There are drawing and sketching apps, and then there are note-taking apps. Personally, I prefer the note-taking apps because they usually have better search tools and organization. The one that I like and that I’m gonna talk about today is GoodNotes 5.
I’ve got all kinds of stuff in here. I’ve got handwritten notes with photographs, I’ve got some typewritten notes, screenshots, other things that I’ve saved, photographs, again. Yeah, I really like using this. I can do storyboards in here, right, or I can draw things, copy and paste them so that I can iterate quickly, make multiple variations over and over again. I can stick in screenshots and then draw on top of them or annotate them. Right, so, let me do a quick demo of using this to draw.
So, one of the things that I’ll do is actually, maybe I’ve drawn some stuff before, and I’ll save that drawing as an image in my photo library. And then I’ll come stick it in here, and I’ll draw on top of it. So, I work on search, so here’s Firefox with no search box. So, I’m gonna draw one. Let’s use some straight lines to draw one. I’m gonna draw a big search box, but I’m doing it here in the middle because I’m gonna place it a little better in a second. And we have the selection tool, and I’m gonna make the, the selection is not selecting images, right? So, I can come over here and just grab my box, and then I can move my box around on top. Okay, so, I still have this gray line. I can’t erase that because it’s an image. So, I’m gonna come over here, and I’m gonna get some white, and I’m gonna just draw over it. Right, okay. Let’s go back and get my gray color. I can zoom in when I need to, and I’m gonna copy this, and I’m gonna paste it a bunch of times. Then I can annotate this. Right, so, there we go.
Another thing that I really like about GoodNotes is the ability to search through stuff that you’ve done. So, I’m gonna search here, and I’m gonna search for spaces. So, this was a thing that we mocked up with a storyboard. This is it right here. And it recognized my, it read my handwriting, which is really cool. So, I can find this thing in a notebook of a jillion pages. But there’s also another way to find things. So, you have this view here, this is called the outline view. These are sorta like named bookmarks. There’s also a thumbnail view, right? Here’s all the pages in this notebook. But if I go to the outlines, so, here, I did some notes about a critique format, and I can jump right to them. But let’s say this new drawing, well, where did I do it? This new drawing, I wanna be able to get to this all the time, right? So, I can come up here, and I can say Add This Page to the Outline, and now I can give it a name. And I don’t know what I’m gonna call this, so I’m just callin’ it sample for right now. And so, now it is in the outline. Oh, I guess I had already done this as a demo. But there it is. And that’s how I can get to it now. That’s super, super cool.
Okay, and then one last thing I wanna show you about this is templates. So, I actually made this template to better fit my setup here. I’ve got no status bar at the top. And then these are just PDFs. And you can import your own. And I can change the template for this page, and I’ve made a storyboard template. And I can apply that here. And now I’ve got a thing so I can draw a storyboard. Or maybe I don’t wanna do a storyboard, but what else do I have? Oh, I wanna do a crazy eights exercise. So, now I’ve got one ready for a crazy eight exercise. I love these templates, they’re super handy. I’ll include those in the post with this, a link to some of these things.
So, that’s sketching on the iPad with GoodNotes 5. Thanks for watching.
https://blog.mozilla.org/ux/2019/06/ipad-sketching-with-goodnotes-5/
|
Cameron Kaiser: And now for something completely different: NetBSD on the last G4 Mac mini (and making the kernel power failure proof) |
I'm a big fan of NetBSD. I've run it since 2000 on a Mac IIci (of course it's still running it) and I ran it for several years on a Power Mac 7300 with a G3 card which was the second incarnation of the Floodgap gopher server. Today I also still run it on a MIPS-based Cobalt RaQ 2 and an HP Jornada 690. I think NetBSD is a better match for smaller or underpowered systems than current-day Linux, and is fairly easy to harden and keep secure even though none of these systems are exposed to the outside world.
Recently I had a need to set up a bridge system that would be fast enough to connect two networks and I happened to have two of the "secret" last-of-the-line 1.5GHz G4 Mac minis sitting on the shelf doing nothing. Yes, they're probably outclassed by later Raspberry Pi models, but I don't have to buy anything and I like putting old hardware to good use. So here it is, doing serious business, with the total outlay being the cost of one weekend afternoon:
NetBSD/macppc is a fairly mature port, but that doesn't mean it doesn't have bugs. And oddly there do seem to still be some in the install process, at least of the 8.1 release I used, on this last and mightiest of the PowerPC miniatures. Still, once it was set up it's been working great, so here are a few pointers on getting the 1.5 mini (and current Power Macs generally) running as little NetBSD servers. As most of my readers are Power Mac users and likely to have multiple Power Macs that can aid this process, I will orient this guide to them with some sidebar notes for people trying to pull the mini up all by itself. This machine is configured with 1GB of RAM, the standard 2.5" PATA spinning disk and optical drive, USB 2.0, FireWire 400, Bluetooth and WiFi, using the onboard Radeon 9200 GPU as the console.
The working configuration, hit upon by Sevan Janiyan, is to have an HFS+ partition with the bootloader (called ofwboot.xcf) and the kernel, and then a separate partition for NetBSD's root volume. For some reason the mini goes berserk when trying to boot from a kernel on a NetBSD native partition, but works fine from an HFS+ one. Unfortunately, since the NetBSD installer cannot actually initialize an HFS+ volume, you'll need to do some of the partitioning work in Mac OS X, copy the files there, and then finish the rest. There's a couple ways of skinning that cat, but for many of you this means you'll need not only the NetBSD installer CD, but also a bootable copy of Mac OS X either on disc (i.e., an installer) or a bootable external drive, and some means of copying files.
And that brings us to our first pro tip: the G4 Mac minis had absolutely crap optical drives that would fail if you looked at them crossways. This drive was no exception; it would read commercially pressed CDs but only certain media types of burned CDs and wouldn't read any DVD at all. That means it wouldn't boot from my usual OS X Tiger 10.4.6 install DVD, and the last generation of G4 minis require 10.4, so none of my previous OS X CDs would work.
As it happens, the minimum requirement for the G4/1.5 minis is not actually 10.4.2, yet another Apple lie; it's actually 10.4.0 (though note that some devices like Bluetooth may not work properly). This is fortunate because 10.4.0 was available in CD format and I was able to boot Disk Utility off that Tiger CD instead. Your other option is to bring up the mini in Target Disk Mode (connect over FireWire; hold T down as you turn the mini on until you see a yellow logo on a blue background) from another Power Mac and do the formatting there. In fact, we'll be using Target Disk Mode in a minute, but here I just booted from the CD instead.
In Disk Utility (whether you're doing this on the machine from the Tiger installer or on another machine over FireWire), wipe the mini's current partition scheme and create two new partitions. The first is your HFS+ volume for booting. This particular machine will only run NetBSD, so I made it 512MB to have enough room for multiple kernels and for other files I might need, but if you want a dual-boot system you can make this larger. The second partition will be for NetBSD; I allocated everything else and created it as a UFS ("UNIX File System") partition, though we will divvy it up later. The formatting scheme should look more or less like these screenshots. Shut down the mini when you're done.
Now we boot the NetBSD installer. Bring up the machine in OpenFirmware mode -- all New World Macs use OpenFirmware 3 -- by holding down Command-Option-O-F while powering it on (I advise doing this from a directly-attached USB keyboard). This will bring up the OpenFirmware interface. When you are commanded to do so, release the keys and you will drop to the famous ok prompt. If you're lucky and the evil spirits in your optical drive have been placated by an offering of peanut M&Ms and a young maiden at midnight, you can simply insert the NetBSD install disc and type
boot cd:,\ofwboot.xcf netbsd.macppc
Note the backslash, not a forward slash! If this works properly, then the screen will go black (you don't go back) and you'll enter the Installer proper.
If you get weird errors or OpenFirmware complains the disc is not readable, the optical drive is probably whacked. My drive wouldn't read burned Fujifilm CD-R media (that everything else did), but would read burned Maxell media. If you can't even control for that, you may be able to connect a FireWire CD/DVD reader and boot from it instead. The command would be "something like"
boot fw/node/sbp-2/disk:,\ofwboot.xcf netbsd.macppc
If this didn't work, you may need to snoop around the OpenFirmware device tree to figure out where the device is actually attached, though this should basically work for the G4 mini's single port. Alternatively, you could try a USB CD-ROM drive, or dd'ing the install image to a USB drive on another computer and booting the mini from that, but the boot string will vary based on which port you connect it to (use dev usb0 and ls to show everything under that port, then dev usb1, etc.). Make sure it is directly connected to the mini. Once you find a device that shows a disk, then "something like" this will work (let's say it was found under usb1):
boot usb1/disk:,\ofwboot.xcf netbsd.macppc
If even that won't work, there are some other ways like netbooting, but this is rather more complicated and I won't talk about it here. Or you could actually fix the drive, I guess ...
When the Installer starts up, choose the option to drop to the shell when it is offered. We will now finish the partitioning from the NetBSD side; we do not use the Installer's built-in partition tool as it will run out of memory. At the shell prompt, type
pdisk /dev/wd0c
When it asks you for a command, type a capital letter P and press RETURN. This will print out the current partition map, which, if your mini is similar to mine, should show 4 partitions: the Apple partition map itself, followed by the HFS+ partition, and then by a tiny Apple_Boot partition that is made whenever a UFS volume appears to be the boot volume. (Silly Mac OS X.) You can remove it if you want, but this seemed like more trouble than it was worth for a measly 8.5 megabytes. After that is the space for NetBSD. On my G4 mini, this was partition number 4. Delete this partition by typing a lower-case d, pressing RETURN, and typing 4. Be sure of this number! I will use it in the examples below.
First we will formally create the swap. This is done with the capital letter C command (as shown in the screenshot). Indicate the first block is 4p (i.e., starting at partition 4), for 4194304 blocks (2GB), type Apple_UNIX_SVR2 (don't forget the underscores!), and slice b.
Next is the actual NetBSD root: capital letter C, then saying the first block was 5p (i.e., starting at partition 5, the unallocated section), soaking up the rest of the blocks (however many you see listed under Apple_Free), type Apple_UNIX_SVR2 (don't forget the underscores!), and slice a.
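Condensed, the inputs for this step are as follows (the partition numbers assume the layout above; the block counts will differ on your disk):
d, 4 (delete the old UFS partition)
C, 4p, 4194304, Apple_UNIX_SVR2, slice b (the 2GB swap)
C, 5p, all remaining Apple_Free blocks, Apple_UNIX_SVR2, slice a (the NetBSD root)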
If you did all this right, your screen should look more or less like this:
Verify the partition map one more time with the capital letter P command, then write it out with lower-case w, answering y(es), and then quit with lower-case q. At the shell prompt, return to the installer by typing sysinst and when asked, indicate you will "Use existing partition sizes." The installer will then install the appropriate packages and you can do the initial setup for your clock, the root password, etc. When this is all done, reboot your mini with the left mouse button held down; it will eject the CD (and fail to find a boot volume if you do not have an OS X installation). Shut down the mini.
Before the mini will boot NetBSD, we must copy the kernel and the bootloader to the HFS+ partition. This is where Target Disk Mode comes in handy, because you can just copy directly. Here is my iBook G4 copying a custom kernel (more in a moment):
On the iBook G4, I put in the NetBSD install CD and copied off ofwboot.xcf and netbsd-GENERIC.gz, or you can download them from here and here. They should be copied to the root of the mini's HFS+ volume for the command below to work. For good measure I also uncompressed the gzipped kernel as a failsafe and put a copy of the installation kernel there too, though this isn't necessary. Once the files are copied, eject the mini's drive on the FireWire machine, unplug the FireWire and power the mini off.
If you don't have another Mac around that can talk to the mini over FireWire, you can do this from NetBSD itself, but it's a bit more involved.
Either way, re-enter OpenFirmware with Cmd-Opt-O-F while powering it back up. It's time to boot your new NetBSD machine.
You can see from the screenshot here that the HFS+ volume is considered partition 2, as we left it in pdisk. That means your boot string is
boot hd:,\ofwboot.xcf hd:2/netbsd-GENERIC.gz
Yes, the path to ofwboot still has a backslash, but the argument to ofwboot actually needs a forward slash. NetBSD will start immediately.
There are several minor and one rather obnoxious bug with NetBSD's current support. You will notice a few strange messages on startup as part of the huge mass of text:
oea_startup: failed to allocate DEAD ZONE: error=12
pmu0: power-mgt not configured
pmu0: pmu-pwm-fans not configured
WARNING: 3 errors while detecting hardware; check system log.
bwi0: firmware_open failed on v3/ucode5.fw
I don't know what the first three are, but they appear to be harmless, and appear in many otherwise working dmesg archives (see also this report). The summary WARNING thus can also be politely ignored.
However, the last message is rather obnoxious. Per Sevan the built-in Broadcom WiFi in the Mac mini (detected as bwi0) doesn't work right in NetBSD with more than 512MB of memory, which I refuse to downgrade to, and NetBSD doesn't come with the firmware anyway. Even if you copy it off some other system that does, you won't be able to bring the interface up in the configuration here (you'll just see weird errors about wrong firmware version, etc.).
Since this machine is a bridge and sometimes needs to connect to a test WiFi, I went with a USB WiFi dongle instead (I also use a USB dongle when bridging Ethernet to Ethernet, but pretty much any Ethernet-USB dongle will work too). The one I had on the shelf that I'd bought for something else and then forgot about was a Belkin Wireless G. They sell a number of chipsets under this name, but the model F5D7050 I have here is based on a Ralink RT2501USB chipset that NetBSD sees as rum0, and works fine with wpa_supplicant.
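For reference, wiring a dongle like this into NetBSD's startup is just rc.conf plus wpa_supplicant.conf; the following is only a sketch (the SSID, passphrase and even the rum0 interface name are placeholders for whatever your hardware is detected as):
# /etc/rc.conf
dhcpcd=YES
wpa_supplicant=YES
wpa_supplicant_flags="-i rum0 -c /etc/wpa_supplicant.conf"
# /etc/wpa_supplicant.conf
network={
        ssid="example-ssid"
        psk="example-passphrase"
}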
Last but not least was making it "failsafe": giving it a solid power supply and making it start automatically. Although the G4 mini came with an 85W power supply, I stole the 110W unit from my 2007 Intel mini and used that so it wouldn't run anywhere near the PSU's capacity, hopefully lengthening its lifetime. As it turns out, this may not have been a problem anyway; most of the time this system is using just 21W on the Kill-A-Watt, maybe 40ish when it's booting.
To autostart NetBSD, ordinarily you would go into OpenFirmware and set boot-device to the bootloader and boot-file to the kernel, as the picture below shows.
However, you'll end up with a black screen or at minimum no console at all on an OpenFirmware 3 system if that's all you do. The magic sauce is to emit some text to the screen before loading the bootloader. Thus, the OpenFirmware settings are (each setenv command is one line):
setenv auto-boot? true
setenv boot-device hd:,\ofwboot.xcf
setenv boot-file hd:2/netbsd-GENERIC.gz (note that I used a different kernel in the screenshot: more in a second)
setenv boot-command ." hi there" cr " screen" output boot
reset-all
The boot-command spacing is especially critical. There is a space after the ." and the quote mark before screen" and after cr is also separated by spaces. The reset-all just tells OpenFirmware to write those settings to NVRAM. If you zap the mini's PRAM with Command-Option-P-R later, you may need to re-enter these.
In this configuration your mini will now start NetBSD automatically when it's turned on (just hold down Command-Option-O-F when starting it up to abort to OpenFirmware). However, this won't bring the machine up automatically after a power failure. While FreeBSD allows starting up after a power failure, this code apparently never made it over to NetBSD. Happily, supporting it merely requires a relatively simple kernel hack. Based on the FreeBSD pmu(4) driver, I created a patch that will automatically reboot any PMU-based NetBSD Power Mac after a power failure.
You should be comfortable with compiling your own kernels in NetBSD; not only is it just good to do for auditing purposes, but you can slim the kernel down substantially or enable other less common features. It's especially easy for NetBSD because all the tools to build it come with a standard installation. All you need to do is download the source and run the build process.
To use this patch, download the source to your home directory on the NetBSD box (you want syssrc.tgz) and download the patch and have it in your home directory as pmu.diff. If you don't have a working curl on your install yet (pkg_add curl, pkg_add mozilla-rootcerts, mozilla-rootcerts install), you may want to download it somewhere else and use scp, sftp or ftp to retrieve it. Then, adjusting as necessary for username and path,
cd /
tar zxf ~/syssrc.tgz
cd /usr/src/sys/arch/macppc/dev
patch -p0 < ~/pmu.diff
Then follow the instructions to make the kernel. I have a pre-built one of 8.1-GENERIC (I call it POWERON) on the gopher server, but you should really roll your own so that you get security fixes, since I may only maintain that kernel intermittently. That build is the one I'm using on the machine currently and on the screenshot above. With this custom kernel installed, when the power is abruptly cut while the machine is powered up it will automatically reboot when power is reapplied, just as the analogous option does in Mac OS X. Copy it to the HFS+ partition and remember to change boot-file to point to it once you've confirmed it works.
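If you'd rather build on the mini itself, the traditional config(1) route is roughly the following, reusing the POWERON name from above (expect the compile to take a while on a 1.5GHz G4; the finished kernel ends up as netbsd in the compile directory):
cd /usr/src/sys/arch/macppc/conf
cp GENERIC POWERON (trim drivers in POWERON here if you like)
config POWERON
cd ../compile/POWERON
make depend && make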
Overall, I think the G4 mini makes a fine little server. I wouldn't use it as a client except in Mac OS X itself, and I am forced to admit that even that is becoming less practical these days. But as a little machine to do important back-office tasks and do so reliably, I think NetBSD on the mini is a good choice. Once all the kinks with the installation were ironed out, it's been solid and performant, especially considering this machine is about 13 years old (though I'm happy with its performance even on thirty-year-old machines). Rather than buying something new, if your needs are small it's probable you've got some old machine around that could do those tasks instead of oozing toxins from its circuit board into a waste dump in Rwanda. And since I had two on the shelf, it has an instant spare. I'll probably be usefully running it for as long as I've run my other NetBSD systems, and that's the highest compliment I think I can pay it.
http://tenfourfox.blogspot.com/2019/06/and-now-for-something-completely.html
|
Hacks.Mozilla.Org: GeckoView in 2019 |
Last September we wrote about using GeckoView to bring Firefox’s rendering engine to Android as a reusable library. By decoupling the Gecko engine from the Firefox application, we’ve created a newer, faster, and more maintainable way to create Android applications. This approach leverages Gecko’s excellent performance, privacy, and support for cutting-edge web standards.
With today’s release of our GeckoView-powered Firefox Preview, we’d like to share an update on what we’ve accomplished and where GeckoView is going in 2019.
We’re excited to announce today’s initial release of Firefox Preview (GitHub), an entire browser built from the ground up with GeckoView and Mozilla Android Components at its core. Though still an early preview, this is our first end-user product built completely with these new technologies.
Firefox Preview is our platform for building, testing, and delivering unique features. We’ll use it to explore new concepts for how mobile browsers should look and feel. We encourage you to give it a try!
But that is not all — Mozilla is using GeckoView in many other products as well:
To date, Firefox Focus has been our most prominent consumer of GeckoView. Focus’s simplicity lends itself to experimentation. Currently we’re using Focus to split test between GeckoView and Android’s built-in WebView. This helps us ensure that GeckoView’s performance and stability meet or exceed expectations set by Android’s platform libraries.
While Focus is great at what it does, it is not a general purpose browser. By design, Focus does not keep track of history or bookmarks, nor does it support APIs like WebRTC. Yet we need a place to test those features to ensure that GeckoView is sufficiently robust and capable of building fully-featured browsers. That’s where Reference Browser comes in.
Like Firefox Preview, Reference Browser is a full browser built with GeckoView and Mozilla Android Components, but crucially, it is not an end-user product. Its intended audience is browser developers. Indeed, Reference Browser is a proving ground where we validate that GeckoView and the Components fit together and work as expected. We gain the ability to develop our core libraries without the constraints of an in-market product.
GeckoView also powers Firefox Reality, a browser designed for standalone virtual reality headsets. In addition to leveraging Gecko’s excellent support for immersive web technologies, Firefox Reality demonstrates GeckoView’s versatility. The same library that’s at the heart of “traditional” browsers like Focus and Firefox Preview can also power experiences in an entirely different medium.
Lastly, while Firefox for Android (“Fennec”) does not use GeckoView for normal browsing, it does use it to support Progressive Web Apps and Custom Tabs. Moreover, because GeckoView and Fennec are both based on Gecko, they jointly benefit from improvements to that common infrastructure.
GeckoView is the foundation of Mozilla’s next generation of mobile products. To better support that future, we’ve halted new feature development on Focus while we concentrate on refining GeckoView and prepare for the launch of Firefox Preview. If you’re interested in supporting Focus in the future, please help by filling out this survey.
Aside from product development, the past six months have seen many improvements to GeckoView’s internals, especially around compiler-level optimizations and support for additional CPU architectures. Highlights include:
In addition to meeting Google’s upcoming requirements for listing in the Play Store, supporting 64-bit architectures further improves GeckoView’s stability (fewer out-of-memory crashes) and security.
For upcoming releases, we’re working on support for web push and “add to home screen,” among other things.
GeckoView isn’t just for Mozilla; we want it to be useful to you.
Thanks to Emily Toop for new work on the GeckoView Documentation website. It’s easier than ever to get started, either as an app developer using GeckoView or as a GeckoView contributor. If you spot something that could be better documented, pull requests are always welcome.
We’d also love to help you directly. If you need assistance with anything GeckoView-related, you can find us:
Please do reach out with any questions or comments.
The post GeckoView in 2019 appeared first on Mozilla Hacks - the Web developer blog.
|