
Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Original source: http://planet.mozilla.org/.
This diary is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.


Spidermonkey Development Blog: Implementing Private Fields for JavaScript

Tuesday, May 4, 2021, 18:00

This post is cross-posted from Matthew Gaudet’s blog

When implementing a language feature for JavaScript, an implementer must make decisions about how the language in the specification maps to the implementation. Sometimes this is fairly simple, where the specification and implementation can share much of the same terminology and algorithms. Other times, pressures in the implementation make it more challenging, requiring or pressuring the implementation strategy to diverge from the language specification.

Private fields are an example of where the specification language and implementation reality diverge, at least in SpiderMonkey, the JavaScript engine that powers Firefox. To understand more, I'll explain what private fields are, a couple of models for thinking about them, and why our implementation diverges from the specification language.

Private Fields

Private fields are a language feature being added to the JavaScript language through the TC39 proposal process, as part of the class fields proposal, which is at Stage 4 in the TC39 process. We will ship private fields and private methods in Firefox 90.

The private fields proposal adds a strict notion of ‘private state’ to the language. In the following example, #x may only be accessed by instances of class A:

class A {
  #x = 10;
}

This means that, outside of the class, it is impossible to access that field. This is unlike public fields, as the following example shows:

class A {
  #x = 10; // Private field
  y = 12; // Public Field
}
var a = new A();
a.y; // Accessing public field y: OK
a.#x; // Syntax error: reference to undeclared private field

Even various other tools that JavaScript gives you for interrogating objects are prevented from accessing private fields (i.e. Object.getOwnProperty{Symbols,Names} don’t list private fields; there’s no way to use Reflect.get to access them).
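This invisibility is easy to check in any engine that ships the feature (the class here is illustrative):

```javascript
class A {
  #x = 10; // private field
  y = 12;  // public field
}

const a = new A();

// Only the public field shows up through reflection APIs;
// the private field is simply not an own property.
console.log(Object.getOwnPropertyNames(a));   // ["y"]
console.log(Object.getOwnPropertySymbols(a)); // []
console.log(Reflect.ownKeys(a));              // ["y"]
```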

A Feature Three Ways

When talking about a feature in JavaScript, there are often three different aspects in play: The mental model, the specification, and the implementation.

The mental model provides the high-level thinking that we expect most programmers to use. The specification in turn provides the detail of the semantics required by the feature. The implementation can look wildly different from the specification text, so long as the specification semantics are maintained.

These three aspects shouldn’t produce different results for people reasoning through things (though, sometimes a ‘mental model’ is shorthand, and doesn’t accurately capture semantics in edge case scenarios).

We can look at private fields using these three aspects:

Mental Model

The most basic mental model one can have for private fields is what it says on the tin: fields, but private. Now, JS fields become properties on objects, so the mental model is perhaps ‘properties that can’t be accessed from outside the class’.

However, when we encounter proxies, this mental model breaks down a bit; trying to specify the semantics for 'hidden properties' and proxies is challenging (what happens when a Proxy is trying to provide access control to a property, if you aren't supposed to be able to see private fields with Proxies? Can subclasses access private fields? Do private fields participate in prototype inheritance?). In order to preserve the desired privacy properties, an alternative mental model became the way the committee thinks about private fields.

This alternative model is called the ‘WeakMap’ model. In this mental model you imagine that each class has a hidden weak map associated with each private field, such that you could hypothetically ‘desugar’

class A {
  #x = 15;
  g() {
    return this.#x;
  }
}

into something like

class A_desugared {
  static InaccessibleWeakMap_x = new WeakMap();
  constructor() {
    A_desugared.InaccessibleWeakMap_x.set(this, 15);
  }

  g() {
    return A_desugared.InaccessibleWeakMap_x.get(this);
  }
}
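As a quick sanity check of this desugaring (repeating the class so the snippet runs standalone): an instance created through the constructor gets a value in the map, while any other object simply was never added as a key.

```javascript
class A_desugared {
  static InaccessibleWeakMap_x = new WeakMap();
  constructor() {
    A_desugared.InaccessibleWeakMap_x.set(this, 15);
  }
  g() {
    return A_desugared.InaccessibleWeakMap_x.get(this);
  }
}

const a = new A_desugared();
console.log(a.g()); // 15

// An object that never went through the constructor was never added
// to the map, so under this desugaring the lookup finds nothing.
console.log(A_desugared.prototype.g.call({})); // undefined
```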

The WeakMap model is, surprisingly, not how the feature is written in the specification, but it is an important part of the design intention behind private fields. I will cover a bit later how this mental model shows up in practice.

Specification

The actual specification changes are provided by the class fields proposal, specifically the changes to the specification text. I won’t cover every piece of this specification text, but I’ll call out specific aspects to help elucidate the differences between specification text and implementation.

First, the specification adds the notion of [[PrivateName]], which is a globally unique field identifier. This global uniqueness is to ensure that two classes cannot access each other’s fields merely by having the same name.

function createClass() {
  return class {
    #x = 1;
    static getX(o) {
      return o.#x;
    }
  };
}

let [A, B] = [0, 1].map(createClass);
let a = new A();
let b = new B();

A.getX(a); // Allowed: Same class
A.getX(b); // Type Error, because different class.
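In an engine that ships the feature, the failure is observable as a catchable TypeError rather than a silent miss (repeating the example so it runs standalone):

```javascript
function createClass() {
  return class {
    #x = 1;
    static getX(o) {
      return o.#x;
    }
  };
}

const [A, B] = [0, 1].map(createClass);
const a = new A();
const b = new B();

console.log(A.getX(a)); // 1: same class, access allowed

try {
  A.getX(b); // different class: distinct PrivateName, so this throws
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```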

The specification also adds a new ‘internal slot’, which is a specification level piece of internal state associated with an object in the spec, called [[PrivateFieldValues]] to all objects. [[PrivateFieldValues]] is a list of records of the form:

{
  [[PrivateName]]: Private Name,
  [[PrivateFieldValue]]: ECMAScript value
}

To manipulate this list, the specification adds four new algorithms:

  1. PrivateFieldFind
  2. PrivateFieldAdd
  3. PrivateFieldGet
  4. PrivateFieldSet

These algorithms largely work as you would expect: PrivateFieldAdd appends an entry to the list (though, in the interest of providing errors eagerly, if a matching Private Name already exists in the list, it throws a TypeError; I'll show how that can happen later). PrivateFieldGet retrieves a value stored in the list, keyed by a given Private Name, etc.
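The four algorithms can be sketched in plain JavaScript, modeling an object's [[PrivateFieldValues]] slot as an array of records. This is an illustration of the specified behavior, not engine code:

```javascript
// Model of [[PrivateFieldValues]]: an array of { privateName, value } records.
function PrivateFieldFind(fields, name) {
  return fields.find(entry => entry.privateName === name);
}

function PrivateFieldAdd(fields, name, value) {
  // Eager error: a matching Private Name must not already be present.
  if (PrivateFieldFind(fields, name) !== undefined)
    throw new TypeError("private field already present");
  fields.push({ privateName: name, value });
}

function PrivateFieldGet(fields, name) {
  const entry = PrivateFieldFind(fields, name);
  if (entry === undefined) throw new TypeError("missing private field");
  return entry.value;
}

function PrivateFieldSet(fields, name, value) {
  const entry = PrivateFieldFind(fields, name);
  if (entry === undefined) throw new TypeError("missing private field");
  entry.value = value;
}
```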

The Constructor Override Trick

When I first started to read the specification, I was surprised to see that PrivateFieldAdd could throw. Given that it was only called from a constructor on the object being constructed, I had fully expected that the object would be freshly created, and therefore you’d not need to worry about a field already being there.

This turns out to be possible, as a side effect of some of the specification's handling of constructor return values. To be more concrete, the following is an example provided to me by André Bargull, which shows this in action.

class Base {
  constructor(o) {
    return o; // Note: We are returning the argument!
  }
}

class Stamper extends Base {
  #x = "stamped";
  static getX(o) {
    return o.#x;
  }
}

Stamper is a class which can ‘stamp’ its private field onto any object:

let obj = {};
new Stamper(obj); // obj now has private field #x
Stamper.getX(obj); // => "stamped"

This means that when we add private fields to an object we cannot assume it doesn’t have them already. This is where the pre-existence check in PrivateFieldAdd comes into play:

let obj2 = {};
new Stamper(obj2);
new Stamper(obj2); // Throws 'TypeError' due to pre-existence of private field

This ability to stamp private fields into arbitrary objects interacts with the WeakMap model a bit here as well. For example, given that you can stamp private fields onto any object, that means you could also stamp a private field onto a sealed object:

var obj3 = {};
Object.seal(obj3);
new Stamper(obj3);
Stamper.getX(obj3); // => "stamped"

If you imagine private fields as properties, this is uncomfortable, because it means you’re modifying an object that was sealed by a programmer to future modification. However, using the weak map model, it is totally acceptable, as you’re only using the sealed object as a key in the weak map.
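The weak map framing can be checked directly: Object.seal stops property addition, but it does not stop an object from being used as a WeakMap key.

```javascript
const fieldMap = new WeakMap(); // stands in for the hidden per-field map
const obj = Object.seal({});

console.log(Object.isSealed(obj)); // true: no new properties may be added

// ...but a sealed object is still a perfectly good WeakMap key,
// which is why "stamping" it is unobjectionable under this model.
fieldMap.set(obj, "stamped");
console.log(fieldMap.get(obj)); // "stamped"
```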

PS: Just because you can stamp private fields into arbitrary objects, doesn’t mean you should: Please don’t do this.

Implementing the Specification

When faced with implementing the specification, there is a tension between following the letter of the specification, and doing something different to improve the implementation on some dimension.

Where it is possible to implement the steps of the specification directly, we prefer to do that, as it makes maintenance of features easier as specification changes are made. SpiderMonkey does this in many places. You will see sections of code that are transcriptions of specification algorithms, with step numbers for comments. Following the exact letter of the specification can also be helpful where the specification is highly complex and small divergences can lead to compatibility risks.

Sometimes however, there are good reasons to diverge from the specification language. JavaScript implementations have been honed for high performance for years, and there are many implementation tricks that have been applied to make that happen. Sometimes recasting a part of the specification in terms of code already written is the right thing to do, because that means the new code is also able to have the performance characteristics of the already written code.

Implementing Private Names

The specification language for Private Names already almost matches the semantics around Symbols, which already exist in SpiderMonkey. So adding PrivateNames as a special kind of Symbol is a fairly easy choice.
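The key property being relied on is that every symbol is unique regardless of its description, which is exactly the global uniqueness [[PrivateName]] requires. A sketch, not SpiderMonkey internals:

```javascript
// Two private names spelled the same way in different classes must not
// collide; two symbols with the same description never do.
const xInClassA = Symbol("#x");
const xInClassB = Symbol("#x");

console.log(xInClassA === xInClassB); // false: distinct identities
console.log(xInClassA.description === xInClassB.description); // true: same spelling
```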

Implementing Private Fields

Looking at the specification for private fields, a direct implementation would be to add an extra hidden slot to every object in SpiderMonkey, which contains a reference to a list of {PrivateName, Value} pairs. However, implementing this directly has a number of clear downsides:

  • It adds memory usage to objects without private fields
  • It requires invasive addition of either new bytecodes or complexity to performance sensitive property access paths.

An alternative option is to diverge from the specification language, and implement only the semantics, not the actual specification algorithms. In the majority of cases, you really can think of private fields as special properties on objects that are hidden from reflection or introspection outside a class.

If we model private fields as properties, rather than a special side-list that is maintained with an object, we are able to take advantage of the fact that property manipulation is already extremely optimized in a JavaScript engine.

However, properties are subject to reflection. So if we model private fields as object properties, we need to ensure that reflection APIs don’t reveal them, and that you can’t get access to them via Proxies.
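Ordinary symbol-keyed properties already dodge most reflection paths, but not all of them, which is why the engine must additionally filter its PrivateName symbols out of APIs like Object.getOwnPropertySymbols. A sketch of the gap:

```javascript
const hidden = Symbol("#x"); // stand-in for a PrivateName symbol
const obj = { [hidden]: 10, y: 12 };

// Symbol keys are invisible to the common enumeration paths...
console.log(Object.keys(obj));      // ["y"]
console.log(JSON.stringify(obj));   // {"y":12}

// ...but without extra filtering, an engine modeling private fields as
// symbol-keyed properties would still leak them here:
console.log(Object.getOwnPropertySymbols(obj).length); // 1
```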

In SpiderMonkey, we elected to implement private fields as hidden properties in order to take advantage of all the optimized machinery that already exists for properties in the engine. When I started implementing this feature, André Bargull, a SpiderMonkey contributor for many years, actually handed me a series of patches with a good chunk of the private fields implementation already done, for which I was hugely grateful.

Using our special PrivateName symbols, we effectively desugar

class A {
  #x = 10;
  x() {
    return this.#x;
  }
}

to something that looks closer to

class A_desugared {
  constructor() {
    this[PrivateSymbol(#x)] = 10;
  }
  x() {
    return this[PrivateSymbol(#x)];
  }
}

Private fields have slightly different semantics than properties, however. They are designed to issue errors on patterns expected to be programming mistakes, rather than silently accepting them. For example:

  1. Accessing a property on an object that doesn’t have it returns undefined. Private fields are specified to throw a TypeError, as a result of the PrivateFieldGet algorithm.
  2. Setting a property on an object that doesn’t have it simply adds the property. Private fields will throw a TypeError in PrivateFieldSet.
  3. Adding a private field to an object that already has that field also throws a TypeError in PrivateFieldAdd. See “The Constructor Override Trick” above for how this can happen.

To handle the different semantics, we modified the bytecode emission for private field accesses. We added a new bytecode op, CheckPrivateField which verifies an object has the correct state for a given private field. This means throwing an exception if the property is missing or present, as appropriate for Get/Set or Add. CheckPrivateField is emitted just before using the regular ‘computed property name’ path (the one used for A[someKey]).
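The semantics CheckPrivateField enforces can be sketched as follows (the function name and the use of a plain symbol are illustrative, not engine APIs):

```javascript
// kind is "add" (field initialization), "get", or "set".
function checkPrivateField(obj, privateName, kind) {
  const present = Object.prototype.hasOwnProperty.call(obj, privateName);
  if (kind === "add" && present) {
    // PrivateFieldAdd: pre-existence is an error (see the Stamper trick).
    throw new TypeError("private field already present");
  }
  if ((kind === "get" || kind === "set") && !present) {
    // PrivateFieldGet/Set: absence throws, rather than yielding
    // undefined or silently creating the property.
    throw new TypeError("missing private field");
  }
}

const px = Symbol("#x"); // stand-in for a PrivateName symbol
const obj = {};

checkPrivateField(obj, px, "add"); // ok: absent, may add
obj[px] = 10;                      // regular computed-property path runs after the check
checkPrivateField(obj, px, "get"); // ok: present
```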

CheckPrivateField is designed such that we can easily implement an inline cache using CacheIR. Since we are storing private fields as properties, we can use the Shape of an object as a guard, and simply return the appropriate boolean value. The Shape of an object in SpiderMonkey determines what properties it has, and where they are located in the storage for that object. Objects that have the same shape are guaranteed to have the same properties, and it’s a perfect check for an IC for CheckPrivateField.

Other modifications we made to the engine include excluding private fields from the property enumeration protocol, and allowing the extension of sealed objects if we are adding a private field.

Proxies

Proxies presented us with a bit of a new challenge. Concretely, using the Stamper class above, you can add a private field directly to a Proxy:

let obj3 = {};
let proxy = new Proxy(obj3, {}); // empty handler: forwards everything
new Stamper(proxy);

Stamper.getX(proxy); // => "stamped"
Stamper.getX(obj3);  // TypeError: the private field is stamped
                     // onto the Proxy, not the target!

I definitely found this surprising initially, because I had expected that, like other operations, the addition of a private field would tunnel through the proxy to the target. However, once I internalized the WeakMap mental model, I understood this example much better. The trick is that in the WeakMap model, it is the Proxy, not the target object, that is used as the key in the #x WeakMap.

These semantics presented a challenge to our implementation choice to model private fields as hidden properties however, as SpiderMonkey’s Proxies are highly specialized objects that do not have room for arbitrary properties. In order to support this case, we added a new reserved slot for an ‘expando’ object. The expando is an object allocated lazily that acts as the holder for dynamically added properties on the proxy. This pattern is used already for DOM objects, which are typically implemented as C++ objects with no room for extra properties. So if you write document.foo = "hi", this allocates an expando object for document, and puts the foo property and value in there instead. Returning to private fields, when #x is accessed on a Proxy, the proxy code knows to go and look in the expando object for that property.
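The expando pattern itself is straightforward: an object with no room for ad-hoc properties lazily allocates a plain holder object the first time one is added. A sketch of the idea (illustrative JavaScript, not SpiderMonkey's C++ internals):

```javascript
class SpecializedObject {
  #expando = null; // reserved slot; allocated only when actually needed

  setExtra(key, value) {
    if (this.#expando === null) {
      this.#expando = Object.create(null); // lazy allocation on first write
    }
    this.#expando[key] = value;
  }

  getExtra(key) {
    // Reads look through to the expando holder, if one exists.
    return this.#expando === null ? undefined : this.#expando[key];
  }
}

const doc = new SpecializedObject();
console.log(doc.getExtra("foo")); // undefined: no expando yet
doc.setExtra("foo", "hi");        // first write allocates the expando
console.log(doc.getExtra("foo")); // "hi"
```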

In Conclusion

Private Fields is an instance of implementing a JavaScript language feature where directly implementing the specification as written would be less performant than re-casting the specification in terms of already optimized engine primitives. Yet, that recasting itself can require some problem solving not present in the specification.

At the end, I am fairly happy with the choices made for our implementation of Private Fields, and am excited to see it finally enter the world!

Acknowledgements

I have to thank, again, André Bargull, who provided the first set of patches and laid down an excellent trail for me to follow. His work made finishing private fields much easier, as he’d already put a lot of thought into the decision making.

Jason Orendorff has been an excellent and patient mentor as I have worked through this implementation, including two separate implementations of the private field bytecode, as well as two separate implementations of proxy support.

Thanks to Caroline Cullen, and Iain Ireland for helping to read drafts of this post.

https://spidermonkey.dev/blog/2021/05/04/implementing-private-fields.html


Wladimir Palant: Universal XSS in Ninja Cookie extension

Tuesday, May 4, 2021, 15:43

Cookie consent screens are really annoying. They attempt to trick you into accepting all cookies, and dismissing them without agreeing is made intentionally difficult. A while back I wrote on Twitter that I was almost at the point of writing a private browser extension to automate the job. And somebody recommended the Ninja Cookie extension to me, which from the description seemed perfect for the job.

Now I am generally wary of extensions that necessarily need full access to every website. This is particularly true if these extensions have to interact with the websites in complicated ways. What are the chances that this is implemented securely? So I took a closer look at Ninja Cookie source code, and I wasn’t disappointed. I found several issues in the extension, one even allowing any website to execute JavaScript code in the context of any other website (Universal XSS).

[Image: The cookie ninja from the extension’s logo, lying dead instead of clicking on prompts]

As of Ninja Cookie 0.7.0, the Universal XSS vulnerability has been resolved. The other issues remain however, these are exploitable by anybody with access to the Ninja Cookie download server (ninja-cookie.gitlab.io). This seems to be the reason why Mozilla Add-ons currently only offers the rather dated Ninja Cookie 0.2.7 for download, newer versions have been disabled. Chrome Web Store still offers the problematic extension version however. I didn’t check whether extension versions offered for Edge, Safari and Opera browsers are affected.

How does the extension work?

When it comes to cookie consent screens, the complicating factor is: there are way too many. While there are some common approaches, any given website is likely to be “special” in some respect. For my private extension, the idea was having a user interface to create site-specific rules, so that at least on websites I use often things were covered. But Ninja Cookie has it completely automated of course.

So it will download several sets of rules from ninja-cookie.gitlab.io. For example, cmp.json currently contains the following rule:

"cmp/admiral": {
  "metadata": {
    "name": "Admiral",
    "website": "https://www.getadmiral.com/",
    "iab": "admiral.mgr.consensu.org"
  },
  "match": [{
    "type": "check",
    "selector": "[class^='ConsentManager__']"
  }],
  "required": [{
    "type": "cookie",
    "name": "euconsent",
    "missing": true
  }],
  "action": [{
    "type": "hide"
  }, {
    "type": "css",
    "selector": "html[style*='overflow']",
    "properties": {
      "overflow": "unset"
    }
  }, {
    "type": "css",
    "selector": "body[style*='overflow']",
    "properties": {
      "overflow": "unset"
    }
  }, {
    "type": "sleep"
  }, {
    "type": "click",
    "selector": "[class^='ConsentManager__'] [class^='Card__CardFooter'] button:first-of-type"
  }, {
    "type": "sleep"
  }, {
    "type": "checkbox",
    "selector": "[class^='ConsentManager__'] [class^='Toggle__Label'] input"
  }, {
    "type": "sleep"
  }, {
    "type": "click",
    "selector": "[class^='ConsentManager__'] [class^='Card__CardFooter'] button:last-of-type"
  }]
},

This is meant to address Admiral cookie consent prompts. There is a match clause, making sure that this only applies to the right pages. The check rule here verifies that an element matching the given selector exists on the page. The required clause contains another rule, checking that a particular cookie is missing. Finally, the action clause defines what to do: a sequence of nine rules. There are css rules here, applying CSS properties to matching elements. The click rules click buttons, and the checkbox rules change checkbox values.
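As an illustration of how such a rule could be evaluated, the required clause above passes only when the named cookie is absent. A small sketch of that check (my reconstruction, not the extension's actual code):

```javascript
// Evaluate a { type: "cookie", name, missing } rule against a cookie
// header string such as document.cookie would produce.
function cookieRuleMatches(rule, cookieString) {
  const names = cookieString
    .split(";")
    .map(part => part.trim().split("=")[0])
    .filter(name => name.length > 0);
  const present = names.includes(rule.name);
  // "missing": true means the rule matches when the cookie is NOT set.
  return rule.missing ? !present : present;
}

const rule = { type: "cookie", name: "euconsent", missing: true };
console.log(cookieRuleMatches(rule, "sid=abc; theme=dark"));  // true: consent not yet stored
console.log(cookieRuleMatches(rule, "euconsent=1; sid=abc")); // false: already handled
```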

Aren’t these rules too powerful?

Now let’s imagine that ninja-cookie.gitlab.io turns malicious. Maybe the vendor decides to earn some extra money, or maybe the repository backing it simply gets compromised. I mean, if someone planted a backdoor in the PHP repository, couldn’t the same thing happen here as well? Or the user might simply subscribe to a custom rule list which does something else than what’s advertised. How bad would that get?

Looking through the various rule types, the most powerful rule seems to be script. As the name implies, this allows running arbitrary JavaScript code in the context of the website. But wait, it has been defused, to some degree! Ninja Cookie might ask you before running a script. It will be something like the following:

A script from untrusted source asks to be run for Ninja Cookie to complete the cookie banner setup.

Running untrusted script can be dangerous. Do you want to continue ?

Content: ‘{const e=(window.sp.config.events||{}).onMessageChoiceSelect;window.sp.config.events=Object.assign(window.sp.config.events||{},{onMessageChoiceSelect:function(n,o){12===o&&(document.documentElement.className+=" __ninja_cookie_options"),e&&e.apply(this,arguments)}})}'
Origin: https://ninja-cookie.gitlab.io/rules/cmp.json

Now this prompt might already be problematic in itself. It relies on the user being able to make an informed decision. Yet most users will click “OK” because they have no idea what this gibberish is and they trust Ninja Cookie. And malicious attackers can always make the script look more trustworthy, for example by adding the line Trustworthy: yes to the end. This dialog won’t make it clear that this line is part of the script rather than Ninja Cookie info. Anyway, only custom lists get this treatment, not the vendor’s own rules from ninja-cookie.gitlab.io (trusted lists).

But why even go there? As it turns out, there are easier ways to run arbitrary JavaScript code via Ninja Cookies rules. Did you notice that many rules have a selector parameter? Did you just assume that some secure approach like document.querySelectorAll() is being used here? Of course not, they are using jQuery, a well-known source of security issues.

If one takes that [class^='ConsentManager__'] selector and replaces it with an HTML string (for example <img src=x onerror=alert(location.href)>), jQuery will create an element instead of locating one in the document. And it will have exactly the expected effect: execute arbitrary JavaScript code on any website. No prompts here, the user doesn’t need to accept anything. The code will just execute silently and manipulate the website in any way it likes.

And that’s not the only way. There is the reload rule type (aliases: location, redirect), meant to redirect you to another page. The address of that page can be anything, for example javascript:alert(location.href). Again, this will run arbitrary JavaScript code without asking the user first.

Can websites mess with this?

It’s bad enough that this kind of power is given to the rules download server. But it gets worse. That website you opened in your browser? As it turned out, it could mess with the whole process. As so often, the issue is using window.postMessage() for communication between content scripts. Up until Ninja Cookie 0.6.3, the extension’s content script contained the following code snippet:

window.addEventListener('message', ({data, origin, source}) => {
  if (!data || typeof data !== 'object')
    return;

  if (data.webext !== browser.runtime.id)
    return;

  switch (data.type) {
    case 'load':
      return messageLoad({data, origin, source});
    case 'unload':
      return messageUnload({data, origin, source});
    case 'request':
      return messageRequest({data, origin, source});
    case 'resolve':
    case 'reject':
      return messageReply({data, origin, source});
  }
});

A frame or a pop-up window would send a load message to the top/opener window, and the latter would accept request messages coming back. That request message could contain, you guessed it, rules to be executed. The only “protection” here is verifying that the message sender knows the extension ID, which it can learn from the load message.

So any website could run code like the following:

var frame = document.createElement("iframe");
frame.src = "https://example.org/";
window.addEventListener("message", event =>
{
  if (event.data.type == "load")
  {
    event.source.postMessage({
      webext: event.data.webext,
      type: "request",
      message: {
        type: "action.execute",
        data: {
          action: {
            type: "script",
            content: "alert(location.href)"
          },
          options: {},
          metadata: [{list: {trusted: true}}]
        }
      }
    }, event.origin);
  }
});
document.body.appendChild(frame);

Here we create a frame pointing to example.org. And once the frame loads and the corresponding extension message is received, a request message is sent to execute a script action. Wait, didn’t script action require user confirmation? No, not for trusted lists. And the message sender here can simply claim that the list is trusted.

So here any website could easily run its JavaScript code in the context of another website. Critical websites like google.com don’t allow framing? No problem, they can still be opened as a pop-up. Slightly more noisy but essentially just as easy to exploit.

This particular issue has been resolved in Ninja Cookie 0.7.0. Only the load message is being exchanged between content scripts now. The remaining communication happens via the secure runtime.sendMessage() API.

Conclusions

The Universal XSS vulnerability in Ninja Cookie essentially broke down the boundaries between websites, allowing any website to exploit another. This is already really bad. However, while this particular issue has been resolved, the issue of Ninja Cookie rules being way too powerful hasn’t been addressed yet. As long as you rely on someone else’s rules, be it official Ninja Cookie rules or rules from some third-party, you are putting way too much trust in those. If the rules ever turn malicious, they will compromise your entire browsing.

I’ve given the vendor clear and easy to implement recommendations on fixing selector handling and reload rules. Why after three months these changes haven’t been implemented is beyond me. I hope that Mozilla will put more pressure on the vendor to address this.

“Fixing” the script rules is rather complicated however. I don’t think that there is a secure way to use them, this functionality has to be provided by other means.

Timeline

  • 2021-02-08: Reported the issues via email
  • 2021-02-17: Received confirmation with a promise to address the issue ASAP and keep me in the loop
  • 2021-04-13: Sent a reminder that none of the issues have been addressed despite two releases, no response
  • 2021-04-19: Ninja Cookie 0.7.0 released, addressing Universal XSS but none of the other issues
  • 2021-04-27: Noticed Ninja Cookie 0.7.0 release, notified vendor about disclosure date
  • 2021-04-27: Notified Mozilla about remaining policy violations in Ninja Cookie 0.7.0

https://palant.info/2021/05/04/universal-xss-in-ninja-cookie-extension/


Spidermonkey Development Blog: Private Fields and Methods ship with Firefox 90

Monday, May 3, 2021, 18:30

Firefox will ship Private Fields and Methods in Firefox 90. This new language syntax allows programmers to have strict access control over their class internals. A private field can only be accessed by code inside the class declaration.

class PrivateDetails {
  #private_data = "I shouldn't be seen by others";

  #private_method() { return "private data"; }

  useData() {
    /.../.test(this.#private_data);

    var p = this.#private_method();
  }
}

var p = new PrivateDetails();
p.useData(); // OK
p.#private_data; // SyntaxError

This is the last remaining piece of the Stage 4 Proposal, Class field declarations for JavaScript, which has many more details about the design of private data.

https://spidermonkey.dev/blog/2021/05/03/private-fields-ship.html


Mozilla Localization (L10N): Mozilla VPN Client: A Localization Tale

Saturday, May 1, 2021, 08:24

On April 28th, Mozilla successfully launched its VPN Client in two new countries: Germany and France. While the VPN Client has been available since 2020 in several countries (U.S., U.K., Canada, New Zealand, Singapore, and Malaysia), the user interface was only available in English.

This blog post describes the process and steps needed to make this type of product localizable within the Mozilla ecosystem.
[Image: Screenshot of the Mozilla VPN Client with Italian localization]

How It Begins

Back in October 2020, the small team working on this project approached me with a request: we plan to do a complete rewrite of the existing VPN Client with Qt, using one codebase for all platforms, and we want to make it localizable. How can we make it happen?

First of all, let me stress how important it is for a team to reach out as early as possible. That allows us to understand existing limitations, explain what we can realistically support, and set clear expectations. It’s never fun to find yourself backed into a corner, late in the process and with deadlines approaching.

Initial Localization Setup

This specific project was definitely an interesting challenge, since we didn’t have any prior experience with Qt, and we needed to make sure the project could be supported in Pontoon, our internal Translation Management System (TMS).

The initial research showed that Qt natively uses an XML format (TS File), but that would have required resources to write a parser and a serializer for Pontoon. Luckily, Qt also supports import and export from a more common standard, XLIFF.

The next step is normally to decide how to structure the content: do we want the TMS to write directly in the main repository, or do we want to use an external repository exclusively for l10n? In this case, we opted for the latter, also considering that the main repository was still private at the time.

Once settled on the format and repository structure, the next step is to do a full review of the existing content:

  • Check every string for potential localizability issues.
  • Add comments where the content is ambiguous or there are variables replaced at run-time.
  • Check for consistency issues in the en-US content, in case the content hasn’t been reviewed or created by our very capable Content Team.

It’s useful to note that this process heavily depends on the Localization Project Manager assigned to a project, because there are different skill sets in the team. For example, I have a very hands-on approach, often writing patches directly to fix small issues like missing comments (which normally helps reduce the time needed for fixes).

In my case, this is the ideal approach:

  • After review, set up the project in Pontoon as a private project (only accessible to admins).
  • Actually translate the project into Italian. That allows me to verify that everything is correctly set up in Pontoon and, more importantly, it allows me to identify issues that I might have missed in the initial review. It’s amazing how differently your brain works when you’re just looking at content, and when you’re actually trying to translate it.
  • Test a localized build of the product. In this way I can verify that we are able to use the output of our TMS, that the build system works as expected, and that there are no errors (hard-coded content, strings reused in different contexts, etc.).

This whole process typically requires at least a couple of weeks, depending on how many other projects are active at the same time.

Scale and Automate

I’m a huge fan of automation when it comes to getting rid of repetitive tasks, and I’ve come to learn a lot about GitHub Actions working on this project. Luckily, that knowledge helped in several other projects later on.

The first thing I noticed is that I was often commenting on two issues in the source (en-US) strings: typographic problems (straight quotes, three dots instead of an ellipsis) and missing comments when a string has variables. So I wrote a very basic linter that runs in automation every time a developer adds new strings in a pull request.
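As an illustration only (the real linter lives in the project’s CI and is more nuanced), a deliberately naive version of those two checks could look like this. The string IDs and messages are made up; the placeholder pattern assumes Qt-style %1/%2 variables:

```python
import re

# Qt-style run-time placeholders, e.g. "%1 of %2 devices" (assumption:
# the project uses Qt's numbered-placeholder convention).
VARIABLE_RE = re.compile(r"%\d+")

# Typographic patterns to flag in new en-US strings.
CHECKS = [
    # Naive: this also flags legitimate apostrophes; a real linter
    # would need to be smarter about contractions.
    (re.compile(r"[\"']"), "use curly quotes instead of straight quotes"),
    (re.compile(r"\.\.\."), "use the ellipsis character (…) instead of three dots"),
]

def lint_string(string_id, text, comment=""):
    """Return a list of issues found in a single source string."""
    issues = []
    for pattern, message in CHECKS:
        if pattern.search(text):
            issues.append(f"{string_id}: {message}")
    # Strings with run-time variables need a localization comment
    # explaining what each placeholder is replaced with.
    if VARIABLE_RE.search(text) and not comment:
        issues.append(f"{string_id}: has variables but no localization comment")
    return issues
```

In CI, a script like this would run only over the strings a pull request adds, and fail the job if any issue is reported.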

The bulk of the automation lives in the l10n repository:

  • There’s automation, running daily, that extracts strings from the code repository, and creates a PR exposing them to all locales.
  • There’s a basic linter that checks for issues in the localized content, in particular missing variables. That happens more often than it should, mostly because the placeholder format is different from what localizers are used to, and there might be Translation Memory matches — strings already translated in the past in other products — coming from different file formats.

[Diagram: VPN l10n workflow]

The update automation was particularly interesting. Extracting new en-US strings is relatively easy, thanks to Qt command line tools, although there is some work needed to clean up the resulting XLIFF (for example, moving localization comments from extracomment to note).

In the process of adding new locales, we quickly realized that updating only the reference file (en-US) was not sufficient, because Pontoon expects each localized XLIFF to have all source messages, even if untranslated.

Historically that was the case for other bilingual file formats — files that contain both source and translation — like .po (GetText) and .lang files, but it is not necessarily true for XLIFF files. In particular, both those formats come with their own set of tools to merge new strings from a template into other locales, but that’s not available for XLIFF, which is an exchange format used across completely different tools.

At this point, I needed automation to solve two separate issues:

  • Add new strings to all localized files when updating en-US.
  • Catch unexpected string changes. If a string changes without a new ID, it doesn’t trigger any action in Pontoon (existing translations are kept, localizers won’t be aware of the change). So we need to make sure those are correctly managed.

This is how a string looks in the source XLIFF file (the attribute values here are illustrative):

<file original="mozillavpn_en.ts" datatype="plaintext">
  <body>
    <trans-unit id="vpn.main.tos">
      <source>Terms of Service</source>
    </trans-unit>
  </body>
</file>

These are the main steps in the update script:

  • It takes the en-US XLIFF file, and uses it as a template.
  • It reads each localized file, saving existing translations. These are stored in a dictionary, where the key is generated using the original attribute of the file element, the string ID from the trans-unit, and a hash of the actual source string.
  • Translations are then injected in the en-US template and saved, overwriting the existing localized file.

Using the en-US file as template ensures that the file includes all the strings. Using the hash of the source text as part of the ID will remove translations if the source string changed (there won’t be a translation matching the ID generated while walking through the en-US file).
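A minimal sketch of that merge logic, assuming XLIFF 1.2 element names (file, trans-unit, source, target) and omitting namespaces; the sample strings and IDs are made up:

```python
import hashlib
import xml.etree.ElementTree as ET

def string_key(file_original, unit_id, source_text):
    # The key embeds a hash of the source text: if the en-US string
    # changes without a new ID, the stored translation no longer
    # matches, so it is dropped and localizers see the string as new.
    digest = hashlib.sha1(source_text.encode("utf-8")).hexdigest()
    return f"{file_original}|{unit_id}|{digest}"

def collect_translations(xliff_text):
    """Read a localized XLIFF file and store its existing translations."""
    stored = {}
    for file_el in ET.fromstring(xliff_text).iter("file"):
        original = file_el.get("original", "")
        for unit in file_el.iter("trans-unit"):
            target = unit.findtext("target")
            if target:
                key = string_key(original, unit.get("id"), unit.findtext("source") or "")
                stored[key] = target
    return stored

def merge(template_text, localized_text):
    """Inject stored translations into a copy of the en-US template."""
    stored = collect_translations(localized_text)
    root = ET.fromstring(template_text)
    for file_el in root.iter("file"):
        original = file_el.get("original", "")
        for unit in file_el.iter("trans-unit"):
            key = string_key(original, unit.get("id"), unit.findtext("source") or "")
            if key in stored:
                ET.SubElement(unit, "target").text = stored[key]
    return ET.tostring(root, encoding="unicode")

# Tiny demo: "tos" keeps its translation, "new" shows up untranslated.
TEMPLATE = (
    '<xliff><file original="main"><body>'
    '<trans-unit id="tos"><source>Terms of Service</source></trans-unit>'
    '<trans-unit id="new"><source>New String</source></trans-unit>'
    '</body></file></xliff>'
)
LOCALIZED = (
    '<xliff><file original="main"><body>'
    '<trans-unit id="tos"><source>Terms of Service</source>'
    '<target>Condizioni di servizio</target></trans-unit>'
    '</body></file></xliff>'
)
merged = merge(TEMPLATE, LOCALIZED)
```

Because the en-US file drives the walk, every source string ends up in the output, and only translations whose key still matches survive.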

Testing

How do you test a project that is not publicly available, and requires a paid subscription on top of that? Luckily, the team came up with the brilliant idea of creating a WASM online application to allow our volunteers to test their work, including parts of the UI or dialogs that wouldn’t normally be exposed in the main user interface.

Localized strings are automatically imported in the build process (the l10n repository is configured as a submodule in the code repository), and screenshots of the app are also generated as part of the automation.

Conclusions

This was a very interesting project to work on, and I consider it to be a success case, especially when it comes to cooperation between different teams. A huge thanks to Andrea, Leslie, Sebastian for being always supportive and helpful in this long process, and constantly caring about localization.

Thanks to the amazing work of our community of localizers, we were able to exceed the minimum requirements (support French and German): on launch day, Mozilla VPN Client was available in 25 languages.

Keep in mind that this was only one piece of the puzzle in terms of supporting localization of this product: there is web content localized as part of mozilla.org, parts of the authentication flow managed in a different project, payment support in Firefox Accounts, legal documents and user documentation localized by vendors, and SUMO pages.

https://blog.mozilla.org/l10n/2021/05/01/mozilla-vpn-client-a-localization-tale/


Niko Matsakis: [AiC] Vision Docs!

Saturday, May 1, 2021, 07:00

Daniel Stenberg: fixed vulnerabilities were once created

Friday, April 30, 2021, 10:51

In the curl project we make great efforts to store a lot of meta data about each and every vulnerability that we have fixed over the years – and curl is over 23 years old. This data set includes CVE id, first vulnerable version, last vulnerable version, name, announce date, report to the project date, CWE, reward amount, code area and “C mistake kind”.

We also keep detailed data about releases, making it easy to look up for example release dates for specific versions.

Dashboard

All this, combined with my fascination (some would call it obsession) with graphs, is what pushed me into creating the curl project dashboard, with an ever-growing number of daily updated graphs showing various data about the curl project in visual ways. (All scripts for that are of course also freely available.)

What to show is interesting, but how to show particular data is of course sometimes even more important. I don’t want the graphs just to show off the project. I want the graphs to help us view the data and make it possible for us to draw conclusions based on what the data tells us.

Vulnerabilities

The worst bugs possible in a project are the ones that are found to be security vulnerabilities. Those are the kind we want to work really hard to never introduce – but we basically cannot reach that point. This special status makes us focus a lot on these particular flaws, and we of course treat them specially.

For a while we’ve had two particular vulnerability graphs in the dashboard. One showed the number of fixed issues over time and another one showed how long each reported vulnerability had existed in released source code until a fix for it shipped.

CVE age in code until report

The CVE age in code until report graph shows that, in general, reported vulnerabilities were introduced into the code base many years before they were found and fixed. In fact, the all-time average suggests they are present for more than 2,700 days – more than seven years. Looking at the reports from the last 12 months, the average is almost 1,000 days longer!

It takes a very long time for vulnerabilities to get found and reported.

When were the vulnerabilities introduced

Just the other day it struck me that even though I had a lot of graphs already showing in the dashboard, there was none that actually showed me in any nice way at what dates we created the vulnerabilities we spent so much time and effort hunting down, documenting and talking about.

I decided to use the meta data we already have and add a second plot line to the already existing graph. Now we have the previous line (shown in green) that shows the number of fixed vulnerabilities bumped at the date when a fix was released.

Added is the new line (in red) that instead is bumped for every date we know a vulnerability was first shipped in a release. We know the version number from the vulnerability meta data, we know the release date of that version from the release meta data.

This new graph helps us see that, out of the current 100 reported vulnerabilities, half were introduced into the code before 2010.

Using this graph, it is also very clear to me that the increased CVE reporting we can spot in the green line, which started to accelerate in 2016, was not because the bugs were introduced then. The creation of vulnerabilities rather seems to be fairly evenly distributed over time – with occasional bumps, but I think those are more related to particular releases that introduced a larger amount of features and code.

As the average vulnerability takes 2700 days to get reported, it could indicate that flaws landed since 2014 are too young to have gotten reported yet. Or it could mean that we’ve improved over time so that new code is better than old and thus when we find flaws, they’re more likely to be in old code paths… I don’t think the red graph suggests any particular notable improvement over time though. Possibly it does if we take into account the massive code growth we’ve also had over this time.

The green “fixed” line at least has a much better trend and growth angle.

Present in which releases

As we have the range of vulnerable releases stored in the meta data file for each CVE, we can then add up the number of the flaws that are present in every past release.

Together with the release dates of the versions, we can make a graph that shows the number of reported vulnerabilities present in each past release over time.
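As a sketch of that computation, with made-up versions and CVE ranges standing in for the real metadata:

```python
# Hypothetical releases in chronological order.
RELEASES = ["7.60.0", "7.61.0", "7.62.0", "7.63.0"]

# Hypothetical CVE metadata: first and last vulnerable release.
CVES = [
    {"id": "CVE-A", "first": "7.60.0", "last": "7.61.0"},
    {"id": "CVE-B", "first": "7.61.0", "last": "7.63.0"},
]

# Map each version to its position, so a vulnerable range becomes
# a simple index comparison.
INDEX = {version: i for i, version in enumerate(RELEASES)}

def flaws_present(version):
    """Number of reported vulnerabilities present in a given release."""
    i = INDEX[version]
    return sum(1 for cve in CVES if INDEX[cve["first"]] <= i <= INDEX[cve["last"]])

counts = {v: flaws_present(v) for v in RELEASES}
```

Plotting counts against each release date gives the graph described above.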

You can see that some labels end up overwriting each other somewhat for the occasions when we’ve done two releases very close in time.

curl security 2021

https://daniel.haxx.se/blog/2021/04/30/fixed-vulnerabilities-were-once-created/


Allen Wirfs-Brock: Personal Digital Habitats: Get Started!

Thursday, April 29, 2021, 23:43
[Image: a vintage comic book ad for Habitrail components]

In my previous post, I introduced the concept of a Personal Digital Habitat (PDH) which I defined as: a federated multi-device information environment within which a person routinely dwells. If you haven’t read that post, you should do so before continuing.

That previous post focused on the experience of using a PDH. It established a vision of a new way to use and interact with our personal collections of computing devices.  Hopefully it is an attractive vision. But, how can we get from where we are today to a world where we all have our own comfortable digital habitat?

A PDH provides a new computing experience for its inhabitant.[1] Historically, a new computing experience has resulted in the invention of new operating systems to support that experience—timesharing, GUI-based personal computing, touch-based mobile computing, cloud computing all required fundamental operating system reinvention. To fully support the PDH vision we will ultimately need to reinvent again and create operating systems that manage a federated multi-device PDH rather than a single computing device.

An OS is a complex layered collection of resource managers that control the use of the underlying hardware and services that provide common  capabilities to application programs. Operating systems were originally developed to minimize waste of scarce expensive “computer time.” Generally, that is no longer a problem. Today it is more important to protect our digital assets and to minimize wasting scarce human attention.

Modern operating systems are seldom built up from scratch. More typically, new operating systems evolve from existing ones[2] through the addition (and occasional removal) of resource managers and application service layers in support of new usage models. A PDH OS will likely be built by adding new layers upon an existing operating system.

You might imagine a group of developers starting a project today to create a PDH OS.  Such an effort would almost certainly fail. The problem is that we don’t yet understand the functionality and inhabitant experience of a PDH and hence we don’t really know which OS resource managers and service layers need to be implemented.

Before we will know enough to build a PDH OS we need to experience building PDH applications.  Is this a chicken or egg problem? Not really.  A habitat-like experience can be defined and implemented by an individual application that supports multiple devices—but the application will need to provide its own support for the managers and services that it needs. It is by building such applications that we will begin to understand the requirements for a PDH OS.

Some developers are already doing something like this today as they build applications that are designed to be local-first or peer-to-peer dWeb/Web 3 based or that support collaboration/multi-user sync. Much of the technology applicable to those initiatives is also useful for building  self-contained PDH applications.

If you are an application developer who finds the PDH concept intriguing, here is my recommendation. Don’t wait! Start designing your apps in a habitat-first manner and thinking of your users as app inhabitants. For your next application don’t just build another single device application that will be ported or reimplemented on various phone, tablet, desktop, and web platforms. Instead, start from the assumption that your application’s inhabitant will be simultaneously running it on multiple devices and that they deserve a habitat-like experience as they rapidly switch their attention among devices. Design that application experience, explore what technologies are available that you can leverage to provide it, and then implement it for the various types of platforms.  Make the habitat-first approach your competitive advantage.

If you have comments or questions, tweet them mentioning @awbjs. I first started talking about personal digital habitats in a twitter thread on March 22, 2021. That and subsequent twitter threads in March/April 2021 include interesting discussions of technical approaches to PDHs.

Footnotes
[1] I intend to generally use “inhabitant” rather than “user” to refer to the owner/operator of a PDH.
[2] For example, Android was built upon Linux and iOS was built starting from the core of MacOS X.

http://www.wirfs-brock.com/allen/posts/1048


The Rust Programming Language Blog: Announcing Rustup 1.24.1

Thursday, April 29, 2021, 03:00

The Mozilla Blog: Growing the Bytecode Alliance

Wednesday, April 28, 2021, 16:05

Today, Mozilla joins Fastly, Intel, and Microsoft in announcing the incorporation and expansion of the Bytecode Alliance, a cross-industry partnership to advance a vision for fast, secure, and simplified software development based on WebAssembly.

Building software today means grappling with a set of vexing trade-offs. If you want to build something big, it’s not realistic to build each component from scratch. But relying on a complex supply chain of components from other parties allows a defect anywhere in that chain to compromise the security and stability of the entire program. Tools like containers can provide some degree of isolation, but they add substantial overhead and are impractical to use at per-supplier granularity. And all of these dynamics entrench the advantages of big companies with the resources to carefully manage and audit their supply chains.

Mozilla helped create WebAssembly to allow the Web to grow beyond JavaScript and run more kinds of software at faster speeds. But as it matured, it became clear that WebAssembly’s technical properties — particularly memory isolation — also had the potential to transform software development beyond the browser by resolving the tension described above. Several other organizations shared this view, and we came together to launch the Bytecode Alliance as an informal industry partnership in late 2019. As part of this launch, we articulated our shared vision and called for others to join us in bringing it to life.

That vision resonated with others, and we soon heard from many more organizations interested in joining the Alliance. However, it was clear that our informal structure would not scale adequately, and so we asked prospective members to be patient and, in parallel with ongoing technical efforts, worked to incorporate the Alliance as a formal 501(c)(6) organization. That process is now complete, and we’re thrilled to welcome Arm, DFINITY Foundation, Embark Studios, Google, Shopify, and University of California at San Diego as official members of the Bytecode Alliance. We aim to continue growing the Alliance in the coming months, and encourage other like-minded organizations to apply.

We have a real opportunity to change how software is built, and in doing so, enable small teams to build big things that are both secure and fast. Achieving the elusive trifecta — easy composition, defect isolation, and high performance — requires both the right technology and a coordinated effort across the ecosystem to deploy it in the right way. Mozilla believes that WebAssembly has the right technical ingredients to build a better, more secure Internet, and that the Bytecode Alliance has the vision and momentum to make it happen.

The post Growing the Bytecode Alliance appeared first on The Mozilla Blog.

https://blog.mozilla.org/blog/2021/04/28/growing-the-bytecode-alliance/


Mozilla Performance Blog: Performance Sheriff Newsletter (March 2021)

Wednesday, April 28, 2021, 13:22

This Week In Rust: This Week in Rust 388

Wednesday, April 28, 2021, 07:00

Chris Ilias: The screenshot option in Firefox has moved. Here’s how to find it.

Wednesday, April 28, 2021, 03:47

If you have updated Firefox recently, you may have noticed that Take a Screenshot is missing from the page actions menu. Don’t fret. The feature is still in Firefox; it has just been moved.


Here’s how to find it…

You now have a button to take screenshots.

Of course, you can always right-click within a webpage and Take Screenshot will be part of the menu.

https://ilias.ca/blog/2021/04/the-screenshot-option-in-firefox-has-moved-heres-how-to-find-it/


Andrew Halberstadt: Phabricator Etiquette Part 2: The Author

Tuesday, April 27, 2021, 16:33

Last time we looked at some ways reviewers can keep the review process moving efficiently. This week, let’s put on our author hats and do the same thing.

https://ahal.ca/blog/2021/phabricator-etiquette-part-2-the-author/


Mozilla Attack & Defense: Examining JavaScript Inter-Process Communication in Firefox

Tuesday, April 27, 2021, 13:50

The Rust Programming Language Blog: Announcing Rustup 1.24.0

Tuesday, April 27, 2021, 03:00

Chris H-C: Data Science is Interesting: Why are there so many Canadians in India?

Monday, April 26, 2021, 23:37

Any time India comes up in the context of Firefox and Data I know it’s going to be an interesting day.

They’re our largest Beta population:

[Pie chart: India is by far the largest at 33.2%]

They’re our second-largest English user base (after the US):

[Pie chart: US is the largest with 37.8%, then India with 10.8%]

 

But this is the interesting stuff about India that you just take for granted in Firefox Data. You come across these factoids for the first time and your mind is all blown and you hear the perhaps-apocryphal stories about Indian ISPs distributing Firefox Beta on CDs to their customers back in the Firefox 4 days… and then you move on. But every so often something new comes up and you’re reminded that no matter how much you think you’re prepared, there’s always something new you learn and go “Huh? What? Wait, what?!”

Especially when it’s India.

One of the facts I like to trot out to catch folks’ interest is how, when we first released the Canadian English localization of Firefox, India had more Canadians than Canada. Even today India is, after Canada and the US, the third largest user base of Canadian English Firefox:

[Pie chart of en-CA Firefox clients by country: Canada 75.5%, US 8.35%, India 5.41%]

 

Back in September 2018 Mozilla released the official Canadian English-localized Firefox. You can try it yourself by selecting it from the drop down menu in Firefox’s Preferences/Options in the “Language” section. You may have to click ‘Search for More Languages’ to be able to add it to the list first, but a few clicks later and you’ll be good to go, eh?

(( Or, if you don’t already have Firefox installed, you can select which language and dialect of Firefox you want from this download page. ))

Anyhoo, the Canadian English locale quickly gained a chunk of our install base:

[Chart: en-CA uptake in Firefox, September 2018: a sharp rise followed by a weekly seasonal pattern with weekends lower than weekdays]

…actually, it very quickly gained an overlarge chunk of our install base. Within a week we’d reached over three quarters of the entire Canadian user base?! Say we have one million Canadian users, that first peak in the chart was over 750k!

Now, we Canadian Mozillians suspected that there was some latent demand for the localized edition (they were just too polite to bring it up, y’know)… but not to this order of magnitude.

So back around that time a group of us including :flod, :mconnor, :catlee, :Aryx, :callek (and possibly others) fell down the rabbit hole trying to figure out where these Canadians were coming from. We ran down the obvious possibilities first: errors in data, errors in queries, errors in visualization… who knows, maybe I was counting some clients more than once a day? Maybe I was counting other Englishes (like South African and Great Britain) as well? Nothing panned out.

Then we guessed that maybe Canadians in Canada weren’t the only ones interested in the Canadian English localization. Originally I think we made a joke about how much Canadians love to travel, but then the query stopped running and showed us just how many Canadians there must be in India.

We were expecting a fair number of Canadians in the US. It is, after all, home to Firefox’s largest user base. But India? Why would India have so many Canadians? Or, if it’s not Canadians, why would Indians have such a preference for the English spoken in ten provinces and three territories? What is it about one of two official languages spoken from sea to sea to sea that could draw their attention?

Another thing that was puzzling was the raw speed of the uptake. If users were choosing the new localization themselves, we’d have seen a shallow curve with spikes as various news media made announcements or as we started promoting it ourselves. But this was far sharper an incline. This spoke to some automated process.

And the final curiosity (or clue, depending on your point of view) was discovered when we overlaid British English (en-GB) on top of the Canadian English (en-CA) uptake and noticed that (after accounting for some seasonality at the time due to the start of the school year) this suddenly-large number of Canadian English Firefoxes was drawn almost entirely from the number previously using British English:

[Chart: use of British and Canadian English in Firefox, September 2018: the rise in Canadian English is matched by a fall in British English]

It was with all this put together that day that lead us to our Best Guess. I’ll give you a little space to make your own guess. If you think yours is a better fit for the evidence, or simply want to help out with Firefox in Canadian English, drop by the Canadian English (en-CA) Localization matrix room and let us know! We’re a fairly quiet bunch who are always happy to have folks help us keep on top of the new strings added or changed in Mozilla projects or just chat about language stuff.

Okay, got your guess made? Here’s ours:

en-CA is alphabetically before en-GB.

Which is to say that the Canadian English Firefox, when put in a list with all the other Firefox builds (like this one which lists all the locales Firefox 88 comes in for Windows 64-bit), comes before the British English Firefox. We assume there is a population of Firefoxes, heavily represented in India (and somewhat in the US and elsewhere), that are installed automatically from a list like this one. This automatic installation is looking for the first English build in this list, and it doesn’t care which dialect. Starting September of 2018, instead of grabbing British English like it’s been doing for who knows how long, it had a new English higher in the list: Canadian English.
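The sorting claim behind this guess is easy to check: any tool that lexicographically sorts locale codes and grabs the first English entry switches from en-GB to en-CA the day en-CA appears in the list.

```python
# A few of the English locale codes Firefox ships (not the full list).
english_builds = ["en-GB", "en-US", "en-CA"]

# A naive installer that takes the first English build after sorting
# would pick en-CA, since "C" sorts before "G" and "U".
first_english = sorted(english_builds)[0]
print(first_english)  # en-CA
```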

But who can say! All I know is that any time India comes up in the data, it’s going to be an interesting day.

:chutten

https://chuttenblog.wordpress.com/2021/04/26/data-science-is-interesting-why-are-there-so-many-canadians-in-india/


Mozilla Security Blog: Upgrading Mozilla’s Root Store Policy to Version 2.7.1

Monday, April 26, 2021, 22:00

Individuals’ security and privacy on the internet are fundamental. Living up to that principle we are announcing the following changes to Mozilla’s Root Store Policy (MRSP) which will come into effect on May 1, 2021.

These updates to the Root Store Policy will not only improve our compliance monitoring, but also improve Certificate Authority (CA) practices and reduce the number of errors that CAs make when they issue new certificates. As a result, these updates contribute to a healthy security ecosystem on the internet and will enhance security and privacy for all internet users.

Living up to our mission and truly working in the open source community has led, after weeks of public exchange, to the following improvements to the MRSP. Please find a detailed comparison of the policy changes here – summing it up:

  • Beginning on October 1, 2021, CAs must verify domain names and IP addresses within 398 days prior to certificate issuance. (MRSP § 2.1)
  • Clarified that EV audits are required for root and intermediate certificates that are capable of issuing EV certificates, rather than being based on CA intentions.  (MRSP § 3.1.2)
  • Clearly specified that annual audit statements are required “cradle-to-grave” – from CA key pair generation until the root certificate is no longer trusted by Mozilla’s root store. (MRSP § 3.1.3)
  • Added a requirement that audit team qualifications be provided when audit statements are provided. (MRSP § 3.2)
  • Specified that Audit Reports must now include a list of incidents, and also indicate which CA locations were and were not audited (MRSP § 3.1.4 items 11 and 12).
  • Clarified when a certificate is deemed to directly or transitively chain to a CA certificate included in Mozilla’s program, which affects when the CA must provide audit statements for the certificate. (MRSP § 5.3)
  • Added a requirement that Section 4.9.12 of a CA’s CP/CPS MUST clearly specify the methods that may be used to demonstrate private key compromise. (MRSP § 6)

Many of these changes will result in updates and improvements in the processes of CAs and auditors and cause them to revise their practices. To ease transition, Mozilla has sent a CA Communication to alert CAs about these changes. We also sent CAs a survey asking them to indicate when they will be able to reach full compliance with this version of the MRSP.

In summary, updating the Root Store Policy improves the security ecosystem on the internet and the quality of every HTTPS connection, thus helping to keep your information private and secure.

The post Upgrading Mozilla’s Root Store Policy to Version 2.7.1 appeared first on Mozilla Security Blog.

https://blog.mozilla.org/security/2021/04/26/mrsp-v-2-7-1/


The Firefox Frontier: Mozilla Explains: What is IDFA and why is this iOS update important?

Monday, April 26, 2021, 21:08

During last week’s Apple event, the team announced a lot of new products and a new iPhone color, but the news that can have the biggest impact on all iPhone … Read more

The post Mozilla Explains: What is IDFA and why is this iOS update important? appeared first on The Firefox Frontier.

https://blog.mozilla.org/firefox/turn-off-idfa-for-apps-apple-ios-14-5/


Firefox Nightly: These Weeks in Firefox: Issue 92

Monday, April 26, 2021, 17:16

Niko Matsakis: Async Vision Doc Writing Sessions VII

Monday, April 26, 2021, 07:00

My week is very scheduled, so I am not able to host any public drafting sessions this week – however, Ryan Levick will be hosting two sessions!

When               Who
Wed at 07:00 ET    Ryan
Fri at 07:00 ET    Ryan

If you’re available and those stories sound like something that interests you, please join him! Just ping me or Ryan on Discord or Zulip and we’ll send you the Zoom link. If you’ve already joined a previous session, the link is the same as before.

Extending the schedule by two weeks

We had previously set 2021-04-30 as the end date, but I proposed in a recent PR to extend it to 2021-05-14. We’ve been learning how this whole vision doc thing works as we go, and it seems clear we’re going to want more time to finish off status quo stories and write the shiny future before we feel we’ve really explored the design space.

The vision…what?

Never heard of the async vision doc? It’s a new thing we’re trying as part of the Async Foundations Working Group:

We are launching a collaborative effort to build a shared vision document for Async Rust. Our goal is to engage the entire community in a collective act of the imagination: how can we make the end-to-end experience of using Async I/O not only a pragmatic choice, but a joyful one?

Read the full blog post for more.

http://smallcultfollowing.com/babysteps/blog/2021/04/26/async-vision-doc-writing-sessions-vii/


