Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Source information: http://planet.mozilla.org/.
This diary is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated whenever that source is updated. It may not match the content of the original page. The feed mirror was created automatically at the request of readers of this RSS feed.


Benjamin Kerensa: Remembering Ian Murdock

Thursday, 31 December 2015, 09:25
Ian Murdock (photo by Yuichi Sakuraba / CC BY)

There is clearly great sadness felt in the open source community today after learning of the passing of Ian Murdock, who founded the Debian Linux distribution and was the first Debian Project Leader. For those not familiar with the history of the project’s name, Ian is the “ian” in Debian, and the “Deb” comes from Debra Lynn, his then-girlfriend.

I was fortunate to meet Ian Murdock some years ago at an early Linux Conference (LinuxWorld) and it was very inspiring to hear him talk about open source and open culture. I feel still today that he was one of the many people who helped shape my own direction and contributions in open source. Ian was very passionate about open source and helped create the bricks (philosophy, vision, governance, practice) that power many open source projects today.

If it were not for Ian, we would not have many of the great Debian forks we have today including the very popular Ubuntu. There is no doubt that the work he did and his contributions to the early days of open source have had an impact across many projects and losing Ian at such a young age is a tragedy.

That said, I think the circumstances around Ian’s death are quite concerning, given the tweets he made. I do hope that if Ian suffered excessive force at the hands of the San Francisco Police Department, justice will eventually be served.

I hope that we can all reflect on the values that Ian championed and the important work that he did and celebrate his contributions, which have had a very large and positive impact on computing.

Thank you Ian!

http://feedproxy.google.com/~r/BenjaminKerensaDotComMozilla/~3/I1zz2Sq1VU8/remembering-ian-murdock


Julien Pagès: Convert Firefox into Emacs

Wednesday, 30 December 2015, 18:49

Firefox is a great browser. One of the reasons I really love it is because it is highly configurable: as an Emacs user, I wanted to use emacs key bindings inside Firefox – well, it’s easy to do that. And much more!

Most of the magic for me comes from the awesome keysnail addon. It basically converts Firefox into Emacs, is also highly configurable and has plugins.

For example, I now use C-x and C-x to switch tabs; C-x b to choose a specific tab (using the Tanything plugin) or C-x k to kill a tab. Tabs are now like Emacs buffers! Keysnail supports the mark, incremental search (C-s and C-r), specific functions, … Even M-x is implemented, to search for and run specific commands!
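
For a flavour of how such bindings are defined, here is a minimal sketch of a keysnail configuration snippet (assuming keysnail's key.setGlobalKey API and Firefox's chrome-level gBrowser object; this is not taken from the author's actual config):

// Minimal sketch, not the author's configuration: bind C-x k to close the
// current tab and C-x o to cycle tabs, Emacs-buffer style. Assumes keysnail's
// key.setGlobalKey() and Firefox's gBrowser are available.
key.setGlobalKey(["C-x", "k"], function (ev, arg) {
    gBrowser.removeCurrentTab();
}, "Close (kill) the current tab");

key.setGlobalKey(["C-x", "o"], function (ev, arg) {
    gBrowser.tabContainer.advanceSelectedTab(1, true);
}, "Switch to the next tab");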

Also I use the Find As You Type Firefox feature, for links. It’s awesome: I just hit ‘, then start typing some letters in a link title that I want to follow – I can then use C-s or C-r to find next/previous matching links if needed, then I just press Return to follow the link.

I can browse the web more efficiently, I use the mouse less, and I can reuse the same key bindings in Emacs and Firefox! I keep my configuration files on github, feel free to look at them if you’re interested!

Happy browsing!


https://parkouss.wordpress.com/2015/12/30/convert-firefox-into-emacs/


Cameron Kaiser: TenFourFoxBox 1.0

Wednesday, 30 December 2015, 08:09
For those of you happily using the TenFourFoxBox beta, it's now time for 1.0 as promised.

TenFourFoxBox 1.0 is mostly a feature and bugfix update. You can update your foxboxes in place without losing information; just save the new foxbox over the old one, keeping the app name the same. Aside from various cosmetic tweaks, this release also fixes a problem with sites that try to enumerate navigator.* properties (Chase Bank being the most notorious), since doing so would barf when it got to navigator.mozApps as we didn't have that component loaded. Interestingly, SeaMonkey had this same problem in bug 1094714, which they solved by effectively importing the entire JavaScript webapps framework. That wasn't acceptable here where the whole point is to strip the browser down, so I got around this problem by creating a dummy stub XPCOM component to replace it and splicing that in. Much more lightweight!

In addition, a number of you apparently rename your TenFourFox builds to just TenFourFox.app, so this version will try that filename if the proper machine architecture is not found. If you really can't control for that either, make a copy of the browser as TenFourFoxBoxRunner.app and it will take precedence over any other executable (but remember to keep that copy up to date!).

Finally, there were also several requests for allowing URLs to be sent back or at least copied for the main browser's benefit. I'm weighing my options on how to push the URL through (Launch Services or otherwise), but you can at least now go to the Box menu and copy the current location URL to the clipboard, or right-click any link to do the same, which you can then paste into your browser of choice. Localization is still not yet supported because we're not yet to a string-freeze point, but we're getting close.

Remember to stop all foxboxes before you upgrade them, and make sure TenFourFox is still running before you start them back up if you want to have the main browser running parallel. Developers, look at the Makefile in the Things To Try: For Developers folder for how to automate foxbox creation and upgrades from the command line.

As of this release, TenFourFoxBox has an official home page and download site and you can grab 1.0 from there. Assuming no major issues by the end of this week, we'll start publicly rolling it out, and then I'll start the arduous trek towards TenFourFox 45. Meanwhile, for 38.6, there will be some updates to the font blacklist but so far no major bugs yet.

http://tenfourfox.blogspot.com/2015/12/tenfourfoxbox-10.html


Jan de Mooij: W^X JIT-code enabled in Firefox

Wednesday, 30 December 2015, 00:30

Back in June, I added an option to SpiderMonkey to enable W^X protection of JIT code. Over the past weeks I've been working on fixing the remaining performance issues, and yesterday I enabled W^X on the Nightly channel, on all platforms. What this means is that each page holding JIT code is either executable or writable, never both at the same time.

Why?

Almost all JITs (including the ones in Firefox until now) allocate memory pages for code with RWX (read-write-execute) permissions. JITs typically need to patch code (for inline caches, for instance) and with writable memory they can do that with no performance overhead. RWX memory introduces some problems though:

  • Security: RWX pages make it easier to exploit certain bugs. As a result, all modern operating systems store code in executable but non-writable memory, and data is usually not executable, see W^X and DEP. RWX JIT-code is an exception to this rule and that makes it an interesting target.
  • Memory corruption: I've seen some memory dumps for crashes in JIT-code that might have been caused by memory corruption elsewhere. All memory corruption bugs are serious, but if it happens for whatever reason, it's much better to crash immediately.

How It Works

With W^X enabled, all JIT-code pages are non-writable by default. When we need to patch JIT-code for some reason, we use a RAII-class, AutoWritableJitCode, to make the page(s) we're interested in writable (RW), using VirtualProtect on Windows and mprotect on other platforms. The destructor then toggles this back from RW to RX when we're done with it.

(As an aside, an alternative to W^X is a dual-mapping scheme: pages are mapped twice, once as RW and once as RX. In 2010, some people wrote patches to implement this for TraceMonkey, but this work never landed. This approach avoids the mprotect overhead, but for this to be safe, the RW mapping should be in a separate process. It's also more complicated and introduces IPC overhead.)

Performance

Last week I fixed implicit interrupt checks to work with W^X, got rid of some unnecessary mprotect calls, and optimized code poisoning to be faster with W^X.

After that, the performance overhead was pretty small on all benchmarks and websites I tested: Kraken and Octane are less than 1% slower with W^X enabled. On (ancient) SunSpider the overhead is bigger, because most tests finish in a few milliseconds, so any compile-time overhead is measurable. Still, it's less than 3% on Windows and Linux. On OS X it's less than 4% because mprotect is slower there.

I think W^X works well in SpiderMonkey for a number of reasons:

  • We run bytecode in the interpreter before Baseline-compiling it. On the web, most functions have less than ~10 calls or loop iterations, so we never JIT those and we don't have any memory protection overhead.
  • The Baseline JIT uses IC stubs for most operations, but we use indirect calls here, so we don't have to make code writable when attaching stubs. Baseline stubs also share code, so only the first time we attach a particular stub we compile code for it. Ion IC stubs do require us to make memory writable, but Ion doesn't use ICs as much as Baseline.
  • For asm.js (and soon WebAssembly!), we do AOT-compilation of the whole module. After compilation, we need only one mprotect call to switch everything from RW to RX. Furthermore, this code is only modified on some slow paths, so there's basically no performance overhead for asm.js/WebAssembly code.

Conclusion

I've enabled W^X protection for all JIT-code in Firefox Nightly. Assuming we don't run into bugs or serious performance issues, this will ship in Firefox 46.

Last but not least, thanks to the OpenBSD and HardenedBSD teams for being brave enough to flip the W^X switch before we did!

http://jandemooij.nl/blog/2015/12/29/wx-jit-code-enabled-in-firefox/


Daniel Pocock: Real-Time Communication in FOSDEM 2016 main track

Tuesday, 29 December 2015, 22:12

FOSDEM is nearly here and Real-Time Communications is back with a bang. Whether you are keen on finding the perfect privacy solution, innovative new features or just improving the efficiency of existing telephony, you will find plenty of opportunities at FOSDEM.

Main track

Saturday, 30 January, 17:00 Dave Neary presents How to run a telco on free software. This session is of interest to anybody building or running a telco-like service or any system administrator keen to look at a practical application of cloud computing with OpenStack.

Sunday, 31 January, 10:00 is my own presentation on Free Communications with Free Software. This session looks at the state of free communications, especially open standards like SIP, XMPP and WebRTC and practical solutions like DruCall (for Drupal), Lumicall (for Android) and much more.

Sunday, 31 January, 11:00 Guillaume Roguez and Adrien Béraud from Savoir-faire Linux present Building a peer-to-peer network for Real-Time Communication. They explain how their Ring solution, based on OpenDHT, can provide a true peer-to-peer solution.

and much, much more....

  • XMPP Summit 19 is on January 28 and 29, the Thursday and Friday before FOSDEM as part of the FOSDEM Fringe.
  • The FOSDEM Beer Night on Friday, 29 January provides a unique opportunity for Real-Time Communication without software
  • The Real-Time Lounge will operate in the K building over both days of FOSDEM, come and meet the developers of your favourite RTC projects
  • The Real-Time dev-room is the successor of the previous XMPP and Telephony dev-rooms. The Real-Time dev-room is in K.3.401 and the schedule will be announced shortly.

Volunteers and sponsors still needed

Please come and join the FreeRTC mailing list to find out more about ways to participate, the Saturday night dinner and other opportunities.

The FOSDEM team is still fundraising. If your company derives benefit from free software and events like FOSDEM, please see the sponsorship pages.

http://danielpocock.com/fosdem-2016-rtc-main-track


Jeff Muizelaar: WebGL2 enabled in Firefox Nightly

Tuesday, 29 December 2015, 20:35
A couple of weeks ago we enabled WebGL2 in Nightly. The implementation is still missing some functionality like PBOs, MSRBs and sampler objects, but it seems to work well enough with the WebGL2 content that we've tried.

WebGL2 is based on OpenGL ES 3 and adds occlusion queries, transform feedback, a large amount of texturing functionality, and a bunch of new capabilities to the shading language, including integer operations.

You can test out the implementation here: http://toji.github.io/webgl2-particles/. If it says WebGL2, it's working with WebGL2. We look forward to seeing the graphical enhancements enabled by WebGL2 and encourage developers to start trying it out.
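
If you just want to check whether your own build exposes WebGL2, a quick feature test (an illustrative sketch, not from the post) looks like this:

// Quick WebGL2 availability check (illustrative sketch).
var canvas = document.createElement("canvas");
var gl = canvas.getContext("webgl2");
if (gl) {
  console.log("WebGL2 context created:", gl.getParameter(gl.VERSION));
} else {
  console.log("No WebGL2; WebGL1 available:", !!canvas.getContext("webgl"));
}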

http://muizelaar.blogspot.com/2015/12/webgl2-enabled-in-firefox-nightly.html


Mozilla Addons Blog: Add-on Compatibility for Firefox 44

Tuesday, 29 December 2015, 19:30

Firefox 44 will be released on January 26th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 44 for Developers, so you should also give it a look.

General

JavaScript

Theme

Signing

  • Firefox 43 is currently enforcing add-on signing, with a preference to override it. Firefox 44 will remove the preference entirely, which means your add-on will need to be signed in order to run in release versions of Firefox. You can read about your options here.

Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 44, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 43.

https://blog.mozilla.org/addons/2015/12/29/compatibility-for-firefox-44/


Ian Bicking: A Product Journal: CSS Object Model

Tuesday, 29 December 2015, 09:00

I’m blogging about the development of a new product in Mozilla; look here for my other posts in this series.

And now for something entirely technical!

We’ve had a contributor from Outernet exploring ways of using PageShot for capturing pages for distribution on their network. Outernet is a satellite-based content distribution network. It’s a neat idea, but one challenge is that it’s very one-way – anyone (given the equipment) can listen in to what the satellites broadcast, but that’s it (at least for the most interesting use cases). Lots of modern websites aren’t set up well for that, so acquiring content can be tricky.

Given that interest I started thinking more about inlining resources. We’ve been hotlinking to resources simply out of laziness. Some things are easy to handle, but CSS is a bit more annoying because of the indirection of @import and yet more relative URLs. Until I started poking around I had no idea that there is a CSS Object Model!

Given this there is now experimental support for inlining all CSS rules into the captured page in PageShot. The support is still incomplete, and my understanding of everything you can do with CSS is still incomplete. But the code isn’t very hard. One fun thing is that we can test each CSS rule against the page and see if it is needed. Doing this typically allows 80% of rules to be omitted.

Some highlights of what I’ve learned so far:

There are two interesting objects: CSSStyleSheet (which inherits from StyleSheet) and CSSRule.

document.styleSheets: a list of all stylesheets, including remote (<link rel="stylesheet">), inline (<style>), and imported (@import) stylesheets.

styleSheet.href: the URL of the stylesheet (null if it was inline).

styleSheet.cssRules: a list of all the rules in the stylesheet.

cssRule.type: there are several types of rules. I’ve chosen to ignore everything but STYLE_RULE and MEDIA_RULE out of laziness.

cssRule.cssRules: media rules (like @media (max-width: 600px) {.nav {display: none}}) contain sub-rules (.nav {display: none} in this case).

cssRule.parentRule: points back to a media rule if there is one.

cssRule.parentStyleSheet: points back to the parent stylesheet. There are probably ways of nesting media rules and stylesheets (that can have media attributes) in ways to create compound media rules that I haven’t accounted for.

cssRule.cssText: the text of the rule. This includes both selectors and style, or media queries and all the sub-rules. I just split on { to separate the selector or query. I assume these are representations of the parsed CSS, and so normalized, but I haven’t explored that in detail.

There’s all sorts of ways to create trees of media restrictions and other complexities that I know I haven’t taken account of, but things Mostly Work Anyway.

Here’s an example that makes use of this to create a single inline stylesheet for a page containing only necessary rules (using ES6):

let allRules = [];

// CSS rules, some of which may be media queries, form a kind of tree;
// this flattens all the style rules into a single list
function addRules(sheet) {
  for (let rule of sheet.cssRules) {
    if (rule.type == rule.MEDIA_RULE) {
      addRules(rule);
    } else if (rule.type == rule.STYLE_RULE) {
      allRules.push(rule);
    }
  }
}

// Then we traverse all the stylesheets and grab rules from each:
for (let styleSheet of document.styleSheets) {
  // MediaList isn't a real array, so copy it before using indexOf
  let mediaList = Array.from(styleSheet.media);
  if (mediaList.length && mediaList.indexOf("*") == -1 && mediaList.indexOf("screen") == -1) {
    // This is a stylesheet for some media besides screen
    continue;
  }
  addRules(styleSheet);
}

// Then we collect the rules up again, clustered by media queries (with
// rulesByMedia[""] for no media query)
let rulesByMedia = {};

for (let rule of allRules) {
  let selector = rule.cssText.split("{")[0].trim();
  if (! document.querySelector(selector)) {
    // Skip selectors that don't match anything
    continue;
  }
  let mediaType = "";
  if (rule.parentRule && rule.parentRule.type == rule.MEDIA_RULE) {
    mediaType = rule.parentRule.cssText.split("{")[0].trim();
  }
  rulesByMedia[mediaType] = rulesByMedia[mediaType] || [];
  rulesByMedia[mediaType].push(rule);
}

// Now we can create a new clean stylesheet:
let lines = [];
for (let mediaType in rulesByMedia) {
  if (mediaType) {
    lines.push(mediaType + " {");
  }
  for (let rule of rulesByMedia[mediaType]) {
    let padding = mediaType ? "  " : "";
    lines.push(padding + rule.cssText);
  }
  if (mediaType) {
    lines.push("}");
  }
}

// Finally, join everything into a single stylesheet body:
let style = lines.join("\n");
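
To actually use the result, a hypothetical final step (not shown in the post) would be to drop the collected rules into a single inline <style> element on the captured page:

// Hypothetical final step (not from the original post): inline the collected
// rules into the captured document as one <style> element.
let styleElement = document.createElement("style");
styleElement.textContent = style;
document.head.appendChild(styleElement);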

Obviously there could be rules that apply to DOM elements that aren’t present right now but could be created. And I’m sure it’s omitting fonts and animations. But it’s fun to hack around with.

It might be fun to use this hooked up to mutation observers during your testing and find orphaned rules.
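
A rough sketch of that idea, building on the allRules list from the example above (illustrative only, not part of PageShot):

// Re-check which rules ever match while you exercise the page; rules that
// never match by the end of the session are candidates for orphaned rules.
let everMatched = new Set();

function recheckRules() {
  for (let rule of allRules) {
    let selector = rule.cssText.split("{")[0].trim();
    try {
      if (document.querySelector(selector)) {
        everMatched.add(rule.cssText);
      }
    } catch (e) {
      // Some selectors (e.g. vendor-prefixed pseudo-elements) can throw here.
    }
  }
}

recheckRules();
new MutationObserver(recheckRules).observe(document.documentElement, {
  childList: true,
  subtree: true,
  attributes: true
});
// Later: allRules.filter(rule => !everMatched.has(rule.cssText)) gives the
// rules that never matched anything.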

http://www.ianbicking.org/blog/2015/12/product-journal-css-object-model.html


Ian Bicking: A Product Journal: Security

Tuesday, 29 December 2015, 09:00

I’m blogging about the development of a new product in Mozilla; look here for my other posts in this series.

PageShot, the product I’m working on, makes snapshots of the DOM (the live, dynamic web page) as it is rendered in your browser. There are a lot of security issues here. That DOM is intended to be short-lived and to only be shown to the one user, and it might have links that are implicitly authenticated. For instance you can imagine a link like https://someothersite.com/delete?id=49&auth=30f83020a839e where the auth key is what gives the user permission to delete that resource; by sharing that link (which is embedded somewhere in the page) I am sharing the ability to delete something. But neither the application developer nor I as the sharer probably realize that. Generally PageShot breaks developers’ expectations, potentially creating a category of security bugs they’d never thought about.

PageShot has a lot of security implications because it tries to subvert URL sharing, where servers mediate all attempts to share outside of screenshots.

Admitting this makes me feel pretty cagey and defensive. I know there are risks, I know it’s hard to get users to understand the impact of their actions, but I want to do this thing anyway because I have a hunch these risks are worth it.

There’s another way to look at it: these are risks, but also challenges. There are many smart people at Mozilla, and of course any smart person could offer improvements. I believe in the potential for unexpected solutions to arise to challenging problems. Solutions that mitigate the security problems while preserving the value of the DOM over pixels. Solutions that help users understand the impact of what they are doing. Some category of solution I haven’t thought of. I suspect being in security can be a bummer because you often end up in the organizational role of saying no, instead of the more fun role of figuring out how to say yes.

The other thing I have to remember: all of these things are work. If PageShot is a product people find value in, then it’s worth doing that work. But we don’t know yet. So I have to figure out a way to sit on my hands, to hopefully project that this is a prototype exploring whether the idea is valuable, not a prototype to explore the implementation. And if it is valuable then the project will need help around security; and if it’s not valuable then we’ll just tear it all down without wasting too much of other people’s time.

http://www.ianbicking.org/blog/2015/12/product-journal-security.html


James Long: Starters and Maintainers

Tuesday, 29 December 2015, 03:00

Journal Entry, 12/18/2015 -

“It’s late Friday night, my wife is already asleep, and I finally found time to go through those pull requests on that old project I put up on github last year. My daughter is getting up at 7:30 though, so I better not stay up too late.

9 new issues since I last checked. 2 new pull requests. Hopefully most of the issues can be closed, and the pull requests are trivial. Ugh, nope, these are some significant changes. I’m going to have to think about this and engage (politely) in a long discussion. They also didn’t update the docs, and this is a breaking change, so we’ll have to figure out how to tell everyone to upgrade.

I should set up a better way to be notified of new issues and pull requests on my projects. But who am I kidding, that would just stress me out more. It’s not like I always have time to respond. At least now I can pretend like this doesn’t exist when other things are stressing me out.

Why do I do this to myself? I’ve helped a lot of people with this code, sure, but the maintenance burden that comes with it is crippling. Even with just one tiny project. If it becomes popular (and my personal clearly-hacky, clearly-DON’T-USE blog engine is over 1,000 stars, WTF), there’s just too much to do. It becomes a 10-hour-a-week job.

I hugely admire people who give so much time to OSS projects for free. I can’t believe how much unpaid boring work is going on. It’s really cool that people care so much about helping others and the community.

I care, too, but with a wife and daughter (and second daughter on the way) and an already-intense job it’s impossible to keep up. My job at Mozilla is basically 100% OSS work anyways.

The only reason I can think of why I keep doing this is because I like starting things. Everybody likes starting things. But I like to really make an impact. To contribute. It’s easy to put something up on github and ignore it if nobody uses it. But that’s not making any kind of impact. I want people to look at my stuff, learn from it, maybe even use it. I’ve helped people learn react, react-router, transducers, hygienic macros, animations, and more by writing production-ready libraries.

That comes with baggage though. Once people use your code in production, are you going to maintain it? It’s your right to say no. But most of the time, you want to see your idea grow and evolve. The problem is that it takes a huge amount of work to keep any momentum.

There are two roles for any project: starters and maintainers. People may play both roles in their lives, but for some reason I’ve found that for a single project it’s usually different people. Starters are good at taking a big step in a different direction, and maintainers are good at being dedicated to keeping the code alive.

Another big difference is that usually there is a single starter of a project, and there always ends up being multiple maintainers. This is because supporting a project alone is simply not scalable. It will grow in popularity, and there’s a linear correlation to the number of issues, pull requests, and other various requests. For every Nth level of popularity a new maintainer must be added, ideally an existing heavy user.

Because it’s not scalable to support a project alone, it’s easy for a starter to get caught in a cycle of despair: he/she has all these cool ideas, but as each gets released there is a growing amount of noise to distract from future ideas. It’s crucial to either forget about existing projects or find maintainers for them, but the latter is usually not a quick task.

I am definitely a starter. I tend to be interested in a lot of various things, instead of dedicating myself to a few concentrated areas. I’ve maintained libraries for years, but it’s always a huge source of guilt and late Friday nights to catch up on a backlog of issues.

From now on, I’m going to be clear that code I put on github is experimental and I’m not going to respond to issues or pull requests. If I do release a production-ready library, I’ll already have someone in mind to maintain it. I don’t want to have a second job anymore. :)

Here’s to all the maintainers out there. To all the people putting in tireless, thankless work behind-the-scenes to keep code alive, to write documentation, to cut releases, to register domain names, and everything else.

http://tracking.feedpress.it/link/9494/2248569


The Servo Blog: This Week In Servo 45

Monday, 28 December 2015, 23:30

In the last week, we landed 25 PRs in the Servo organization’s repositories. Given the Christmas holiday, it’s not too surprising that the number is a bit lower than usual!

This week brings us some of the first progress on Stylo, which is an effort to make it possible to use Servo’s style system within Firefox/Gecko.

Notable Additions

  • ecoal added support for running Servo’s WebGL on Linux over EGL instead of GLX
  • bholley landed the first set of Stylo support in Servo

New Contributors

Screenshots

None this week.

Meetings

All were cancelled, given the holiday breaks.

http://blog.servo.org/2015/12/28/twis-45/


This Week In Rust: This Week in Rust 111

Monday, 28 December 2015, 08:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42, brson, and llogiq.

Updates from Rust Community

News & Blog Posts

http://this-week-in-rust.org/blog/2015/12/28/this-week-in-rust-111/


Karl Dubost: [worklog] Short week - addons and HTTP

Friday, 25 December 2015, 17:56

Taking Wednesday off. This is a day off in Japan; it seems it's the Emperor's birthday. It is an interesting holiday that shifts with generations, whenever a new emperor comes to "power". Another funny fact:

During the reign of Emperor Hirohito (Showa period, 1926–1989), the Emperor's birthday was observed on 29 April. That date remained a public holiday, posthumously renamed Greenery Day in 1989 and Showa Day in 2007.

It makes you wish that emperors would change more often and accumulate days off in the calendar.

Then there is the Christmas celebration. Short week. At least large parts of the world are also in slow motion, so not that much mail.

Addons outreach effort

The add-ons team (add-on or addon?) has established a list of the add-ons broken in Firefox with e10s activated. The Webcompat team will help them reach out to the developers and the companies. Last week, I created a private mailing-list to manage this work.

  • On my first pass on Monday, I checked the whole list of add-ons broken in Firefox, and only two really had to be contacted: Ghostery and AOL Toolbar. We received positive responses during the day. The others seem to be already in motion. One has definitely disappeared from the "Internetz Tubes".

WebCompat.com dev

Webcompat.com bugs

  • We have a backlog of bugs which have not been dealt with for a long time. Serious triage needed there. As usual, you can help. We will not be able to solve everything ourselves. So participation is welcome.
  • No webcompat meeting this week. That will save me the 11pm-midnight late call.

Developer Tools

Misc.

  • Interesting read about open source and design. On the webcompat.com project, we had basically two people working on the design: Alexa Roman, who did an amazing job defining the interactions and the design, and Guillaume Démésy, who implemented the design and also suggested design items. Without these two contributors, the project would have been in a less than good shape.

http://www.otsukare.info/2015/12/25/worklog-addons


Christian Heilmann: Detecting AdBlock without an extra HTTP overhead

Friday, 25 December 2015, 13:27

The other day Cats who code had a blog post about detecting AdBlock, where the main trick is to try to load a JavaScript document with the name adframe.js:

<script type="text/javascript">
var adblock = true;
</script>
<script type="text/javascript" src="adframe.js"></script>
<script type="text/javascript">
if(adblock) {
  //adblock is installed
}
</script>

The adframe.js document only contains one line unsetting the adblock Boolean:

var adblock = false;

This seems to work pretty well (and you can see it used on many web sites), but it also seems a bit wasteful to have to load and execute another script. Of course there are many adblocker detection scripts available (some with colourful names), but I wondered what the easiest way to do this is. The trick described in the above-mentioned article, using a bsarocks class on an element, stopped working.

Detecting AdBlock without another request

What does work, though, is the following, and you can see it in action and get the code on GitHub:

var test = document.createElement('div');
test.innerHTML = '&nbsp;';
test.className = 'adsbox';
document.body.appendChild(test);
window.setTimeout(function() {
  if (test.offsetHeight === 0) {
    document.body.classList.add('adblock');
  }
  test.remove();
}, 100);

The trick is the following:

  • You create an element with the class adsbox (as defined as a removable element in the definition file of AdBlock Plus)
  • You add it to the document and after a short while you read its offsetHeight
  • If AdBlock is installed, the element won’t have any height.
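
As a usage sketch (not part of the original snippet), anything else on the page can then react to the adblock class, for example:

// Illustrative follow-up: react to the 'adblock' class the snippet above
// sets on <body>. The timeout just waits until the detection has run.
window.setTimeout(function() {
  if (document.body.classList.contains('adblock')) {
    var note = document.createElement('p');
    note.textContent = 'It looks like you are blocking ads.';
    document.body.appendChild(note);
  }
}, 200);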


Ethics?

Play nice with this. I’m not going into the ethics of ad blocking or detecting ad blockers. Fact is, we started an arms race that nobody can win. The more we block, the more aggressive ads will get. As users we should vote with our voice and tell companies that we hate their aggressive advertising. We should also start to understand that nothing is free. As publishers, we should start thinking that showing ads as our only revenue stream is a doomed way of monetisation unless we play dirty.

https://www.christianheilmann.com/2015/12/25/detecting-adblock-without-an-extra-http-overhead/


Seif Lotfy: Skizze - A probabilistic data-structures service and storage (Alpha)

Friday, 25 December 2015, 12:53
Skizze - A probabilistic data-structures service and storage (Alpha)

At my day job we deal with a lot of incoming data for our product, which requires us to be able to calculate histograms and other statistics on the data-stream as fast as possible.

One of the best tools for this is Redis, which will give you 100% accuracy in O(1) (except for its HyperLogLog implementation, which is a probabilistic data-structure). All in all Redis does a great job.
The problem with Redis for me personally is that, when using it for hundreds of millions of counters, I could end up with gigabytes of memory.

I also tend to use Top-K, which is not implemented in Redis but can be built on top of the ZSet data-structure via Lua scripting. The Top-K data-structure is used to keep track of the top "k" heavy hitters in a stream without having to keep track of all "n" flows (k < n), with O(1) complexity.
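
To make the idea concrete, here is a minimal sketch (in JavaScript, purely illustrative and unrelated to Skizze's actual implementation) of the "Space-Saving" approach behind many Top-K implementations: memory stays bounded by k counters no matter how many distinct items the stream contains.

// Illustrative Space-Saving sketch: keep at most k counters; when a new item
// arrives and the table is full, evict the smallest counter and let the new
// item inherit its count + 1, which keeps the estimate an upper bound.
function makeTopK(k) {
  let counters = new Map(); // item -> estimated count
  return {
    add(item) {
      if (counters.has(item)) {
        counters.set(item, counters.get(item) + 1);
      } else if (counters.size < k) {
        counters.set(item, 1);
      } else {
        let minKey = null, minVal = Infinity;
        for (let [key, val] of counters) {
          if (val < minVal) { minVal = val; minKey = key; }
        }
        counters.delete(minKey);
        counters.set(item, minVal + 1);
      }
    },
    top() {
      return [...counters.entries()].sort((a, b) => b[1] - a[1]);
    }
  };
}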

Anyhow, when dealing with a massive amount of data, the interest is most of the time in the heavy hitters, which can be estimated while using less memory, with O(1) complexity for reading and writing (that is, if you don't care about a count being 124352435 or 124352011, because in the UI of an app you will be showing "over 124 Million").

There are a lot of algorithms floating around and used to solve counting, frequency, membership and top-k problems, which in practice are implemented and used as part of a data-stream pipeline where stuff is counted, merged then stored.

I couldn't find a one-stop-shop service to fire & forget my data at.

Basically, the need for a solution where I can set up sketches to answer cardinality, frequency, membership and ranking queries about my data-stream (without having to reimplement the algorithms in a pipeline embedded in Storm, Spark, etc.) led to the development of Skizze (which is in alpha state).

What is Skizze?

Skizze (['sk

http://geekyogre.com/skizze-a-probabilistic-data-structures-service-and-storage/


Richard Newman: Syncing and storage on three platforms

Thursday, 24 December 2015, 21:31

As it’s Christmas, I thought I’d take a moment to write down my reflections on Firefox Sync’s iterations over the years. This post focuses on how they actually sync — not the UI, not the login and crypto parts, but how they decide that something has changed and what they do about it.

I’ve been working on Sync for more than five years now, on each of its three main client codebases: first desktop (JavaScript), then Android (built from scratch in Java), and now on iOS (in Swift).

Desktop’s overall syncing strategy is unchanged from its early life as Weave.

Partly as a result of Conway’s Law writ large — Sync shipped as an add-on, built by the Services team rather than the Firefox team, with essentially no changes to Firefox itself — and partly for good reasons, Sync was separate from Firefox’s storage components.

It uses Firefox’s observer notifications to observe changes, making a note of changed records in what it calls a Tracker.
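
As a rough sketch of that pattern (assumptions only; this is not the actual services/sync code, and the "bookmark-changed" topic is hypothetical):

// Sketch of the observer-plus-Tracker pattern: storage fires a notification,
// and Sync only records the id of the changed record until the next sync.
// Assumes Firefox chrome privileges with Services.jsm imported.
let changedIDs = new Set();

let tracker = {
  observe: function (subject, topic, data) {
    // 'data' is assumed to carry the GUID of the changed record.
    changedIDs.add(data);
  }
};

Services.obs.addObserver(tracker, "bookmark-changed", false);

// At sync time the current record is re-read from storage for each id in
// changedIDs; if it is missing, Sync assumes the record was deleted.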

This is convenient, but it has obvious downsides:

  • From an organizational perspective, it’s easy for developers to disregard changes that affect Sync, because the code that tracks changes is isolated. For example, desktop Sync still doesn’t behave correctly in the presence of fancy Firefox features like Clear Recent History, Clear Private Data, restoring bookmark backups, etc.
  • Sync doesn’t get observer notifications for all events. Most notably, bulk changes sometimes roll up or omit events, and it’s always possible for code to poke at databases directly, leaving Sync out of the loop. If a Places database is corrupt, or a user replaces it manually, Sync’s tracking will be wrong. This is almost inevitable when sync metadata doesn’t live with the data it tracks.
  • Sync doesn’t track actual changes; it tracks changed IDs. When a sync occurs, it goes to storage to get a current representation of the changed record. (If the record is missing, we assume it was deleted.) This makes it very difficult to do good conflict resolution.
  • In order to avoid cycles, Sync stops listening for events while it’s syncing. That means it misses any changes the user makes during a sync.
  • Similarly, it doesn’t see changes that happen before it registers its observers, e.g., during the first few seconds of using the browser.

Beyond the difficulties introduced by a reliance on observers, desktop Sync took some shortcuts [1]: it applies incoming records directly and non-transactionally to storage, so an interrupted sync leaves local storage in a partial state. That’s usually OK for unstructured data like history — it’ll try again on the next sync, and eventually catch up — but it’s a bad thing for something structured like bookmarks, and can still be surprising elsewhere (e.g., passwords that aren’t consistent across your various intranet pages, form fields that are mismatched so you get your current street address and your previous city and postal code).

During the last days of the Services team, Philipp, Greg, myself, and others were rethinking how we performed syncs. We settled on a repository-centric approach: records were piped between repositories (remote or local), abstracting away the details of how a repository figured out what had changed, and giving us the leeway to move to a better internal structure.

That design never shipped on desktop, but it was the basis for our Sync implementation on Android.

Android presented some unique constraints. Again, Conway’s Law applied, albeit to a lesser extent, but also the structure of the running code had to abide by Android’s ContentProvider/SyncAdapter/Activity patterns.

Furthermore, Fennec was originally planning to support Android’s own internal bookmark and history storage, so its internal databases mirrored that schema. You can still see the fossilized remnants of that decision in the codebase today. When that plan was nixed, the schema was already starting to harden. The compromise we settled on was to use modification timestamps and deletion flags in Fennec’s content providers, and use those to extract changes for Sync in a repository model.

Using timestamps as the basis for tracking changes is a common error when developers hack together a synchronization system. They’re convenient, but client clocks are wrong surprisingly often, jump around, and lack granularity. Clocks from different devices shouldn’t be compared, but we do it anyway when reconciling conflicts. Still, it’s what we had to work with at the time.

The end result is over-engineered, fundamentally flawed, still directly applies records to storage, but works well enough. We have seen dramatically fewer bugs in Android Sync than we saw in desktop Sync between 2010 and 2012. I attribute some of that simply to the code having been written for production rather than being a Labs project (the desktop bookmark sync code was particularly flawed, and Philipp and I spent a lot of time making it better), some of it to lessons learned, and some of it to better languages and tooling — Java and Eclipse produce code with fewer silly bugs [2] than JavaScript and Vim.

On iOS we had the opportunity to learn from the weaknesses in the previous two implementations.

The same team built the frontend, storage, and Sync, so we put logic and state in the right places. We track Sync-related metadata directly in storage. We can tightly integrate with bulk-deletion operations like Clear Private Data, and change tracking doesn’t rely on timestamps: it’s an integral part of making the change itself.

We also record enough data to do proper three-way merges, which avoids a swath of quiet data loss bugs that have plagued Sync over the years (e.g., recent password changes being undone).

We incrementally apply chunks of records, downloaded in batches, so we rarely need to re-download anything in the case of mid-sync failures.

And we buffer downloaded records where appropriate, so the scary part of syncing — actually changing the database — can be done locally with offline data, even within a single transaction.

Storage on iOS is significantly more involved as a result: we have sync_status columns on each table, and typically have two tables per datatype to track the original shared parent of a row. Bookmark sync is shaping up to involve six tables. But the behavior of the system is dramatically more predictable; this is a case of modeling essential complexity, not over-complicating. So far the bug rate is low, and our visibility into the interactions between parts of the code is good — for example, it’s just not possible for Steph to implement bulk deletions of logins without having to go through the BrowserLogins protocol, which does all the right flipping of change flags.

In the future we’re hoping to see some of the work around batching, use of in-storage tracking flags, and three-way merge make it back to Android and eventually to desktop. Mobile first!

Notes:

  1. My feeling is that Weave was (at least from a practical standpoint) originally designed to sync two desktops with good network connections, using cheap servers that could die at any moment. That attitude doesn’t fit well with modern instant syncing between your phone, tablet, and laptop!

http://160.twinql.com/syncing-and-storage-on-three-platforms/


Gervase Markham: Hallelujah!

Thursday, 24 December 2015, 20:00

Wishing all friends and Mozillians a very merry Christmas! May there be peace and goodwill to all. Here’s a new, spine-tingling version of an old classic:

"A Hallelujah Christmas" by Cloverton from Ross Wooten on Vimeo.

http://feedproxy.google.com/~r/HackingForChrist/~3/jRyh4PluWiM/


Michael Kohler: German-speaking Community mid-term planning

Thursday, 24 December 2015, 19:44

Mozilla’s Participation Team started doing “mid-term plannings” with a few focus communities back in September. The goal was to identify potential and goals for a six-month plan which would then be implemented with the help of the whole community. Since Germany is one of the focus markets for Firefox, it’s clear that the German-speaking community was part of that as well.

Everything started out at the end of September, when we formed a focus group. Everybody was invited to join the focus group to brainstorm about possible plans we can set in stone to drive Mozilla’s mission forward. I’d like to thank everybody who chimed in with their ideas and thoughts on the German-speaking community and its future in our own Discourse category:

After the community meetup at the beginning of the year we had a lot of momentum, which enabled us to get quite a lot done. Unfortunately this momentum has decreased over time, reaching a low since September (my opinion). Therefore the main areas we picked for our mid-term plans focus on internal improvements we can make, so that we can concentrate on Mozilla's top organizational goals once we have implemented those improvements. This doesn’t mean that the German-speaking community won’t focus on product or mission, it’s just not where we can commit as a whole community right now.

We have identified four areas we’d like to focus on, which I will explain in detail below. Those are documented (in German) on a Wiki page as well, to be as transparent as possible. We also asked for feedback through the community list and didn’t get any responses arguing against this plan.

Community Structure

In 6 months it’s clear for new and existing contributors who is working in which functional area and who to contact if there are any questions. New contributors know which areas need help and know about good first contributions to do.

Goals:

  • Understandable documentation of every contribution area the German-speaking community is active in. At least 60% of the areas are documented initially.
  • There are contact persons listed per contribution area with clear means of contact. At least 80% of the initially defined areas have at least one contact person for new contributors. For the three biggest areas there are at least two contact persons.
  • Handling of new contributors is defined clearly for all contribution areas, including responsibilities for individuals and groups. The onboarding process is clearly specified and we get at least two new long-term contributors per area. These new contributors can be onboarded within a few weeks with the help of the contact persons as mentors. Further mentors can be defined without them needing to be “contact persons”.

Website

In 6 months the mozilla.de website is the base for all information about Mozilla, its local community and contribution possibilities. Users of the website get valuable information about the community and find contribution possibilities which can be started without a lot of time investment to get used to the product. The website is the portal to the community.

Goals:

  • The website clearly states the possibilities to contribute to the German-speaking community (even if this is only a link to a well defined /contribute page)
  • The website lists all current Mozilla product and projects
  • The content defined in February 2015 is re-evaluated and incorporated as needed
  • The website is the main entry point to the community and promoted as such
  • Through the new website we get at least 10% of new contributors who found us through it

Meetings / Updates

In 6 months discussions among the community members are well distributed. New topics are started by a broad base and topics are being discussed by a wide range of contributors.

Goals:

  • There are at least 6 active participants per meeting
  • The meeting is structured for efficiency and has a reasonable ratio between discussion and update topics. There are enough discussion points so that updates can be treated as “read only” 60% of the time.
  • The satisfaction of the participants who would like to join is increased by 30%
  • There are at least 10 unique participants in discussions on the mailing list

Social Media

In 6 months the German-speaking community is active on the most important social media channels and represents Mozilla’s mission and the community achievements to the public. Followers learn more about the community and learn about the newest updates and initiatives the community is supporting. Additionally these channels are used to promote urgent call-to-actions.

Goals:

  • The different channels are clearly separated and the user knows what content needs to be expected.
  • We have at least 1200 followers with @mozilla_deutsch, @MozillaDe and @FirefoxDe (not unique followers)
  • We have at least 750 “likes” on our Facebook page
  • We keep users engaged and updated with at least 8 tweets per month per channel
  • There are at least 3 maintainers for the different accounts

 

To track the progress we created a GitHub repository in our organization, where everybody can create issues to track a certain task. There are four labels which make it possible to filter for a specific improvement area. Of course, feel free to create your own issues in this GitHub repo as well, even if it might not be 100% tied to the goals, but every contribution counts!

I have put together shareable summary slides for easy consumption in case you don’t want to forward the link to this blog post.

Even though I’m going to focus my time on the Mozilla Switzerland community, I will still help with and track the progress we’re doing with the mid-term plan implementations.

Feel free to get in touch with any of the focus group members linked above or the community-german mailing list in general for any questions you might have.

https://michaelkohler.info/2015/german-speaking-community-mid-term-planning


Ben Hearsum: Configuring uWSGI to host an app and static files

Thursday, 24 December 2015, 17:44

This week I started using uWSGI for the first time. I'm in the process of switching Balrog from Vagrant to Docker, and I'm moving away from Apache in the process. Because of Balrog's somewhat complicated Apache config this ended up being more difficult than I thought. Although uWSGI's docs are OK, I found it a little difficult to put them into practice without examples, so here's hoping this post will help others in similar situations.

Balrog's Admin app consists of a pretty standard Python WSGI app, and a static Angular app hosted on the same domain. To complicate matters, the version of Angular that we use does not support being hosted anywhere except the root of the domain. It took a bit of futzing, but we came up with an Apache config to host both of these pieces on the same domain pretty quickly:


    ServerName balrog-admin.mozilla.dev
    DocumentRoot /home/vagrant/project/ui/dist/

    # Rewrite virtual paths in the angular app to the index page
    # so that refreshes/linking works, while leaving real files
    # such as the js/css alone.
    <Directory /home/vagrant/project/ui/dist>
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d

        RewriteRule ^ - [L]
        RewriteRule ^ index.html [L]
    </Directory>

    # The WSGI app is rooted at /api
    WSGIScriptAlias /api /home/vagrant/project/admin.wsgi
    WSGIDaemonProcess aus4-admin processes=1 threads=1 maximum-requests=50 display-name=aus4-admin
    WSGIProcessGroup aus4-admin
    WSGIPassAuthorization On

    # The WSGI app relies on the web server to do the authentication, and will
    # bail if REMOTE_USER isn't set. To simplify things, we just set this
    # variable instead of prompting for auth.
    SetEnv REMOTE_USER balrogadmin

    LogLevel Debug
    ErrorLog "|/usr/sbin/rotatelogs /var/log/httpd/balrog-admin.mozilla.dev/error_log_%Y-%m-%d 86400 -0"
    CustomLog "|/usr/sbin/rotatelogs /var/log/httpd/balrog-admin.mozilla.dev/access_%Y-%m-%d 86400 -0" combined

Translating this to uWSGI took way longer than expected. Among the problems I ran into were:

  • Using --env instead of --route's addvar action to set REMOTE_USER (--env turns out to be for passing variables to the overall WSGI app).
  • Forgetting to escape "$" when passing routes on the command line, which caused my shell to interpret variables intended for uWSGI
  • Trying to rewrite URLs to a static path, which I only discovered is invalid after stumbling on an old mailing list thread.
  • Examples from uWSGI's own documentation did not work! I discovered that depending on how it was compiled, you may need to pass "--plugin python,http" to give all of the necessary command line options for what I was doing.

After much struggle, I came up with an invocation that worked exactly the same as the Apache config:

uwsgi --http :8080 --mount /api=admin.wsgi --manage-script-name --check-static /app/ui/dist --static-index index.html --route "^/.*$ addvar:REMOTE_USER=balrogadmin" --route-if "startswith:\${REQUEST_URI};/api continue:" --route-if-not "exists:/app/ui/dist\${PATH_INFO} static:/app/ui/dist/index.html"

There's a lot crammed in there, so let's break it down:

  • --http :8080 tells uWSGI to listen on port 8080
  • --mount /api=admin.wsgi roots the "admin.wsgi" app in /api. This means that when you make a request to http://localhost:8080/api/foo, the application sees "/foo" as the path. If there was no Angular app, I would simply use "--wsgi-file admin.wsgi" to place the app at the root of the server.
  • --manage-script-name causes uWSGI to rewrite PATH_INFO and SCRIPT_NAME according to the mount point. This isn't necessary if you're not using "--mount".
  • --check-static /app/ui/dist points uWSGI at a directory of static files that it should serve. In my case, I've pointed it at the fully built Angular app. With this, requests such as http://localhost:8080/js/app.js returns the static file from /app/ui/dist/js/app.js.
  • --static-index index.html tells uWSGI to serve index.html when a request for a directory is made - the default is to 404, because there's no built-in directory indexing.
  • The --route's chain together, and are evaluated as follows:
  • If the requested path matches ^/.*$ (all paths will), set the REMOTE_USER variable to balrogadmin.
  • If the REQUEST_URI starts with /api do not process any more --route's; just satisfy the request. All requests intended for the WSGI app will end up matching here. REQUEST_URI is used instead of PATH_INFO because the latter is written by --manage-script-name
  • If the requested file does not exist in /app/ui/dist, serve /app/ui/dist/index.html instead. PATH_INFO and REQUEST_URI will still point at the original file, which lets Angular interpret the virtual path and serve the correct thing.

In the end, uWSGI seems to be one of those things that's very scary when you first approach it (I count about 750 command line arguments), but is pretty easy to understand when you get to know it a bit better. This is almost the opposite of Apache - I find it much more approachable, perhaps because there's such a litany of examples out there, but things like mod_rewrite are very difficult for me to understand after the fact, at least compared to uWSGI's --route's.
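
For completeness, the same invocation should translate to a uWSGI ini file roughly like this (a sketch based on uWSGI's usual one-to-one mapping of command line options to ini keys; I haven't tested this exact file):

[uwsgi]
; Sketch only: same options as the command line above, expressed as ini keys.
http = :8080
mount = /api=admin.wsgi
manage-script-name = true
check-static = /app/ui/dist
static-index = index.html
route = ^/.*$ addvar:REMOTE_USER=balrogadmin
route-if = startswith:${REQUEST_URI};/api continue:
route-if-not = exists:/app/ui/dist${PATH_INFO} static:/app/ui/dist/index.html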

http://hearsum.ca/blog/configuring-uwsgi-to-host-an-app-and-static-files.html


