
Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/


You can add any RSS source (including a LiveJournal journal) to your friends feed on the syndication page.

Original source: http://planet.mozilla.org/.
This journal is generated from the public RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that feed is updated. It may not match the content of the original page. The syndication was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact information page.


Gregory Szorc: Mercurial Server Hiccup 2014-11-06

Friday, November 7, 2014, 14:00

We had a hiccup on hg.mozilla.org yesterday. It resulted in prolonged tree closures for Firefox. Bug 1094922 tracks the issue.

What Happened

We noticed that many HTTP requests to hg.mozilla.org were getting 503 responses. At first glance, the servers looked healthy: CPU utilization was below 100%, I/O wait was reasonable, and there was little to no swapping. Furthermore, the logs showed a healthy stream of requests being successfully processed at typical levels. In other words, it looked like business as usual on the servers.

Upon deeper investigation, we noticed that the WSGI process pool on the servers was fully saturated. There were 24 slots/processes per server allocated to process Mercurial requests, and all 24 of them were actively processing requests. This created a backlog of requests that had been accepted by the HTTP server but were waiting for an internal dispatch/proxy to WSGI. To the client, this appeared as a request with a long lag before response generation.
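As a rough illustration (my own arithmetic, not figures from the incident), pool exhaustion is just Little's law at work: once arrival rate times average service time exceeds the number of slots, a backlog must form.

```python
# Rough queueing sketch: a fixed pool of workers saturates once
# arrival_rate * avg_service_time exceeds the number of slots.
# The 24-worker count is from the post; the rates and service times
# below are illustrative assumptions, not measurements.

def pool_utilization(arrival_rate, avg_service_time, workers):
    """Fraction of the worker pool busy in steady state (Little's law)."""
    in_flight = arrival_rate * avg_service_time  # average requests in service
    return in_flight / workers

# Normal day: 5 req/s at 2 s average service time, 24 workers -> ~42% busy.
print(pool_utilization(5.0, 2.0, 24))

# Slow network: the same 5 req/s but 6 s per request -> 125%, i.e. the
# pool is saturated and accepted-but-unserved requests start to queue.
print(pool_utilization(5.0, 6.0, 24))
```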

Mitigation

This being an emergency (trees were already closed and developers were effectively unable to use hg.mozilla.org), we decided to increase the size of the WSGI worker pool. After all, we had CPU, I/O, and memory capacity to spare and we could identify the root cause later. We first bumped worker pool capacity from 24 to 36 and immediately saw a significant reduction in the number of pending requests awaiting a WSGI worker. We still had spare CPU, I/O, and memory capacity and were still seeing requests waiting on a WSGI worker, so we bumped the capacity to 48 processes. At that time, we stopped seeing worker pool exhaustion and all incoming requests were being handed off to a WSGI worker as soon as they came in.

At this time, things were looking pretty healthy from the server end.

Impact on Memory and Swapping

Increasing the number of WSGI processes had the side effect of increasing the total amount of system memory used by Mercurial processes in two ways. First, more processes means more memory. That part is obvious. Second, more processes means fewer requests per process per unit of time, and thus it takes longer for each process to reach its maximum number of requests before being reaped. (It is common practice for servers to have a single process handle multiple requests. This avoids the overhead of spawning a new process and loading possibly expensive context into it.)

We had our Mercurial WSGI processes configured to serve 100 requests before being reaped. With the doubling of WSGI processes from 24 to 48, those processes were lingering for 2x as long as before. Since the Mercurial processes grow over time (they are aggressive about caching repository data), this was slowly exhausting our memory pool.
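A back-of-the-envelope sketch of that interaction (the request rate is a made-up number; only the worker counts and max-requests values come from the post):

```python
# Sketch of why doubling the worker pool doubles process lifetime at a
# fixed traffic level: each process sees half the requests per second, so
# it takes twice as long to hit its max-requests limit and be reaped.
# The 24->48 worker and 100->50 max-requests figures are from the post;
# the 5 req/s total rate is an illustrative assumption.

def process_lifetime(total_rate, workers, max_requests):
    """Seconds an average worker lives before reaching max_requests."""
    per_worker_rate = total_rate / workers
    return max_requests / per_worker_rate

base = process_lifetime(total_rate=5.0, workers=24, max_requests=100)
doubled = process_lifetime(total_rate=5.0, workers=48, max_requests=100)
rebalanced = process_lifetime(total_rate=5.0, workers=48, max_requests=50)

print(doubled / base)      # ~2.0: processes linger twice as long
print(rebalanced / base)   # ~1.0: halving max-requests restores the old lifetime
```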

It took a few hours, but a few servers started flirting with high swap usage. (We don't expect the servers to swap.) This is how we identified that memory use wasn't sane.

We lowered the maximum requests per process from 100 to 50 to match the ratio increase in the WSGI worker pool.

Mercurial Memory "Leak"

When we started looking at the memory usage of WSGI processes in more detail, we noticed something strange: RSS of Mercurial processes was growing steadily when processes were streaming bundle data. This seemed very odd to me. Being a Mercurial developer, I was pretty sure the behavior was wrong.

I filed a bug against Mercurial.

I was able to reproduce the issue locally and started running a bisection to find the regressing changeset. To my surprise, this issue has been around since Mercurial 2.7!

I looked at the code in question, identified why so much memory was being allocated, and submitted patches to stop an unbounded memory growth during clone/pull and to further reduce memory use during those operations. Both of those patches have been queued to go in the next stable release of Mercurial, 3.2.1.

Mercurial 3.2 is still not as memory efficient during clones as Mercurial 2.5.4. If I have time, I'd like to formulate more patches. But the important fix - not growing memory unbounded during clone/pull - is in place.

Armed with the knowledge that Mercurial is leaky (although not a leak in the traditional sense since the memory was eventually getting garbage collected), we further reduced the max requests per process from 50 to 20. This will cause processes to get reaped sooner and will be more aggressive about keeping RSS growth in check.

Root Cause

We suspect the root cause was a network event.

Before this outage, we rarely had more than 10 requests being served from the WSGI worker pool. In other words, we were often well below 50% capacity. But something changed yesterday. More slots were being occupied and high-bandwidth operations were taking longer to complete. Kendall Libby noted that outbound traffic dropped by ~800 Mbps during the event. For reasons that still haven't been identified, the network became slower, clones weren't being processed as quickly, and clients were occupying WSGI processes for longer amounts of time. This eventually exhausted the available process pool, leading to HTTP 503's, intermittent service availability, and a tree closure.

Interestingly, we noticed that in-flight HTTP requests are abnormally high again this morning. However, because the servers are now configured to handle the extra capacity, we seem to be powering through it without any issues.

In Hindsight

You can make the argument that the servers weren't configured to serve as much traffic as possible. After all, we were able to double the WSGI process pool without hitting CPU, I/O, and memory limits.

The servers were conservatively configured. However, the worker pool was initially configured at 2x CPU core count. And as a general rule of thumb, you don't want your worker pool to be much greater than CPU count because that introduces context switching and can give each individual process a smaller slice of the CPU to process requests, leading to higher latency. Since clone operations often manage to peg a single CPU core, there is some justification for keeping the ratio of WSGI workers to CPU count low. Furthermore, we rarely came close to exhausting the WSGI worker pool before. There was little to no justification for increasing capacity to a threshold not normally seen.

But at the same time, even with 4x workers to CPU cores, our CPU usage rarely flirts with 100% across all cores, even with the majority of workers occupied. Until we actually hit CPU (or I/O) limits, running a high multiplier seems like the right thing to do.

Long term, we expect CPU usage during clone operations to drop dramatically. Mike Hommey has contributed a patch to Mercurial that allows servers to hand out the URL of a bundle file to fetch during clone. A server can effectively say: "I have your data: fetch this static file from S3, then apply this small subset of the data that I'll give you." When properly deployed and used at Mozilla, this will drop server-side CPU usage for clones to practically nothing.
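A purely illustrative sketch of the idea, not Mercurial's actual wire protocol (all names, revision numbers, and URLs here are made up):

```python
# Illustrative sketch of the "clone via pre-generated bundle" idea:
# instead of computing a full bundle per clone (CPU-heavy), the server
# points the client at a static snapshot and only serves the small delta
# of revisions on top of it. Everything here is hypothetical.

def server_clone_response(latest_rev, snapshot_rev, snapshot_url):
    """What the server hands back instead of streaming a full bundle."""
    return {
        "bundle_url": snapshot_url,  # static file, e.g. on S3 or a CDN
        # half-open-ish range of revisions the server still streams itself
        "incremental_revs": (snapshot_rev + 1, latest_rev),
    }

resp = server_clone_response(
    latest_rev=100050,
    snapshot_rev=100000,
    snapshot_url="https://cdn.example.com/mozilla-central.bundle",
)
# The client fetches the big snapshot from the CDN, then asks the
# Mercurial server for only the ~50 revisions on top of it.
print(resp["incremental_revs"])  # (100001, 100050)
```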

Where to do Better

There was a long delay between the Nagios alerts firing and someone with domain-specific knowledge looking at the problem.

The trees could have reopened earlier. We were pretty confident about the state of things at 1000. Trees opened in metered mode at 1246 and completely at 1909. Although, the swapping issue wasn't mitigated until 1615, so you can argue that being conservative on the tree reopening was justified. There is a chance that full reopening could have triggered excessive swap and another round of chaos for everyone involved.

We need an alert on WSGI pool exhaustion. It took longer than it should have to identify this problem. However, now that we've encountered it, it should be obvious if/when it happens again.
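Such an alert could look something like the following Nagios-style check. The thresholds and the way busy-worker counts would be obtained are assumptions for illustration, not our actual configuration:

```python
# Hypothetical Nagios-style check for WSGI pool exhaustion. The busy and
# total worker counts would come from the server's status interface; the
# 75%/90% thresholds are made-up illustrative defaults.

OK, WARNING, CRITICAL = 0, 1, 2  # standard Nagios plugin exit codes

def check_pool(busy, total, warn_pct=75, crit_pct=90):
    """Return a (nagios_status, message) pair for WSGI pool usage."""
    pct = 100.0 * busy / total
    msg = f"{busy}/{total} WSGI workers busy ({pct:.0f}%)"
    if pct >= crit_pct:
        return CRITICAL, "CRITICAL: " + msg
    if pct >= warn_pct:
        return WARNING, "WARNING: " + msg
    return OK, "OK: " + msg

# During the outage all 24 of 24 workers were busy -> CRITICAL.
print(check_pool(24, 24))
# After the bump to 48 workers, the same load is comfortably OK.
print(check_pool(24, 48))
```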

Firefox release automation is the largest single consumer of hg.mozilla.org. Since they are operating thousands of machines, any reduction in interaction or increase in efficiency will result in drastic load reductions on the server. Chris AtLee and Jordan Lund have been working on bug 1050109 to reduce clones of the mozharness and build/tools repositories, which should go a long way to dropping load on the server.

Timeline of Events

All times PST.

November 6

  • 0705 - First Nagios alerts fire
  • 0819 - Trees closed
  • 0915 - WSGI process pool increased from 24 to 36
  • 0945 - WSGI process pool increased from 36 to 48
  • 1246 - Trees reopen in metered mode
  • 1615 - Decrease max requests per process from 100 to 50
  • 1909 - Trees open completely

November 7

  • 0012 - Patches to reduce memory usage submitted to Mercurial
  • 0800 - Mercurial patches accepted
  • 0915 - Decrease max requests per process from 50 to 20

http://gregoryszorc.com/blog/2014/11/07/mercurial-server-hiccup-2014-11-06


Pierros Papadeas: Mozilla Arabic Meetup 2014 and thoughts on Regional Support

Friday, November 7, 2014, 13:37

Last weekend I had the pleasure to be among the Mozilla Arabic meetup for their annual community meeting, this time in Istanbul, Turkey.

The meetup schedule was packed for two full days, and we barely had time to cover all planned items. We made it, though, thanks to the fantastic organizing team (Melek, Sofien, Majda, Rami (who joined remotely), Migdadi and Nefzaoui).

Note to self #1: This is once again a reminder that 30-person meetups that happen annually (or less frequently) need to run beyond 2 days. Adding half a day on Friday would help tremendously, enabling everyone to sync up, bringing people up to speed, and informing the schedule of the next two days.

The first day was dedicated to meta-community organization issues. The Arabic community is a group of regional communities that come together under shared goals (especially around l10n). The challenge of having such a meta-community is that the regional communities already have structure, leadership, pace and goals in place, and those are not necessarily compatible with one another. We initially spent some time determining the shared functions, roles and goals that should be handled at the meta-community level rather than by individual communities (things like l10n oversight, Arabic community visibility, cross-community events and activities, etc.). The structure proposed (which I fully support) is a coordination committee with a rolling chair. Each community gets to be the chairing ("hosting") one, driving and coordinating the meta-community for a period of six months. Then another community takes over.

The notable pros of this approach are the load shared over time, the visibility it brings to individual communities, the helpful exposure to different coordination styles, and the sense of involvement and leadership all communities will get to experience. The ball is already rolling with this approach, and a meeting next week will determine the first chairing community and finalize the way forward.

The second day was more project-specific. We had three core themes (l10n, Firefox OS and Webmaker) and we split up into groups to hold sessions on them. Partially training, partially brainstorming on upcoming activities in the region, it was a productive experience for both participants and session owners. Haven’t showcased WebIDE to people yet? Introduce them to the magic of developing apps with Firefox Desktop and watch them drool.

During the meetup we also had a long session on participation and community building (which was somewhat different from the approach taken in previous meetings). This time we introduced the idea of “innovation from the edges” and brainstormed under two arcs: “innovative ideas that you would like to work on” and “ways that the rest of the Mozilla project could help you”.

Starting with the realization that the Mozilla project (supported by the Mozilla Corporation and the Mozilla Foundation) cannot plan, execute, innovate on and support all possible activities and projects that advance the Mozilla mission, we let people loose to come up with regional (and global) activities and projects that would bring innovation to Mozilla and help us advance our mission. The response was enthusiastic and informative. People quickly came up with ideas they would like to work on, ranging from engineering projects to partnerships with other projects on the ground. More interestingly, patterns emerged under the “how the rest of Mozilla can support you” arc. Hands-on training (technical or not), a mandate to represent Mozilla, access to tools and systems (in an open way), and IT resources were some recurring themes we identified. All of this will be taken back to the Mozilla Community Building team and the appropriate Working Groups to inform our strategy for the near future and enable us to better support regional and functional communities.

Note to self #2: Budget and swag (our default go-tos for regional support) were not even mentioned in the “how we can support you” session. We may need to rethink many of our assumptions moving forward.

I am confident that the Arabic community has a solid way forward planned after this meetup, and I can’t wait to see the results. As for the learnings we took from this weekend, we need to evaluate them and use them to inform our participation strategy going forward.

Event wiki page: https://wiki.mozilla.org/ArabicMozillaMeetup/2014
Analysis of the community: https://etherpad.mozilla.org/arabic-meetup-swot
Action plan: https://arabicmozilla.etherpad.mozilla.org/meetup-14-action-plan

http://pierros.papadeas.gr/?p=412


Pierros Papadeas: Systems and Data principles

Friday, November 7, 2014, 12:42

For a year now the Systems and Data Working Group of Mozilla has been meeting, brainstorming about community building systems, designing and implementing them and pioneering new ways to measure contribution activity across Mozilla.

In the process of evaluating existing systems (like mozillians.org) and creating new ones (like Baloo), it became obvious that we needed a common set of principles to apply to all systems that serve community building within Mozilla. That would enable Mozillians to easily access tools and contribute in a way that maximizes impact. We, as the Systems and Data Working Group, recommend that these principles be adopted for all tools used by Mozilla.

The principles, grouped into buckets, are:

  • Unified Identity
    • Tools should have single source of truth for people data
      • Integration with HRIS
      • mozillians.org already has staff and volunteer information, so it is a good candidate as the single source of truth
    • Tools should de-duplicate people information by integrating with a single source of truth
    • e.g. Reps: Not integrated with Mozillians.org, lots of duplicate information on two profiles
  • Unified Authentication and Authorization
    • Tools should use a single identity platform that provides permissions-based access to tools (like Mozillians.org)
    • e.g. Reps: add people to the Reps group on mozillians.org to give them permission to use reps.mozilla.org as a Rep
  • Accessible Metrics
    • Tools should track each contribution a Mozillian makes and provide it in an accessible way to create a holistic view of contributions
  • Localization
    • Tools should be localized so they are accessible to our global community
  • Education
    • Tools should teach the user how to use the tool, answer common usage questions, and have general documentation
  • Recognition
    • Tools should recognize the contributions that they enable
  • Participation
    • Tools should enable anyone to improve the tool by filing bugs, suggesting ideas and bringing those ideas to life
  • Content de-duplication
    • Tools should de-duplicate the content that is created in those tools, making it accessible to other systems
  • Fun
    • Tools should be personal and written in the Mozilla voice

This has been a collaborative effort involving various stakeholders of tools within Mozilla that have been reviewing those and providing feedback during our meetings. We are seeking more feedback moving forward especially with regards to how those impact the Roadmap of various tools and translate to actual features. Feel free to comment here or join our discussions in the community-building mailing list.

 

http://pierros.papadeas.gr/?p=409


Jared Wein: The Bugs Blocking In-Content Prefs

Friday, November 7, 2014, 04:23

If you’ve been following my blog, you know that there has been a long ongoing project to rewrite Firefox’s preferences and move them to a page within the browser.

Work has continued on that front, but it has been moving at a slow pace. Today, representatives from engineering, user experience, and project management met to draw up the remaining list of bugs blocking us from shipping in-content preferences to the Release channel.

In total, we have 20 bugs blocking the release. The bugs fit into five different categories. Bugs that should be easy for a newcomer to pick up and finish are italicized.

If you’d like to work on one of the above bugs, please click on the bug and read the details. If you have any questions, please post the question in the bug and someone will get back to you. Thanks!


Tagged: firefox, planet-mozilla, ux

http://msujaws.wordpress.com/2014/11/06/the-bugs-blocking-in-content-prefs/


L. David Baron: A possible approach to shorter release cycles

Friday, November 7, 2014, 01:12

One of the problems we've been facing with Mozilla's release cycle is that it takes a relatively long time for code to get from commit to the mozilla-central repository to get into the hands of our users. It's currently 12-18 weeks, because the current process has four repositories (central, aurora, beta, release) with most code landing on central, and with code shifting from one repository to the next every six weeks, and shipping to Firefox release users when it reaches the release repository:

[Diagram showing flow of code across nightly, aurora, beta, and release repositories, with movement from one to the next every six weeks, and channel populations corresponding to each repository]

In addition to being slower than we'd like, we're not getting quite as much feedback as we'd like since the population of aurora users is relatively small and have habits much more similar to our nightly users than our release users. This means that we don't get feedback about many real-world problems until code reaches the beta channel.

One alternative that's been discussed a few times is to have one fewer channel. People have brought up drawbacks with that approach, such as that code pulled from mozilla-central the previous day isn't necessarily ready to be shipped to the large population of users on the beta channel.

To address that, I'd like to propose an alternative that shortens the path by six weeks (and removes one of the four repository stages), but keeps the four separate populations:

[Diagram showing revised structure with three repositories, differing from current behavior by placing the beta user population on the release repository for the first part of the cycle but then on the aurora repository for the bulk of the cycle]

This model differs from what we do now by eliminating the mozilla-beta repository, and thus removing six weeks from the cycle. Users on the nightly, aurora, and release channels would get code like they do now, tied to one repository. Users on the beta channel, however, could get code from the release repository for the first week or so of the cycle (right after we ship that release), and then get code from the aurora repository (which will become the next release) for the rest of the cycle. In other words, users on the beta channel won't change the version that they're running on merge day, but instead a week (or maybe two) later.

Having the beta users switch repositories has two advantages. First, it means that beta users will be able to beta-test any point releases that we ship in the first week or so of the cycle. Second, it means that we'll have a week or so to fix any serious stability problems in the aurora repository before updating all beta users to it.
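The proposed channel-to-repository mapping can be sketched as a simple function (the exact one-week switchover point is illustrative; the post says "a week or maybe two"):

```python
# Sketch of the proposed channel -> repository mapping over a 6-week
# cycle. Week numbers and the 1-week beta switchover are illustrative
# assumptions drawn from the prose above, not a finalized schedule.

def repo_for_channel(channel, week):
    """Which repository a channel's users run from in week 0..5 of a cycle."""
    if channel == "nightly":
        return "mozilla-central"
    if channel == "aurora":
        return "mozilla-aurora"
    if channel == "release":
        return "mozilla-release"
    if channel == "beta":
        # Beta users stay on the just-shipped release for the first week
        # (so they can test point releases), then move to aurora, which
        # will become the next release, for the rest of the cycle.
        return "mozilla-release" if week < 1 else "mozilla-aurora"
    raise ValueError(f"unknown channel: {channel}")

print(repo_for_channel("beta", 0))  # mozilla-release
print(repo_for_channel("beta", 3))  # mozilla-aurora
```

Note there is no mozilla-beta repository at all in this model, which is exactly where the six-week saving comes from.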

To be clear, this is just my proposal for what I think we could do, not something that anyone is already planning to do. But I think it could be a good way to get us a faster release cycle that would allow us to get our work faster into our users' browsers.

http://dbaron.org/log/20141106-release-cycles


Mozilla Fundraising: gear.mozilla.org: The New Public Face of Official Mozilla Gear

Friday, November 7, 2014, 00:36
The new home of Official Mozilla Gear is coming soon! The site will officially launch later this year, and many Mozillians know that this has been a long time coming. It will be built for the public — where anyone … Continue reading

http://fundraising.mozilla.org/gear-mozilla-org-the-new-public-face-of-official-mozilla-gear/


Armen Zambrano: Setting buildbot up a-la-releng (Create your own local masters and slaves)

Thursday, November 6, 2014, 17:02
buildbot is what Mozilla's Release Engineering uses to run the infrastructure behind tbpl.mozilla.org.
buildbot assigns jobs to machines (aka slaves) through hosts called buildbot masters.

All the different repositories and packages needed to set up buildbot are installed through Puppet, and I'm not aware of a way to set up my local machine through Puppet (I doubt I would want to do that!).
I managed to set this up by hand a while ago [1][2] (it was even more complicated in the past!). However, these one-off attempts were not easy to keep up to date and isolated.

I recently landed a few scripts that make it trivial to set up as many buildbot environments as you want, all isolated from each other.

All the scripts have been landed under the "community" directory under the "braindump" repository:
https://hg.mozilla.org/build/braindump/file/default/community

The two main scripts:

If you call create_community_slaves_and_masters.sh with -w /path/to/your/own/workdir you will have everything set up for you. From there on, all you would have to do is this:
  • cd /path/to/your/own/workdir
  • source venv/bin/activate
  • buildbot start masters/test_master (for example)
  • buildslave start slaves/test_slave
Each paired master and slave has been set up to talk to the other.

I hope this is helpful for people out there. It's been great for me when I contribute patches for buildbot (bug 791924).

As always in Mozilla, contributions are welcome!

PS 1: Only tested on Ubuntu. If you want to port this to other platforms, please let me know and I can give you a hand.

PS 2: I know there is a repository of docker images called "tupperware"; however, I had been working on this set of scripts for a while. Perhaps someone wants to figure out how to set up a similar process through the docker images.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://feedproxy.google.com/~r/armenzg_mozilla/~3/4VXeyIF4mQQ/setting-buildbot-up-la-releng-create.html


Henrik Skupin: Firefox Automation report – week 35/36 2014

Thursday, November 6, 2014, 13:08

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 35 and 36.

Highlights

Due to a lot of Mozmill test failures related to add-on installation and search, we moved from the addons-dev.allizom.org website to the staging website located at addons.allizom.org. Since then we have been experiencing far fewer test failures; the remaining ones are most likely network related.

To keep up with all the test failures and requests for new tests, we started a weekly bug triage meeting on Fridays. For details, we have created an etherpad.

If you are interested in helping us with Mozmill tests, you can now find a list of mentored and good first bugs on the bugsahoy web page.

Because of the app bundle changes on OS X, which were necessary due to the v2 signing requirements, we had to fix and enhance a couple of our automation tools. Henrik updated mozversion, mozmill, and mozmill-automation. We were close to releasing Mozmill 2.0.7.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 35 and week 36.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 35 and week 36.

http://www.hskupin.info/2014/11/06/firefox-automation-report-week-35-36-2014/


Alexander Surkov: Yandex.People

Wednesday, November 5, 2014, 22:34
This might be a slightly strange post: it's not about the web, it's about the people working at Yandex. Yandex is the largest IT company in Russia (some also refer to it as the Russian Google). They invited me to their YaC conference this year to talk about accessibility and the web. I was curious about Yandex and interested in an update on the state of accessibility in Russia, so I decided to go.

The YaC conference itself was surprisingly short, just one day. Otherwise it looked very much like any other IT conference I have visited. Maybe it felt a bit overcrowded: about 1500 people, long lines at the coffee tables, and some manoeuvres to avoid collisions with other people on your way. But that must be quite in the Moscow spirit, a very overcrowded city.

I attended the front-end track of the conference. Front-end developers are the people who create accessibility use cases, so I had a chance to learn something new. I was also supposed to participate in a panel discussion scheduled at the end of the track. Some talks were interesting, and I had a rare chance to look at things from the "other" side (after all, I'm the browser guy), but some talks were rather strange. For example, one speaker said that web standards development in the Netscape era looked the following way:


I know a few people from the Netscape era and none of them looked that way, so I'm somewhat skeptical of the claim. These talks didn't have any time reserved for questions, so I was left alone with my comments. Yeah, at least I have a chance to say it here :)

The next day Yandex invited me to their office to give a talk about accessibility. I didn't have a good idea what kind of audience I would be talking to, so I planned a general talk. I talked about accessibility on desktop, mobile and the web, about technologies and the future, and I got a bunch of interesting questions. I've been told that over a hundred people participated (on-site and off-site), and that was exciting, partly because I didn't really expect that level of interest.

Russia has WCAG-based recommendations, but afaik there are no laws that would force businesses to develop accessible solutions. I realize that Russia is a very dynamic country and tomorrow you could get a new law requiring you to make your apps and services accessible, but I don't think that's the case yet. I would guess there is some social motivation, like earning kudos, but I didn't see any evidence of that. They might be targeting new markets, or it may be a cultural trait. But they didn't show me the secret labs where they grow their accessibility awareness, so I failed to identify the reasons :)

Anyway, people at Yandex are interested in accessibility as a matter of fact, and they know much more about accessibility than I could have guessed; for example, they know the a11y tricks for making web content accessible quite well. But - and this was my impression - while they are ready to learn and use any accessibility technology or standard, they are still behind in participating in its creation and development. You have to share your ideas and thoughts; otherwise your voice is lost.

(Probably out of context, but still a representative example.) I had a chat with one Yandex guy. I said:
- Guys, you do create the web.
- No, we don't.
- But you created so many cool web services, you have so many users.
- (skeptically) Ok, maybe runet.

So yeah, I think they should chime into the accessibility world more; after all, they create the use cases, and that's where accessibility grows from.

http://asurkov.blogspot.com/2014/11/yandexpeople.html


Asa Dotzler: Foxtrot Update 2014-11-05

Wednesday, November 5, 2014, 22:25

Hello Foxtrotters!

Our order of Flame devices is expected to arrive in the next week to ten days. In parallel, our integration partner is working on the Foxtrot builds of Firefox OS and the update system that will give y’all monthly development snapshots.

Once the device order arrives in Mountain View and I’ve got the Firefox OS Foxtrot image from our partner, I’ll spend a day flashing the image onto the devices, packaging them up and sending them out. It’s my intention to see the devices arrive, flashed and ready to receive the first official Foxtrot testing build over the air, by the end of this month.

http://asadotzler.com/2014/11/05/foxtrot-update-2014-11-05/


Matt Brubeck: Let's build a browser engine! Part 7: Painting 101

Wednesday, November 5, 2014, 20:55

I’m returning at last to my series on building a simple HTML rendering engine:

In this article, I will add very basic painting code. This code takes the tree of boxes from the layout module and turns them into an array of pixels. This process is also known as “rasterization.”

Browsers usually implement rasterization with the help of graphics APIs and libraries like Skia, Cairo, Direct2D, and so on. These APIs provide functions for painting polygons, lines, curves, gradients, and text. For now, I’m going to write my own rasterizer that can only paint one thing: rectangles.

Eventually I want to implement text rendering. At that point, I may throw away this toy painting code and switch to a “real” 2D graphics library. But for now, rectangles are sufficient to turn the output of my block layout algorithm into pictures.

Catching Up

Since my last post, I’ve made some small changes to the code from previous articles. These include some minor refactoring and some updates to keep the code compatible with the latest Rust nightly builds. None of these changes are vital to understanding the code, but if you’re curious, check the commit history.

Building the Display List

Before painting, we will walk through the layout tree and build a display list. This is a list of graphics operations like “draw a circle” or “draw a string of text.” Or in our case, just “draw a rectangle.”

Why put commands into a display list rather than execute them immediately? The display list is useful for several reasons. You can search it for items that will be completely covered up by later operations, and remove them to eliminate wasted painting. You can modify and re-use the display list in cases where you know only certain items have changed. And you can use the same display list to generate different types of output: for example, pixels for displaying on a screen, or vector graphics for sending to a printer.

Robinson’s display list is a vector of DisplayCommands. For now there is only one type of DisplayCommand, a solid-color rectangle:

type DisplayList = Vec<DisplayCommand>;

enum DisplayCommand {
    SolidColor(Color, Rect),
    // insert more commands here
}

To build the display list, we walk through the layout tree and generate a series of commands for each box. First we draw the box’s background, then we draw its borders and content on top of the background.

fn build_display_list(layout_root: &LayoutBox) -> DisplayList {
    let mut list = Vec::new();
    render_layout_box(&mut list, layout_root);
    return list;
}

fn render_layout_box(list: &mut DisplayList, layout_box: &LayoutBox) {
    render_background(list, layout_box);
    render_borders(list, layout_box);
    // TODO: render text

    for child in layout_box.children.iter() {
        render_layout_box(list, child);
    }
}

By default, HTML elements are stacked in the order they appear: If two elements overlap, the later one is drawn on top of the earlier one. This is reflected in our display list, which will draw the elements in the same order they appear in the DOM tree. If this code supported the z-index property, then individual elements would be able to override this stacking order, and we’d need to sort the display list accordingly.
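That sort step is small in practice. Here is a sketch of it in present-day Rust syntax (not the article's 2014 nightly code), keeping the display items generic rather than tying them to robinson's DisplayCommand type:

```rust
/// Order display items by z-index before painting. `Vec::sort_by_key`
/// is a stable sort, so items that share a z-index keep their original
/// (DOM tree) order -- which is exactly the default stacking behavior.
fn sort_by_z_index<T>(mut items: Vec<(i32, T)>) -> Vec<T> {
    items.sort_by_key(|&(z, _)| z);
    items.into_iter().map(|(_, item)| item).collect()
}
```

For example, `sort_by_z_index(vec![(1, "a"), (0, "b"), (1, "c")])` paints "b" first, then "a" and "c" in their DOM order.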

The background is easy. It’s just a solid rectangle. If no background color is specified, then the background is transparent and we don’t need to generate a display command.

fn render_background(list: &mut DisplayList, layout_box: &LayoutBox) {
    get_color(layout_box, "background").map(|color|
        list.push(SolidColor(color, layout_box.dimensions.border_box())));
}

/// Return the specified color for CSS property `name`, or None if no color was specified.
fn get_color(layout_box: &LayoutBox, name: &str) -> Option<Color> {
    match layout_box.box_type {
        BlockNode(style) | InlineNode(style) => match style.value(name) {
            Some(ColorValue(color)) => Some(color),
            _ => None
        },
        AnonymousBlock => None
    }
}

The borders are similar, but instead of a single rectangle we draw four—one for each edge of the box.

fn render_borders(list: &mut DisplayList, layout_box: &LayoutBox) {
    let color = match get_color(layout_box, "border-color") {
        Some(color) => color,
        _ => return // bail out if no border-color is specified
    };

    let d = &layout_box.dimensions;
    let border_box = d.border_box();

    // Left border
    list.push(SolidColor(color, Rect {
        x: border_box.x,
        y: border_box.y,
        width: d.border.left,
        height: border_box.height,
    }));

    // Right border
    list.push(SolidColor(color, Rect {
        x: border_box.x + border_box.width - d.border.right,
        y: border_box.y,
        width: d.border.right,
        height: border_box.height,
    }));

    // Top border
    list.push(SolidColor(color, Rect {
        x: border_box.x,
        y: border_box.y,
        width: border_box.width,
        height: d.border.top,
    }));

    // Bottom border
    list.push(SolidColor(color, Rect {
        x: border_box.x,
        y: border_box.y + border_box.height - d.border.bottom,
        width: border_box.width,
        height: d.border.bottom,
    }));
}

Next the rendering function will draw each of the box’s children, until the entire layout tree has been translated into display commands.

Rasterization

Now that we’ve built the display list, we need to turn it into pixels by executing each DisplayCommand. We’ll store the pixels in a Canvas:

struct Canvas {
    pixels: Vec<Color>,
    width: uint,
    height: uint,
}

impl Canvas {
    /// Create a blank canvas
    fn new(width: uint, height: uint) -> Canvas {
        let white = Color { r: 255, g: 255, b: 255, a: 255 };
        return Canvas {
            pixels: Vec::from_elem(width * height, white),
            width: width,
            height: height,
        }
    }
    // ...
}

To paint a rectangle on the canvas, we just loop through its rows and columns, using a helper method to make sure we don’t go outside the bounds of our canvas.

fn paint_item(&mut self, item: &DisplayCommand) {
    match item {
        &SolidColor(color, rect) => {
            // Clip the rectangle to the canvas boundaries.
            let x0 = rect.x.clamp(0.0, self.width as f32) as uint;
            let y0 = rect.y.clamp(0.0, self.height as f32) as uint;
            let x1 = (rect.x + rect.width).clamp(0.0, self.width as f32) as uint;
            let y1 = (rect.y + rect.height).clamp(0.0, self.height as f32) as uint;

            for y in range(y0, y1) {
                for x in range(x0, x1) {
                    // TODO: alpha compositing with existing pixel
                    self.pixels[x + y * self.width] = color;
                }
            }
        }
    }
}

Note that this code only works with opaque colors. If we added transparency (by reading the opacity property, or adding support for rgba() values in the CSS parser) then it would need to blend each new pixel with whatever it’s drawn on top of.
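To illustrate what that blending step could look like, here is a minimal "source over destination" blend in present-day Rust syntax. This is my own sketch, not robinson code: the Color struct is re-stated locally so it stands alone, it uses straight (non-premultiplied) alpha, and real engines usually premultiply for speed and correctness under repeated compositing.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Color { r: u8, g: u8, b: u8, a: u8 }

/// Composite `src` over `dst` using straight alpha. A fully opaque
/// source returns `src`; a fully transparent source leaves `dst` as-is.
fn blend(src: Color, dst: Color) -> Color {
    let a = src.a as f32 / 255.0;
    let mix = |s: u8, d: u8| (s as f32 * a + d as f32 * (1.0 - a)).round() as u8;
    Color {
        r: mix(src.r, dst.r),
        g: mix(src.g, dst.g),
        b: mix(src.b, dst.b),
        // Standard source-over alpha: a_out = a_src + a_dst * (1 - a_src)
        a: (src.a as f32 + dst.a as f32 * (1.0 - a)).round() as u8,
    }
}
```

The paint loop would then assign `blend(color, existing_pixel)` instead of overwriting the pixel.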

Now we can put everything together in the paint function, which builds a display list and then rasterizes it to a canvas:

/// Paint a tree of LayoutBoxes to an array of pixels.
fn paint(layout_root: &LayoutBox, bounds: Rect) -> Canvas {
    let display_list = build_display_list(layout_root);
    let mut canvas = Canvas::new(bounds.width as uint, bounds.height as uint);
    for item in display_list.iter() {
        canvas.paint_item(item);
    }
    return canvas;
}

Lastly, we can write a few lines of code using the Rust Image library to save the array of pixels as a PNG file.
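If you want to avoid the external dependency while experimenting, the same idea can be shown with the binary PPM format, which needs nothing beyond the standard library. This is my stand-in sketch (present-day Rust syntax), not part of robinson:

```rust
use std::fs::File;
use std::io::{self, Write};

/// Write an RGB pixel buffer (row-major, `width * height` entries)
/// as a binary PPM (P6) file. Alpha is dropped, matching the opaque
/// canvas produced by the painting code above.
fn write_ppm(path: &str, pixels: &[(u8, u8, u8)], width: usize, height: usize) -> io::Result<()> {
    let mut file = File::create(path)?;
    // PPM header: magic number, dimensions, maximum channel value.
    write!(file, "P6\n{} {}\n255\n", width, height)?;
    for &(r, g, b) in pixels {
        file.write_all(&[r, g, b])?;
    }
    Ok(())
}
```

Most image viewers open PPM files directly, so this is enough to eyeball the rasterizer's output.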

Pretty Pictures

At last, we’ve reached the end of our rendering pipeline. In under 1000 lines of code, robinson can now parse this HTML file:

<span class="a">
  <span class="b">
    <span class="c">
      <span class="d">
        <span class="e">
          <span class="f">
            <span class="g">
            </span>
          </span>
        </span>
      </span>
    </span>
  </span>
</span>

…and this CSS file:

* { display: block; padding: 12px; }
.a { background: #ff0000; }
.b { background: #ffa500; }
.c { background: #ffff00; }
.d { background: #008000; }
.e { background: #0000ff; }
.f { background: #4b0082; }
.g { background: #800080; }

…to produce this:

Yay!

Exercises

If you’re playing along at home, here are some things you might want to try:

  1. Write an alternate painting function that takes a display list and produces vector output (for example, an SVG file) instead of a raster image.

  2. Add support for opacity and alpha blending.

  3. Write a function to optimize the display list by culling items that are completely outside of the canvas bounds.

  4. If you’re familiar with OpenGL, write a hardware-accelerated painting function that uses GL shaders to draw the rectangles.
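For exercise 3, a starting point might look like the following sketch (present-day Rust syntax, with a stand-in Rect and numeric command ids instead of robinson's own types): a command can be dropped when its rectangle does not intersect the canvas at all.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Rect { x: f32, y: f32, width: f32, height: f32 }

/// True if `rect` overlaps a canvas of the given size whose
/// top-left corner is at the origin.
fn intersects_canvas(rect: &Rect, canvas_width: f32, canvas_height: f32) -> bool {
    rect.x < canvas_width
        && rect.y < canvas_height
        && rect.x + rect.width > 0.0
        && rect.y + rect.height > 0.0
}

/// Drop every (rect, command-id) pair that lies entirely off-canvas.
fn cull(items: Vec<(Rect, u32)>, w: f32, h: f32) -> Vec<(Rect, u32)> {
    items.into_iter()
         .filter(|(rect, _)| intersects_canvas(rect, w, h))
         .collect()
}
```

Culling items that are covered up by later opaque rectangles (rather than merely off-canvas) takes more bookkeeping, but follows the same filtering shape.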

To Be Continued…

Now that we’ve got basic functionality for each stage in our rendering pipeline, it’s time to go back and fill in some of the missing features—in particular, inline layout and text rendering. Future articles may also add additional stages, like networking and scripting.

I’m going to give a short “Let’s build a browser engine!” talk at this month’s Bay Area Rust Meetup. The meetup is at 7pm tomorrow (Thursday, November 6) at Mozilla’s San Francisco office, and it will also feature talks on Servo by my fellow Servo developers. Video of the talks will be streamed live on Air Mozilla, and recordings will be published there later.

http://limpet.net/mbrubeck/2014/11/05/toy-layout-engine-7-painting.html


John O'Duinn: Chuck Rossi and Dinah McNutt keynotes at RelEng Conf 2014

Wednesday, November 5, 2014, 20:05

The same great people who brought RelEng Conf 2013 did it again earlier this year with the sold-out-wait-listed RelEng Conf 2014. Hosted at Google’s HQ campus, it was a great combination of academic and battle-hardened down-to-earth no-holds-barred industry presentations and panel sessions.

Unlike my keynote last year, this year I had no presentations, so I was able to relax and soak up the great keynote presentations by Chuck Rossi (RelEng at Facebook), and Dinah McNutt (RelEng at Google), as well as others. These are online, and well worth the watch:

Chuck Rossi’s keynote is here:

Dinah McNutt’s keynote is here:

Closing panel discussion is here:

Two years in a row of greatness from Bram Adams, Christian, Foutse, Kim, Stephany Bellomo, Akos Frohner and Boris Debic means that I’m already looking forward to RelEng Conf 2015. Watch the official conference website, follow @relengcon and book your spot immediately to avoid that sad “oh, I’m still on the wait list” feeling.

John.

http://oduinn.com/blog/2014/11/05/releng-conf-2014/


Christian Heilmann: Taking a break

Wednesday, November 5, 2014, 19:02

sleeping-red-panda

Four years ago I announced that I would join Mozilla as principal evangelist, and I was the happiest person alive. I exclaimed that I wanted Mozilla to be the “Switzerland of HTML5” and an independent player in the great browser wars around the early days of this new technology revolution. I also announced that I was excited to use my flat more, as I could work from home.

Now I am here and I have hardly had the chance to do so as I was busy getting from event to event, training to training and meetup to meetup. And whilst this is exciting it is also superbly taxing. It is time to lean back a bit, relax and have some me time.

I feel the first signs of burnout and I think it is very important to not let a fast and seemingly glamourous lifestyle on the road as an official spokesperson get in the way of finding peace and tranquility. I spoke in my last TEDx talk about getting too excited about one’s own online personality and living for that one and how dangerous that is.

And here I am finding myself being excited about Chris on the road and forgot about Chris who just lets go and leaves things to sort themselves out.

This is why I am taking a break from Mozilla. I am going on a sabbatical for a while to be a spectator, watching the rewards of all the work we put in over the last years. Firefox’s 10th anniversary is coming and great things are afoot.

I think we’ve shown that we can be the “Switzerland of HTML5” and it is time for me to have some time for myself and see what my colleagues can do. That’s why I am stepping down as “the Mozilla guy” and will be plain Chris for a while.

I want to thank all my colleagues over the years for the great time I had. It is amazing to see how many dedicated and gifted people can work together and create something open and beautiful – even in traditionally very closed environments like the mobile space.

I will of course be around and you will be able to meet me at selected events and online. People in London and Stockholm will also see more of me. I will only take it slower from now on until the new year and represent myself and not the large, amazing and wonderful world that is Mozilla. As was stated at one of our summits: it is one Mozilla and many voices. And I will lean back, listen, and enjoy.

http://christianheilmann.com/2014/11/05/taking-a-break/


Tantek Celik: My First Year at November Project

Wednesday, November 5, 2014, 06:00

Just over a year ago I went to my first November Project San Francisco (NPSF) free workout. I'm not exactly sure why I chose that particular morning of 2013-10-30 to show up but a year later I'm very glad I did. It's the biggest physical fitness change I've made since I first started running in January 2011.

Newbie

Seeing #NPSF chalkings in Golden Gate Park late summer of 2013 and especially meeting Sam Livermore and reading her enthusiastic posts had made me want to check it out. Maybe I decided on a whim the night before.

Confession: I took a bus to my first November Project and it wasn't the only time. I woke up to a 6am alarm, made it to Haight & Masonic by 6:20, realized I wouldn't make it on time, and hopped on the 71 bus that was pulling up. Took it just a few blocks to Haight & Scott then jogged 2 blocks up to the park and ran out of breath on that slight incline. More on that later.

I hiked up to the middle of Alamo Square, barely in time for introductions in the predawn darkness (just-before-PDT-to-PST-changeover). Standing on a rock on the edge of a circle of grass, dressed in a full-body penguin suit, NPSF founder and leader Laura McCloskey told us to hug someone we didn't know, and then introduce ourselves.

Betny Townsend cheerily hugged me as a newbie, the first person I met at November Project. I saw Sam Livermore too. The open kindness of strangers and a familiar face was enough to make a strong positive impression. This was a workout group like no other.

Laura explained the workout, which turned out to be a PR (personal record) Wednesday workout (as I've blogged previously, except thankfully in 2013 without the first lap around Alamo Square). It took me ~45-50 minutes by my watch and pretty much destroyed me. Exhausted and humbled I walked home.

It was way out of my league.

Yearbook Photos & The Huffman

Two weeks later I noticed NPSF was taking yearbook photos so I decided to try it one more time. Same morning timing, took the bus again, ran out of breath again.

November Project participants launch into their first run of the workout.

That's me in the back near the left, with the white cap, red t-shirt, and white shorts, starting my second NPSF workout.

This time we did what Laura called the "Huffman" partner workout, named after its inventor, Jessica Huffman. One person continuously does an exercise like pushups / sit-ups / lunges while the other runs a short downhill/uphill loop in the park as fast as they can, then they tag-off and swap places. We alternated for ~25 minutes working our way through two sets of four exercises if I remember correctly.

Laura had us partner up with someone we didn't know, and that was how I met Erin Hallett, who also warmly welcomed me. I was starting to understand what NP was about. Partner workouts are very different, especially the Huffman. There's something about knowing that your partner is doing exercises non-stop while you're running that makes you push yourself particularly hard, because you don't want to keep them waiting. And after we finished our sets, we did a final lap around Alamo Square and lined up for yearbook photos.

These photos turned out amazing. Rebecca Daniels photographed us in our fiercest post-workout faces, edited them and posted epic black and white headshots of everyone that showed up that day. I'm still using mine on my site and other sites too.

(Reminder: NovemberProject 2014 Yearbook Photos Are Tomorrow!)

Impressed and Scared By Hills

One of the great things about NPSF is all the work the organizers put into not only the workouts themselves, but in documenting them, with group photos and blog posts. When the photos of the hills workouts started showing up, the incredible vistas, the small group of super athletes that participated, it was impressive and inspiring. I knew I could never do that.

I can't run hills. When I started November Project, on all our runs around Alamo Square, downhill: no problem; uphill: I'd jog a few steps, and then have to walk the rest. Hills are scary because not being able to breathe is scary.

Why I Ran Out Of Breath

As embarrassed as I feel admitting to having taken the bus a few times to November Project, that's nothing compared to what I've told very few people about, which is that I grew up with asthma, and still wrestle with exertion induced asthma. In short that means if I start running from a cold start, after a dozen or so steps, my lungs feel anxious, my bronchial tubes start constricting, eventually each breath makes a louder audible wheezing sound, and I have to stop while I can still breathe standing up.

That's why I ran out of breath after two blocks up a slight incline. I live on a reasonably steep street and could not run half a block up without having an attack. The way asthma attacks work for most people, either you have to get medication (i.e. use an inhaler), or you might also be able to rest, calm yourself down (if you've practiced various techniques beforehand), and recover in about 15-30 minutes.

I tend to be fairly stubborn. I also hate admitting to weakness. There's an element of shame to it (even if there shouldn't be), and there's also an element of hey, everybody has issues they're dealing with, mine aren't special, don't look for any sympathy, just do your best. I also refused to run with my inhaler, because I'd rather learn my limits, and build self-confidence within those limits. I knew I could walk home if I had to.

Secret Solo Hill Practice

Going to November Project changed this for me. After participating a couple of times and being frustrated that I (was the only one who) couldn't run up the hills in Alamo Square, I decided to try practicing running uphill by myself even if it was only 25-50 feet at a time.

When you've lived with asthma you learn to recognize what it feels like just before it happens. Hills workouts were out of the question, yet I knew if I very deliberately paced myself, breathed, and listened to that anxious feeling that builds in your lungs, I could push myself to that edge, and back down before an attack manifested. I wondered if repeatedly pushing to that edge might make a difference.

From out my door I counted every house I could run up to before I had to stop and walk. One week I made it up a few houses to the green house. Next week I made it one more to the blue one. Another week the grey one. Then the tan one. Finally I was able to jog to the top of my block, just barely without losing my breath. I stopped and cried. I had run half a block uphill. I felt almost normal.

Why Now

It's something invisible that I live with. I'm not looking for sympathy or any special consideration; perhaps just understanding, and a broader understanding that you never know what anyone is going through, personally, privately, invisibly. We all have our struggles.

I chose to write about this publicly for three reasons:

  1. Laura asked me what's my story. I couldn't tell it without this. It's part of who I am.
  2. Inspiration from Andrew and Shannon's posts of their personal stories & struggles.
  3. Most importantly, if I can help just one more person with asthma believe more in themselves then it's worth it. That they have more potential than they think they do, and to dare, to face the fear, to try, even in small steps, to find their limitations, persist, and maybe even grow beyond them.

My First NPSF Hills Workout

I kept practicing my own personal mini-hills workouts in secret. I kept running up my block, and beyond, up into Buena Vista Park, continuing my progress. Then NPSF announced a hills workout in Dolores Park on January 17th.

Less than three months from my first time at NPSF, I decided to #justshowup to hills. Despite being familiar with Dolores Park, I was scared. I didn't care. I would run what I could, then walk if I had to. Judgment be damned. But of course there was none, no judgment. Everyone was nothing but encouraging.

Yes I took the bus again that morning. It was a much smaller group than Wednesday. I met several NPSF regulars whose consistency had inspired me since I started: Josh Zipin (AKA "Zip"), Greg, Jorge, Pete Kruse, Adrienne, and more.

I ran and made it most of the way up the Church street hill from 18th to 21st streets. I think I walked the last block. Then I ran down and up again. I finished four repeats before our 25 minutes were up. Apparently I could now do hills.

November Project participants dancing at the top of Dolores Park against a backdrop of San Francisco's skyline just as dawn is breaking.

Half Marathon, Running to Hills, and Track

16 days later, emboldened by the progress I'd made at NP, I ran my first half marathon (Kaiser) in 2:22. That particular cold, wet, solitary, painful experience is a story for another blog post. Suffice it to say it's still my PR, and I've been training hard to beat it, hopefully this Sunday at the Berkeley Half.

I started going regularly to hills workouts, getting a ride, driving, carpooling, whatever it took. Finally a little over a month after that first time at hills, I ran with our "rungang" to the first NPSF Corona Heights hills workout.

A week and a half after that I braved our informal trackattack workout and couldn't even keep up on the warmup laps. Didn't care. Just kept showing up and running nearly every week, twice a week at NP, and most Tuesdays at track. In just under 5 months I finally completed a trackattack workout.

Positive Community — Just Show Up

Despite all these personal triumphs, what November Project means to me is positive community: from smiles and eager hugs, to the coast-to-coast friendships, to last-minute Sunday long runs, to our informal #nopasoparungang which now consistently gets people to NPSF at least twice a week.

My friend, one of the first people I met at NPSF, Natalie O'Connor asked me why I run.

I told her, I run because I can. Every time I walk outside in my running clothes, I know I've broken through limitations I thought I had, thanks to a supportive positive community like no other.

Tantek holding up the NPSF positivity award backlit by the rising sun. Selfie with the NPSF positivity award and NP_NYC. Selfie with the NPSF positivity award and NP_BOS. Selfie with the NPSF positivity award and NP_LAX. The NPSF positivity award and Yoda statue.

http://tantek.com/2014/308/b2/my-first-year-november-project


Amy Tsay: Add-on and App Reviewer Meetup at MozFest 2014

Wednesday, November 5, 2014, 01:10

Every app submitted to Firefox Marketplace and every add-on submitted to AMO is reviewed by a person; 60-80% of the time, that person is a volunteer.

Each year, the AMMO team endeavors to meet a few of the top volunteer reviewers in person to talk about the past year, get feedback, plan for the coming year, and have a pint or two. This year, we arranged a meetup at MozFest in London.

welcome dinner

We kicked things off with a welcome dinner and drinks, then spent the following day at the London MozSpace having more in-depth discussions. The group was extremely diverse—ten countries were represented among the 12 people who attended. Some reviewers joined in the past year, others have been reviewing for nearly a decade.

One piece of feedback that really stuck with me was that many people think being an app reviewer is the only way to contribute to Marketplace. This means we need to do more to get the word out about the myriad ways one can get involved. But it also means the reviewer program is strong, and widely known, and these are reasons themselves for celebration.

Reviewer Meetup

Notes from the meeting are available on this etherpad. We got some great feedback, and I am really looking forward to tackling some of the action items in the coming weeks.

Afterwards, we attended the MozFest science fair kick-off together, then dispersed over the weekend to explore the event, letting serendipity and our own unique interests guide us.

My interests and chance meetings guided me to make an LED robot, learn how to pick locks, and have fascinating discussions about communities and web literacy. Though I was weakened by flu at the conclusion of the weekend, I came away feeling invigorated by the spirit of Mozilla.


http://mozamy.wordpress.com/2014/11/04/add-on-and-app-reviewer-meetup-at-mozfest-2014/


David Burns: WebDriver Face To Face - TPAC 2014

Wednesday, November 5, 2014, 00:57

Last week was the 2014 W3C TPAC. For those that don't know, TPAC is a conference where a number of W3C working groups get together in the same venue. This allows for a great amount of discussions between groups and also allows people to see what is coming in the future of the web.

The WebDriver Working Group was part of TPAC this year, like previous years, and there were some really great discussions.

The main topics that were discussed were:

  • We are going to move more discussions to the mailing list, so that people don't have to wait for a face-to-face meeting to discuss things
  • The data model of how data is sent over the wire between the remote and local ends
  • Attributes versus properties - this old chestnut
  • An approach to moving some of the manual tests used for W3C specs to automated ones with WebDriver - this is exciting

The meeting minutes for Thursday and Friday are available online.

http://www.theautomatedtester.co.uk/blog/2014/webdriver-face-to-face-tpac-2014.html


Tantek Celik: #NovemberProject 2014 Yearbook Photos Tomorrow! #justshowup

Wednesday, November 5, 2014, 00:44

If you've come to any NovemberProject anywhere, make plans to be at the nearest one this Wednesday to get your yearbook photo. You've earned it.

If you're a runner of any kind or have been curious about NovemberProject, check it out this Wednesday and get your photo taken. Join us.

If I've ever bugged you to come to NovemberProject, and you haven't yet, this is the day to do it. Trust me.

Grid of November Project San Francisco 2013 Yearbook Photos

I went to last year's Yearbook Photos day, had a great time (more on that in another post very soon!), and got a great photo that I'm still using for my site icon and profile photo.

Facebook events - all Wednesday morning at ~6:15am:

Plus thirteen more cities (Check out November-Project.com for the full list). I'll add more direct city-event links as I find them. It looks like there's going to be a beautiful sunrise.

http://tantek.com/2014/308/b1/novemberproject-yearbook-photos-justshowup


Michael Kaply: New Features in the CCK2

Tuesday, November 4, 2014, 22:58

If you haven't checked out the CCK2 lately, you should.

One of the coolest features I've added recently is the ability to hide things on any arbitrary window that is opened by Firefox. For instance, if you want to hide the bottom box in the about dialog, you can add "#aboutDialog #bottomBox" to the hidden UI section. You can also use it to hide arbitrary content in about:addons. I've also done major work on the clipboard capabilities API, so it should be more robust. There have also been quite a few bug fixes. You can keep up on all the latest changes here.

Download the latest CCK2 by clicking here.

If you want to request a feature, you can do so on the CCK2 support site. Priority for any requests is given to paying customers.

And if the CCK2 saves you time and money, please consider getting a support contract. It ensures that I'll be able to keep working on the CCK2.

http://mike.kaply.com/2014/11/04/new-features-in-the-cck2/


Mozilla Release Management Team: Firefox 34 beta5 to beta6

Tuesday, November 4, 2014, 21:04

  • 42 changesets
  • 98 files changed
  • 2460 insertions
  • 385 deletions

Extension / Occurrences:

js: 18
cpp: 16
h: 15
jsm: 7
java: 6
py: 3
ini: 3
txt: 2
in: 2
xml: 1
xhtml: 1
sjs: 1
mm: 1
json: 1
idl: 1
css: 1
cc: 1
c: 1
build: 1

Module / Occurrences:

toolkit: 19
content: 11
mobile: 10
services: 9
browser: 9
js: 5
layout: 4
dom: 4
ipc: 3
gfx: 3
widget: 1
testing: 1
modules: 1
media: 1

List of changesets:

Richard Newman: Bug 1090385 - More robust handling of external intents. r=snorp, a=sledru - 65515de095b8
Mark Finkle: Bug 895775 - Correctly handle lifecycle in GeckoNetworkManager. r=rnewman a=lmandel - ae19708887ef
Richard Newman: Bug 1090385 - Follow-up: fix GeckoAppShell. a=bustage - 0dd6a59ed6a5
Richard Newman: Bug 1090385 - Follow-up: fix GeckoApp. a=bustage - 693b7d0c9b36
Richard Newman: Bug 1090385 - Follow-up: fix yet more bustage in GeckoApp. a=bustage - 72bdce765298
Andrew McCreight: Bug 1089833 - Delete reply in MessageChannel::DispatchSyncMessage and DispatchInterruptMessage if channel isn't connected. r=billm, a=lsblakk - 926c3f3f1f3a
Randell Jesup: Bug 1087605 - Don't try to set the priority of the CCApp thread (which doesn't exist). r=bwc, a=lsblakk - e35984b580fb
Boris Zbarsky: Bug 1087801 - Don't assume the global is a Window in the DOM CSS object. r=bholley, a=lsblakk - ac59c74b9386
Jonathan Kew: Bug 1090869 - Don't collect output glyphs when checking for features involving . r=jdaggett, a=lsblakk - 17d3079dc41f
Mike Hommey: Bug 1091118 - Also export RANLIB to unbust android builds on mac. r=gps, a=lmandel - 12a8a2d96453
Robert O'Callahan: Bug 1052900 - Restore -moz-win-exclude-glass handling to the way it worked before. r=tn, a=lsblakk - 73905ff57286
Drew Willcoxon: Bug 1083167 - Fix FormHistory error in ContentSearch by not passing an empty string to FormHistory.update. r=MattN, a=lmandel - cadb1112c8fb
Doug Turner: Bug 1073134 - Be more permissive on OSX 10.9.5 when parental control is on. r=jdm, a=lmandel - 340cfd2affa7
J. Ryan Stinnett: Bug 1090450 - Properly check add-on update state during update interval. r=Mossop, a=lmandel - 06d2090db817
Michael Wu: Bug 1081926 - Fallback on a simple image lookup when the normal lookup fails. r=mattwoodrow, a=lmandel - 546105a6d5c0
Jonathan Watt: Bug 1076910 - Don't use gfxPlatform::GetPlatform() off the main thread. r=Bas, a=sledru - 8977f5061773
Jonathan Watt: Bug 1076910 - Add some error checks to gfxUtils::EncodeSourceSurface. r=Bas, a=sledru - 3c329a6fd0cb
Brian Hackett: Bug 1084280 - Use a local RegExpStack when running the RegExp interpreter. r=jandem, a=dveditz - 631a73cdbc91
Brian Hackett: Bug 1077514 - Execute regexps in the bytecode interpreter if the initial JIT execution was interrupted. r=jandem, a=lmandel - 5238acab8176
Mike de Boer: Bug 1089011 - Make sure to only import contacts that are part of the default contacts group. r=MattN a=lmandel - 8b1b897ca39c
Bas Schouten: Bug 1064864 - Ensure the copying bounds are sane. r=jrmuizel a=sylvestre - d4ad7d727dd6
Georg Fritzsche: Bug 1079341 - Missing yield on async makeDir in FHR state init. r=gps, a=lmandel - d9b49c7ee7fe
Georg Fritzsche: Bug 1064333 - Migrate the FHR client id to the datareporting service. r=gps, a=lmandel - 8fbc0d8bb83d
Georg Fritzsche: Bug 1064333 - Add the stable client id to the telemetry ping. r=froydnj, a=lmandel - ad6d502a38c9
Georg Fritzsche: Bug 1064333 - Only add the stable user id to the ping when FHR upload is enabled. r=froydnj, a=lmandel - ec67776fc5e3
Georg Fritzsche: Bug 1064333 - Init TelemetryPing in tests even if MOZILLA_OFFICIAL is not set. r=froydnj, a=lmandel - efb3c956dfef
Georg Fritzsche: Bug 1086252 - Show stable client id in about:telemetry. r=froydnj, a=lmandel - bda711062d08
Georg Fritzsche: Bug 1069873 - Add counter histogram type. r=froydnj, ba=lmandel - a4db8f39f372
Georg Fritzsche: Bug 1069953 - Part 1: Make min/max/bucket_count optional for nsITelemetry newHistogram(). r=froydnj, ba=lmandel - 56b3e37832b9
Georg Fritzsche: Bug 1069874 - Add keyed histogram types. r=froydnj, ba=lmandel - 3fe1e43c97b8
Georg Fritzsche: Bug 1092219 - Fix keyedHistogram.add() passing the wrong argument to Histogram::Add(). r=froydnj, a=lmandel - aa11e337b8e3
Georg Fritzsche: Bug 1092176 - Add keyed histogram section in about:telemetry. r=froydnj, a=lmandel - e6db2f014e26
Georg Fritzsche: Bug 1089670 - Record searches in Telemetry. r=bwinton, ba=lmandel - 1ca39da5df9d
Paul Adenot: Bug 1085356 - Better handling of OSX audio output devices switching when SourceMediaStream are present in the MSG. r=jesup a=lmandel - 80b1fc2042df
Randell Jesup: Bug 1085356 - Fix Mac audio output changes on older/different macs. r=padenot a=lmandel - ddc951a77894
Randell Jesup: Bug 1090415 - Whitelist room.co for screensharing. rs=mreavy a=lmandel - bf50cf09506c
Randell Jesup: Bug 1091031 - Whitelist talky.io & beta.talky.io for screensharing. rs=mreavy a=lmandel - 08876d848dcf
Randell Jesup: Bug 1085356 - Bustage fix (missing include from merge). r=bustage a=bustage - f6a4136fe0af
Nick Alexander: Bug 1068051 - Add high-res device drawables. r=trivial, a=lmandel - 9f2160ac83d5
Richard Newman: Bug 1084522 - Don't redefine layout attribute in IconTabWidget. r=lucasr, a=lmandel - 66297e95dc47
Richard Newman: Bug 1084516 - Wrap Build.CPU_ABI access in deprecation annotation. r=snorp, a=lmandel - a02835abdd00
Olli Pettay: Bug 1087633 - Filter out XPConnect wrapped input streams. r=bz, a=lmandel - 72938afdf993

http://release.mozilla.org/statistics/34/2014/11/04/fx-34-b5-to-b6.html


Robert Nyman: Hyperlink Helper package for the Atom editor

Tuesday, November 4, 2014, 16:48

Knowing your editor is important, and if it’s open source and you can add functionality, even better! Therefore, I dug into Atom from GitHub (which is open source!) to add something I like: a Hyperlink Helper.

As mentioned in my recent posts on Vim and favorite editors, I think it’s great for any developer to experiment and tweak their editor(s) of choice.

Using Atom a bit on the side, I thought I’d look into how to create a package for it. It offers a lot of them already, browsable through the packages web site or from within the app itself.

Introducing the Hyperlink Helper

Back in the day when I started using TextMate, one of my favorite functions was a keyboard shortcut to wrap the selected text with an anchor element and set its href attribute to what’s in the system clipboard. Then when I moved to Sublime Text, someone created the Hyperlink Helper package for Sublime Text.

So, now with Atom, the next natural evolution would be for me to create one for it. :-)

How to install it

Go to Settings in Atom > Packages and search for Hyperlink Helper.

Functionality

  • Wraps selected text with an anchor element, e.g. Hello becomes <a href="...">Hello</a>
  • Sets the href attribute of that anchor to what’s currently in the system clipboard
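The two behaviors above boil down to simple string building. Here is a minimal sketch in plain JavaScript — a hypothetical helper, not the actual package source; in Atom the selection would come from the active editor and the URL from `atom.clipboard.read()`:

```javascript
// Wrap selected text in an anchor element whose href is the
// clipboard contents — a simplified model of what the package does.
function wrapWithHyperlink(selectedText, clipboardText) {
  return '<a href="' + clipboardText + '">' + selectedText + '</a>';
}

// Example: selecting "Hello" with a URL in the clipboard
console.log(wrapWithHyperlink('Hello', 'https://example.com'));
// → <a href="https://example.com">Hello</a>
```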

How to trigger it

  1. Select some text in the current document
  2. Hit its keyboard shortcut:
    Ctrl + Cmd + L (Mac)
    Ctrl + Alt + L (Windows/Linux)
    or through Packages > hyperlink-helper > link in the Atom top menu

Source code and improvements

The source code is on GitHub – feel free to issue pull requests with improvements!

Getting started building packages for Atom

Does all this sound interesting and you want to build your own package? Then I recommend reading Creating Packages and Create Your First Package in the Atom docs. Atom also offers a way to generate a template package for you through the Package Generator: Generate Package in the Command Palette.

This is triggered with:
Cmd + Shift + p (Mac)
Ctrl + Shift + p (Windows/Linux)

Developer pro tip: in Atom, hit Cmd + . (Mac)/Ctrl + . (Windows/Linux) to show the Keybinding Resolver: great for seeing which command is connected with any keyboard shortcut combination you can come up with! (found via What are the keyboard shortcuts of the Atom editor? – keyboard shortcuts in Atom can also be found via Settings > Keybindings)

Bonus: Hyperlink Helper for Vim

Vim user and want a Hyperlink Helper? Just add this to your .vimrc file and call it via Space + l in Visual mode:

" Wrap the visual selection in an anchor whose href is the clipboard contents
vmap <Space>l c<a href="<C-r>+"><C-r>"</a><Esc>

http://feedproxy.google.com/~r/robertnyman/~3/0byl7aoSQLU/


