Planet Mozilla - https://planet.mozilla.org/



Source: http://planet.mozilla.org/.
This feed is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The syndication was created automatically at the request of readers of this RSS feed.

Henrik Skupin: Firefox Automation report – week 29/30 2014

Monday, 08 September 2014, 22:37

In this post you can find an overview about the work happened in the Firefox Automation team during week 29 and 30.

Highlights

During week 29 it was time again to merge the mozmill-tests branches to support the upcoming release of Firefox 31.0. All necessary work has been handled on bug 1036881, which also included the creation of the new esr31 branch. Accordingly we also had to update our mozmill-ci system, and got the support landed on production.

The RelEng team asked us if we could help in setting up Mozmill update tests for testing the new update server aka Balrog. Henrik investigated the necessary tasks, and implemented the override-update-url feature in our tests and the mozmill-automation update script. Finally he was able to release mozmill-automation 2.6.0.2 two hours before heading out for 2 weeks of vacation. That means Mozmill CI could be used to test updates for the new update server.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 29 and week 30.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 29 and week 30.

http://www.hskupin.info/2014/09/08/firefox-automation-report-week-29-30-2014/


Benoit Girard: B2G Performance Polish: Unlock Animation (Part 2)

Monday, 08 September 2014, 20:00

Our destination

Our starting point is a 200ms unlock delay and a uniform ~25 FPS animation. Our aim should be a 0ms unlock delay and a uniform 60 FPS (or whatever vsync is). The former we will minimize as much as we can; the latter is entirely possible.

Let’s talk about how we should design a lock screen animation in the optimal case. When we go and apply it in practice we often hit requirements and constraints that make it impossible to behave exactly the way we want, but let’s ignore that for a second and discuss where we want to get to.

In the ideal case we would have the lockscreen rendered offscreen to a set of GPU textures. We would have the background app ready in another set of GPU textures. These are ‘Layers’. We place the background app behind the lockscreen. When the transition begins we notify the compositor to start fading out the lockscreen. Keeping these layers around costs memory, but if we keep the right things around we can reduce or eliminate repaints entirely.

Our memory requirement is what’s required by the background app plus about one fullscreen layer for the lockscreen. This should be fine even for low-end B2G phones. Our overdraw should be about 200%-300%, which is low enough for mobile GPUs to keep up at 60 FPS/vsync.

Ideal Lockscreen Layer Tree

Now let’s look at what we hope our timeline for our Main Thread and our Compositor Thread to look like:

Ideal Unlock Timeline

We want to use Off-Main-Thread Animation to perform the fade entirely on the compositor. This will be initiated on the main thread and will require a style flush to set a CSS transform transition. If done right, we don’t expect to have to reflow or repaint any part of the page, provided we built the layer tree as shown in the first figure. Note that the style flush will contribute to the unlock delay (and so will the first composite time, which is incorrectly shown in the diagram). If we can keep that style flush plus the first composite under, say, 50ms, and each composite at 16ms or less, then we should have a smooth unlock animation.
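
To make that concrete, here is a minimal sketch of a compositor-driven fade in TypeScript/DOM terms. This is not the actual Gaia lockscreen code; the element id, the 300ms duration and the tear-down behaviour are assumptions made purely for illustration.

// Sketch only: fade out a pre-rendered lockscreen layer on the compositor.
const lockscreen = document.getElementById('lockscreen') as HTMLElement;

// Hint that the lockscreen should live in its own layer, so the fade needs
// no reflow or repaint - just one style flush followed by composites.
lockscreen.style.willChange = 'opacity, transform';
lockscreen.style.transition = 'opacity 300ms linear, transform 300ms linear';

// Called from the touch-up handler.
function unlock(): void {
  // This style change is the only main-thread work; the animation itself
  // runs as an off-main-thread (compositor) animation.
  lockscreen.style.opacity = '0';
  lockscreen.style.transform = 'translateY(-100%)';
  lockscreen.addEventListener(
    'transitionend',
    () => lockscreen.remove(), // tear the layer down only after the fade ends
    { once: true }
  );
}

After the initial style flush, each subsequent frame should only cost a composite, which is what keeps the main thread free for the rest of the animation.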

Next up let’s look at what’s actually happening in the unlock animation in practice…


http://benoitgirard.wordpress.com/2014/09/08/b2g-performance-polish-unlock-animation-part-2/


Adam Lofting: Something special within ‘Hack the snippet’

Monday, 08 September 2014, 18:56

Here are a couple of notes about ‘Hack the snippet‘ that I wanted to make sure got documented.

  1. It significantly changed people’s predisposition to Webmaker before they arrived on the site
  2. Its ‘post-interaction’ click-through-rate was equivalent to most one-click snippets

Behind these observations, something special was happening in ‘Hack the snippet’. I can’t tell you exactly what it was that produced the end effect, but the effect is worth remembering.

1. It ‘warmed people up’ to Webmaker

  • The ‘Hack the snippet’ snippet
    • was shown to the same audience (Firefox users) as eight other snippet variations we ran during the campaign
    • had the same % of users click through to the landing page
    • had the same on-site experience on webmaker.org as all the other snippet variations we tested (the same landing page, sign-up ask etc)
  • But when people who had interacted with ‘Hack the snippet’ landed on the website, they were more than three times as likely to sign up for a Webmaker account

Same audience, same engagement rate, same ask… but triple the conversion rate (most regular snippet traffic converted ~2%, ‘Hack the snippet’ traffic converted ~7%).
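As a quick sanity check on that claim, here is a tiny TypeScript sketch using the approximate rates quoted above (rounded figures from this post, not precise campaign data):

// Approximate conversion rates quoted above.
const regularSnippetRate = 0.02;   // ~2% of regular snippet visitors signed up
const hackTheSnippetRate = 0.07;   // ~7% of 'Hack the snippet' visitors signed up

const lift = hackTheSnippetRate / regularSnippetRate;
console.log(`'Hack the snippet' converted ${lift.toFixed(1)}x as well`); // prints 3.5x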

Something within that experience (and likely the overall quality of it) makes the Webmaker proposition more appealing to people who ‘hacked the snippet’. It could be one of many things: the simplicity, the guided learning, the feeling of power from editing the Firefox start page, the particular phrasing of the copy or many of the subtle design decisions. But whatever it was, it worked.

We need to keep looking for ways to recreate this.

Not everything we do going forwards needs to be a ‘Hack the snippet’ snippet (you can see how much time and effort went into that in the bug).

But when we think about these new-user experiences, we have a benchmark to compare things to. We know how much impact these things can have when all the parts align.

2. The ‘post-interaction’ CTR was as good as most one-click snippets

This is a quicker note:

  • Despite the steps involved in completing the ‘Hack the snippet’ on-page activity, the same total number of people clicked through as with a standard ‘one-click’ snippet.
  • We got the same % of the audience to engage with a learning activity and then click through to the Webmaker site as we usually get by just giving them a link directly to Webmaker
    • This defies most “best practice” about minimizing the number of clicks

Again, this doesn’t give us an immediate thing we can repeat, but it gives us a benchmark to build on.

http://feedproxy.google.com/~r/adamlofting/blog/~3/u7RaSKTvEHc/


Lucas Rocha: Introducing dspec

Monday, 08 September 2014, 17:52

With all the recent focus on baseline grids, keylines, and spacing markers from Android’s material design, I found myself wondering how I could make it easier to check the correctness of my Android UI implementation against the intended spec.

Wouldn’t it be nice if you could easily provide the spec values as input and get it rendered on top of your UI for comparison? Enter dspec, a super simple way to define UI specs that can be rendered on top of Android UIs.

Design specs can be defined either programmatically through a simple API or via JSON files. Specs can define various aspects of the baseline grid, keylines, and spacing markers such as visibility, offset, size, color, etc.
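
To make the idea concrete, here is a rough sketch of what such a declarative spec could look like, written as a TypeScript object purely for illustration. The field names below are invented and may not match dspec’s actual JSON schema; treat every key as an assumption.

// Illustrative only: an invented shape for a design spec, not dspec's real schema.
interface KeylineSpec {
  offset: number;                              // dp from the reference point
  from: 'LEFT' | 'RIGHT' | 'VERTICAL_CENTER';  // predefined reference point
}

interface DesignSpecConfig {
  baselineGridVisible: boolean;
  baselineGridCellSize: number;                // dp
  keylines: KeylineSpec[];
  spacingMarkers: KeylineSpec[];
}

const phoneSpec: DesignSpecConfig = {
  baselineGridVisible: true,
  baselineGridCellSize: 8,
  keylines: [
    { offset: 16, from: 'LEFT' },
    { offset: 72, from: 'LEFT' },
    { offset: 16, from: 'RIGHT' },
  ],
  spacingMarkers: [{ offset: 16, from: 'VERTICAL_CENTER' }],
};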

Baseline grid, keylines, and spacing markers in action.

Given the responsive nature of Android UIs, the keylines and spacing markers are positioned in relation to predefined reference points (e.g. left, right, vertical center, etc) instead of absolute offsets.

The JSON files are Android resources which means you can easily adapt the spec according to different form factors e.g. different specs for phones and tablets. The JSON specs provide a simple way for designers to communicate their intent in a computer-readable way.

You can integrate a DesignSpec with your custom views by drawing it in your View’s onDraw(Canvas) method. But the simplest way to draw a spec on top of a view is to enclose it in a DesignSpecFrameLayout, which can take a designSpec XML attribute pointing to the spec resource. For example:


    ...

I can’t wait to start using dspec in some of the new UI work we’re doing in Firefox for Android now. I hope you find it useful too. The code is available on GitHub. As usual, testing and fixes are very welcome. Enjoy!

http://lucasr.org/2014/09/08/introducing-dspec/


Doug Belshaw: Web Literacy: More than just coding; an enabling education for our times [EdTech Digest]

Monday, 08 September 2014, 17:06


Last week, my colleague Lainie Decoursy got in touch wondering if I could write a piece about web literacy. It was a pretty tight turnaround, but given that pretty much all I think about during my working hours is web literacy, it wasn’t too big an ask!

The result is a piece in EdTech Digest entitled Web Literacy: More than just coding; an enabling education for our times. It’s an overview of Mozilla’s work around Webmaker and, although most of the words are mine, I have to credit my colleagues for some useful edits.

Click here to read the post

I’ve closed comments here to encourage you to add your thoughts on the original post.

http://dougbelshaw.com/blog/2014/09/08/web-literacy-more-than-just-coding-an-enabling-education-for-our-times-edtech-digest/


Robert O'Callahan: rr 2.0 Released

Monday, 08 September 2014, 17:03

Thanks to the hard work of our contributors, rr 2.0 has been released. It has many improvements over our 1.0 release:

  • gdb's checkpoint, restart and delete checkpoint commands are supported.
    These are implemented using new infrastructure in rr 2.0 for fast cloning of replay sessions.
  • You can now run debuggee functions from gdb during replay.
    This is a big feature for rr, since normally a record-and-replay debugger will only replay what happened during recording --- and of course, function calls from gdb did not happen during recording. So under the hood, rr 2.0 introduces "diversion sessions", which run arbitrary code instead of following a replay. When you run a debuggee function from gdb, we clone the current replay session to a diversion session, run your requested function, then destroy the diversion and resume the replay.
  • Issues involving Haswell have been fixed. rr now runs reliably on Intel CPU families from Westmere to Haswell.
  • Support for running rr in a VM has been improved. Due to a VMWare bug, rr is not as reliable in VMWare guests as in other configurations, but in practice it still works well.
  • Trace compression has been implemented, with compression ratios of 5-40x depending on workload, dramatically reducing rr's storage and I/O usage.
  • Many many bugs have been fixed to improve reliability and enable rr to handle more diverse workloads.

All the features normally available from gdb now work with rr, making this an important milestone.

The ability to run debuggee functions makes it much easier to use rr to debug Firefox. For example you can dump DOM, frame and layer trees at any point during replay. You can debug Javascript to some extent by calling JS engine helpers such as DumpJSStack(). Some Mozilla developers have successfully used rr to fix real bugs. I use it for most of my Gecko debugging --- the first of my research projects that I've actually wanted to use :-).

Stephen Kitt has packaged rr for Debian.

Considerable progress has been made towards x86-64 support, but it's not ready yet. We expect x86-64 support to be the next milestone.

I recorded a screencast showing a quick demo of rr on Firefox:

http://robert.ocallahan.org/2014/09/rr-20-released.html


Robert O'Callahan: VMWare CPUID Conditional Branch Performance Counter Bug

Monday, 08 September 2014, 15:51

This post will be uninteresting to almost everyone. I'm putting it out as a matter of record; maybe someone will find it useful.

While getting rr working in VMWare guests, we ran into a tricky little bug. Typical usage of CPUID, e.g. to detect SSE2 support, looks like this pseudocode:

CPUID(0); // get maximum supported CPUID subfunction M
if (S <= M) {
    CPUID(S); // execute subfunction S
}

Thus, CPUID calls often occur in pairs with a conditional branch between them. The bug is that in a VMWare guest, when we count the number of conditional branches executed, the conditional branch between those two CPUIDs is usually (but not always) omitted from the count. We assume this is a VMWare bug because it does not happen on the same hardware outside of a VM, and it does not happen in a KVM-based VM.

Experiments show that some code sequences trigger the bug and other equivalent sequences don't. Single-stepping and other kinds of interference suppress the bug. My best guess is that VMWare optimizes some forms of the above code, perhaps to reduce the number of VM exits, and in so doing skips execution of the conditional branch, without taking into account that this might perturb performance counter values. Admittedly, it's unusual for software to rely on precise performance counter values the way rr does.

This sucks for rr because rr relies on these counts being accurate. We sometimes find that replay diverges because one of these conditional branches was not counted during recording but is counted during replay. (The other way around is possible too, but less frequently observed.) We have some heuristics and workarounds, but it's difficult to fully work around without adding significant complexity and/or slowdown.

The bug is easily reproduced: just use rr to record and replay anything simple. When replaying, rr automatically detects the presence of the bug and prints a warning on the console:

rr: Warning: You appear to be running in a VMWare guest with a bug
where a conditional branch instruction between two CPUID instructions
sometimes fails to be counted by the conditional branch performance
counter. Partial workarounds have been enabled but replay may diverge.
Consider running rr not in a VMWare guest.

Steps forward:

  • Find a way to report this bug to VMWare.
  • Linux hosts can run rr in KVM-based VMs or directly on the host. Xen VMs might work too.
  • Parallels apparently supports PMU virtualization now; if Parallels doesn't have this bug, it might be the best way to run rr on a Mac or Windows host.
  • We can add a "careful mode" that would probably almost always replay successfully, albeit with additional overhead.
  • The bug is less likely to show up once rr supports x86-64. At least in Firefox, CPUID instructions are most commonly used to detect the presence of SSE2, which is unnecessary on x86-64.
  • In practice, recording Firefox in VMWare generally works well without hitting this bug, so maybe we don't need to invest a lot in fixing it.

http://robert.ocallahan.org/2014/09/vmware-cpuid-conditional-branch.html


Daniel Stenberg: Video perhaps?

Monday, 08 September 2014, 11:06

I decided to try to do a short video about my current work and make it available for you all. I’ll try to keep it short (5-7 minutes) and I’m certainly no pro at this, but I will try to make a weekly one for a while and see if it turns out to be any fun. I’m going to read your comments and responses to this very eagerly, and they will help me decide how to proceed with this experiment.

Enjoy.

http://daniel.haxx.se/blog/2014/09/08/video-perhaps/


Jordan Lund: This Week In Releng - Sept 1st, 2014

Monday, 08 September 2014, 08:58

Major Highlights:

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

http://jordan-lund.ghost.io/this-week-in-releng-sept-1st-2014/


Brian Birtles: Animations on Fire @ Graphical Web 2014

Monday, 08 September 2014, 05:58

Just recently I had the chance to talk about authoring animations of CSS/SVG for better performance at The Graphical Web 2014. I thought I’d put up the slides here in case they’re useful to others.

In the rare chance that you’re reading this blog directly or the syndicator didn’t eat this bit, you can view the slides right here:

http://brian.sol1.net/svg/2014/09/08/animations-on-fire-graphical-web-2014/


Lukas Blakk: About to do some major learning

Sunday, 07 September 2014, 19:30

Tomorrow morning the first ever Ascend Project kicks off in Portland, OR.  I just completed a month-long vacation where we drove from San Francisco out to the Georgian Bay, Ontario (with a few stops along the way including playing hockey in the Cleveland Gay Games) and back again through the top of the US until we arrived here in Portland.  I’m staying in this city for 6 weeks, will be going in to the office *every* day, and doing everything I can to guide & mentor 20 people in their learning on becoming open source contributors.

Going to do my best to write about the experience as this one is all about learning what works and what doesn’t in order to iterate and improve the next pilot which will take place in New Orleans in 2015. It’s been almost a year since I first proposed this plan and got the OK to go for it.  See http://ascendproject.org for posts on the process so far and for updates by the participants.

http://lukasblakk.com/about-to-do-some-major-learning/


Hannah Kane: On Retrospectives

Sunday, 07 September 2014, 06:29

Last week I convened a small, cross-functional team for a half hour debrief of the work we’d done together on last month’s Net Neutrality trainings and tweetchat. The trainings and tweetchat were largely successful efforts, but this debrief was to discuss the process of working together.

Here’s how we did it:

  • First I sent around an etherpad with some questions. There was a section for populating a timeline of the entire process from conception to completion. And there were sections for capturing what worked well, and what people felt could be improved upon.
  • As people added their thoughts to the etherpad, it became clear to me that a Vidyo chat would be useful. There were differences of opinion and indications of tension that I felt ought to be surfaced and discussed.
  • Everyone took 30 minutes out of their busy schedules to meet over Vidyo, which I totally appreciated! I started the meeting by stating my goal which was to reach a shared agreement about two or three concrete things we would try to do more of or less of in the future.
  • I would have loved to have had a full hour, as I felt we were just starting to surface the real issues near the end of the call. It felt a little strange to have to cut off the conversation right when we were getting into it.
  • In the short time we had, we were able to touch on what I think were probably the most salient points from the pad, and everyone had a chance to speak. We also identified four concrete things to do differently in the future. By those measures, I think the debrief was successful.
  • Some additional takeaways were shared via email after the call, and I think everyone is committed to making this the start of an ongoing process of continuous improvement.

I called this a “debrief” because it was a relatively unstructured conversation looking back at the end of a project. In my mind, a debrief is one flavor of a larger category of what I’d call “retrospecting behaviors.”

Here are some thoughts about what makes a good retrospective:

You don’t need to save retrospecting for the end. Retrospectives are different from post-mortems in this way. You can retrospect at any point during a project, and, in fact, for teams that work together consistently, retrospectives can be baked into your regular working rhythm.

First things first: start with a neutral timeline. It’s amazing how much we can forget. Spend a couple of minutes re-creating an objective timeline of what happened leading up to the retrospective. Use calendars, emails, blog posts, etc. to re-create the major milestones that occurred.

Bring data. If possible, the facilitator should bring data or solicit data from the team. Data can include so many things! Here are just a few examples:

  • Quantitative and qualitative measures of success.
  • Data about how long things took to finish.
  • Subjective experiences: each team member’s high point and low point. One word or phrase from each team member describing their experience.

Be ready for the awkward. For a breakthrough to happen, you often have to go through something uncomfortable first. No one should feel unsafe or attacked, of course, but transformation happens when people have the courage to speak and hear painful truths. Not every retrospective will feel like a group therapy session, but surfacing tensions in productive, solution-oriented ways is good for teams.

Despite their name, retrospectives are about the future. The outcome of any retrospective (whether it’s a team meeting, or 5 minutes of solo thinking time at your desk) should be at least one specific thing you’d like to do differently in the future. Make it visible to you and your teammates.

A “Do Differently” is a specific and immediately actionable experiment. Commit to trying something different just for a week. Because the risk is low (it’s just a week!), you can try something pretty dramatic. Choose something you can start right away.  “Let’s try using Trello for a week” or “Let’s see if having a 10-minute check-in each morning reduces confusion.”

Retrospectives often also inspire one-time actions and new rules. One-time actions are things like, “We need to do a CRM training for the team” or “We should update our list of vendors because no one knew who to call when we ran into trouble.” New rules are things like, “We should start every project with a kick-off meeting, no matter how small the project is.”

Both one-time actions and new rules are important, and should be captured and assigned a responsible person. But they are not the same as “Do Differentlys” which are meant to create a culture of experimentation that is necessary for continuous improvement.

It’s not about how well you followed a process; it’s about how well the process is serving the goals. This is another difference between retrospectives and post-mortems. Whereas in a post-mortem, you might be discussing what you did “right” and “wrong” (i.e. how well you adhered to some agreed upon rules or norms), in a retrospective you discuss what “worked” and “didn’t work” (which might lead to changing those norms).

Celebrate. Retrospectives are occasions to recognize the good as well as the bad. I won’t lie. Some of my favorite retrospectives involved cake.

What would you add to or change about the above list?


http://hannahgrams.com/2014/09/06/on-retrospectives/


Hal Wine: New Hg Server Status Page

Saturday, 06 September 2014, 11:00


Just a quick note to let folks know that the Developer Services team continues to make improvements on Mozilla’s Mercurial server. We’ve set up a status page to make it easier to check on current status.

As we continue to improve monitoring and status displays, you’ll always find the “latest and greatest” on this page. And we’ll keep the page updated with recent improvements to the system. We hope this page will become your first stop whenever you have questions about our Mercurial server.

http://dtor.com/halfire/2014/09/06/hg_server_update.html


Wil Clouser: Retiring AMO’s Landfill

Saturday, 06 September 2014, 02:08

A few years ago we deployed a landfill for AMO – a place where people could play around with the site without affecting our production install and developers could easily get some data to import into their local development environment.

I think the idea was sound (it was modeled after Bugzilla’s landfill) and it was moderately successful, but it never grew like I had hoped, and even after being available for so long it had very little usable data in it. It could help you get a site running, but if you wanted to, say, test pagination, you’d still need to create enough objects to actually have more than one page.

A broader and more scalable alternative is a script which can make as many or as few objects in the database as you’d like. Want 500 apps? No problem. Want 10000 apps in one category so you can test how categories scale? Sure. Want those 10000 apps owned by a single user? That should be as easy as running a command line script.
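
For illustration, here is a sketch of what such a generator could look like, written as a standalone TypeScript/Node script rather than AMO’s actual Django tooling; the command-line arguments, field names and output file are all invented.

// Hypothetical data generator: emit N fake app records as JSON for a
// seeding script to import. Fields, defaults and file name are invented.
import { writeFileSync } from 'node:fs';

interface FakeApp {
  name: string;
  category: string;
  owner: string;
}

function generateApps(count: number, category: string, owner: string): FakeApp[] {
  return Array.from({ length: count }, (_, i) => ({
    name: `Generated App ${i + 1}`,
    category,
    owner,
  }));
}

// Usage, e.g.: node generate.js 10000 games one-user@example.com
const [countArg, category = 'misc', owner = 'dev@example.com'] = process.argv.slice(2);
const apps = generateApps(Number(countArg) || 500, category, owner);
writeFileSync('fake-apps.json', JSON.stringify(apps, null, 2));
console.log(`Wrote ${apps.length} fake app records to fake-apps.json`);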

That’s the theory anyway. The idea is being explored on the wiki for the Marketplace (and AMO developers are also in the discussion). If you’re interested in being involved or seeing our progress, watch that wiki page or join #marketplace on irc.mozilla.org.

http://micropipes.com/blog/2014/09/05/retiring-amos-landfill/


Eric Shepherd: The Sheppy Report: September 5, 2014

Saturday, 06 September 2014, 01:17

First, a personal note:

Holy frickity-frak! It’s September!?

Okay, back to business. My work this week was all over the place. Got tons done but, of course, not what I meant to do. That said, I did actually make progress on the stuff I’d planned to do this week, so that’s something, anyway.

I love this job. The fact that I start my week expecting one awesome thing, and find myself doing two totally different awesome things instead, is pretty freaking cool.

What I did this week

  • Filed bug 1061624 about the new page editing window lacking a link to the Tagging Guide next to the tag edit area.
  • Followed up on some tweets reporting problems with MDN content; made sure the people working on that material knew about the issues at hand, and shared reassurances that we’re on the problem.
  • Tweaked the Toolbox page to mention where full-page screenshots are captured in both locations where the feature is described (instead of just the first place). Also added additional tags to the page.
  • Had a lot of discussions, both by video and by email and IRC, about planning and procedures for documentation work. A new effort is underway to come up with a standard process.
  • Submitted my proposal for changes to our documentation process to Ali, who will be collating this input from all the staff writers and producing a full proposal.
  • Checked the MDN Inbox: it was empty.
  • Experimented with existing WebRTC examples.
  • Moved some WebRTC content to its new home on MDN.
  • Filed bug 1062538, which suggests that there be a way to close the expanded title/TOC editor on MDN, once it’s been expanded.
  • Fixed the parent page links for the older WebAPI docs; somehow they all believed their parent to be in the Polish part of MDN.
  • Corrected grammar in the article about HTMLMediaElement, and updated the page’s tags.
  • Filed a bug about search behavior in the MDN header, but it was a duplicate.
  • Discovered a privacy issue bug and filed it. A fix is already forthcoming.
  • bz told me that previewing changes to docs in the API reference results in an internal service error; I did some experimenting, then filed bug 1062856 for him. I also pinged mdn-drivers since it seems potentially serious.
  • Discovered an extant, known bug in media streaming which prevents me from determining the dimensions of the video correctly from script. This is breaking many samples for WebRTC.
  • Went through all pages with KumaScript errors (there were only 10). All but one were fixed with a shift-refresh. The last one had a typo in a macro call and worked fine after I fixed the error.
  • Expanded on Florian’s Glossary entry about endianness by adding info on common platforms and processors for each endianness.
  • Filed bug 1063560 about search results claiming to be for English only when your search was for locale=*.
  • Discovered and filed bug 1063582 about MDN edits not showing up until you refresh after saving. This had been fixed at one point but has broken again very recently.
  • Started designing a service to run on Mozilla’s PaaS platform to host the server side of MDN samples. My plan is nifty and I’ll share more about it when I’m done putting rough drafts together.
  • Extended discussions with MDN dev team about various issues and bugs.
  • Helped with the debugging of a Firefox bug I filed earlier in the week.

Meetings attended this week

Tuesday

  • #mdndev planning meeting
  • 1:1 with Jean-Yves

Wednesday

  • 1:1 meeting with Ali

Thursday

  • Writers’ staff meeting
  • Compatibility Data monthly meeting

Friday

  • #mdndev weekly review meeting
  • Web API documentation meeting; only Jean-Yves, Florian, and I attended, but it was still a worthwhile conversation.

A good, productive week, even if it didn’t involve the stuff I expected to do. That may be my motto: I did a lot of things I didn’t expect to do.

http://www.bitstampede.com/2014/09/05/the-sheppy-report-september-5-2014/


Benoit Girard: B2G Performance Polish: Unlock Animation (Part 1)

Saturday, 06 September 2014, 00:23

I’ve decided to start a blog series documenting my workflow for performance investigation. Let me know if you find this useful and I’ll try to make this a regular thing. I’ll update this blog to track the progress made by myself, and anyone who wants to jump in and help.

I wanted to start with the B2G unlock animation. The animation is O.K. but not great, and it is core to the phone experience. I can notice a delay from the touch-up to the start of the animation. I can notice that the animation isn’t smooth. Where do we go from here? First we need to quantify how things stand.

Measuring The Starting Point

The first thing is to grab a profile. From the profile we can extract a lot of information (we will look at it again in future parts). I run the following command:

./profile.sh start -p b2g -t GeckoMain,Compositor
*unlock the screen, wait until the end of the animation*
./profile.sh capture
*open profile_captured.sym in http://people.mozilla.org/~bgirard/cleopatra/*

This results in the following profile. I recommend that you open it and follow along. The lock animation starts from t=7673 and runs until 8656. That’s about 1 second. We can also note that we don’t see any CPU idle time so we’re working really hard during the unlock animation. Things aren’t looking great from a first glance.

I said that there was a long delay at the start of the animation. We can see a large transaction at the start near t=7673. The first composition completes at t=7873. That means our unlock delay is about 200ms.

Now let’s look at how the frame rate is for this animation. In the profile open the ‘Frames’ tab. You should see this (minus my overlay):

Lockscreen Frame Uniformity

Alright so our starting point is:

Unlock delay: 200ms

Frame Uniformity: ~25 FPS, poor uniformity

Next step

In part 2 we’re going to discuss the ideal implementation for a lock screen. This is useful because we established a starting point in part 1, part 2 will establish a destination.

 


http://benoitgirard.wordpress.com/2014/09/05/b2g-performance-polish-unlock-animation-part-1/


Gregory Szorc: Reproducing Mozilla's Mercurial Server

Friday, 05 September 2014, 18:50

One of my first tasks in my new role as a Developer Productivity Engineer is to help make Mozilla's Mercurial server better. Many of the awesome things we have planned rely on features in newer versions of Mercurial. It's therefore important for us to upgrade our Mercurial server to a modern version (we are currently running 2.5.4) and to keep our Mercurial server upgraded as time passes.

There are a few reasons why we haven't historically upgraded our Mercurial server. First, as anyone who has maintained high-availability systems will tell you, there is the attitude of if it isn't broken, don't fix it. In other words, Mercurial 2.5.4 is working fine, so why mess with a good thing. This was all fine and dandy - until Mercurial started falling over in the last few weeks.

But the blocker towards upgrading that I want to talk about today is systems verification. There has been extreme caution around upgrading Mercurial at Mozilla because it is a critical piece of Mozilla's infrastructure and if the upgrade were to not go well, the outage would be disastrous for developer productivity and could even jeopardize an emergency Firefox release.

As much as I'd like to say that a modern version of Mercurial on the server would be a drop-in replacement (Mercurial has a great commitment to backwards compatibility and has loose coupling between clients and servers such that upgrading servers should not impact clients), there is always a risk that something will change. And that risk is compounded by the amount of custom code we have running on our server.

The way you protect against unexpected changes is testing. In the ideal world, you have a robust test suite that you run against a staging instance of a service to validate that any changes have no impact. In the absence of testing, you are left with fear, uncertainty, and doubt. FUD is an especially horrible philosophy when it comes to managing servers.

Unfortunately, we don't really have a great testing infrastructure for Mozilla's Mercurial server. And I want to change that.

Reproducing the Server Environment

When writing tests, it is important for the thing being tested to be as similar as possible to the real thing. This is why so many people have an aversion to mocking: every time you alter the test environment, you run the risk that those differences from reality will mask changes seen in the real environment.

So, it makes sense that a good first goal for creating a test suite against our Mercurial server should be to reproduce the production server and environment as closely as possible.

I'm currently working on a Vagrant environment that attempts to reproduce the official environment as closely as possible. It starts one virtual machine for the SSH/master server. It starts a separate virtual machine for the hgweb/slave servers. The virtual machines are booting CentOS. This is different than production, where we run RHEL. But they are similar enough (and can share the same packages) that the differences shouldn't matter too much, at least for now.

Using Puppet

In production, Mozilla is using Puppet to manage the Mercurial servers. Unfortunately, the actual Puppet configs that Mozilla is running are behind a firewall, mainly for security reasons. This is potentially a huge setback for my reproducibility effort, as I'd like to have my virtual machines use the same exact Puppet configs as what's used in production so the environments match as closely as possible. This would also save me a lot of work from having to reinvent the wheel.

Fortunately, Ben Kero has extracted the Mercurial-relevant Puppet config files into a standalone repository. Apparently that repository gets rolled into the production Puppet configs periodically. So, my virtual machines and production can share the same Mercurial Puppet files. Nice!

It wasn't long after starting to use the standalone Puppet configs that I realized this would be a rabbit hole. This first manifested itself in the standalone Puppet code referencing things that exist only in the hidden Mozilla Puppet files. So the liberation was only partially successful. Sad panda.

So, I'm now in the process of creating a fake Mozilla Puppet environment that mimics the base Mozilla environment (from the closed repo) and am modifying the shared Puppet Mercurial code to work with both versions. This is a royal pain, but it needs to be done if we want to reproduce production and maintain peace of mind that test results reflect reality.

Because reproducing runtime environments is important for reproducing and solving bugs and for testing, I call on the maintainers of Mozilla's closed Puppet repository to liberate it from behind its firewall. I'd like to see a public Puppet configuration tree available for all to use so that anyone anywhere can reproduce the state of a server or service operated by Mozilla to within reasonable approximation. Had this already been done, it would have saved me hours of work. As it stands, I'm reverse engineering systems and trying to cobble together understanding of how the Mozilla Puppet configs work and what parts of them can safely be ignored to reproduce an approximate testing environment.

Along that vein, I finally got access to Mozilla's internal Puppet repository. This took a few meetings, and apparently a lot of backroom chatter was generated - "developers don't normally get access, oh my!" All I wanted was to see how systems are configured so I can help improve them. Instead, getting access felt like pulling teeth. This feels like a major roadblock towards productivity, reproducibility, and testing.

Facebook gives its developers access to most production machines and trusts them to not be stupid. I know we (Mozilla) like to hold ourselves to a high standard of security and privacy. But not giving developers access to the configurations for the systems their code runs on feels like a very silly policy. I hope Mozilla invests in opening up this important code and data, if not to the world, at least to its trusted employees.

Anyway, hopefully I'll soon have a Vagrant environment that allows people to build a standalone instance of Mozilla's Mercurial server. And once that's in place, I can start writing tests that basic services and workflows (including repository syncing) work as expected. Stay tuned.
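
As a sketch of the kind of repository-syncing check described here, the following assumes only a local hg binary on the PATH. It is written as a standalone TypeScript/Node script purely for illustration; the port, paths and the eventual shape of the real test harness are assumptions.

// Sketch: serve a Mercurial repository over HTTP, clone it, push a change
// back, and verify the change arrived. Port and paths are arbitrary.
import { execSync } from 'node:child_process';
import { mkdtempSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

const base = mkdtempSync(join(tmpdir(), 'hg-test-'));
const origin = join(base, 'origin');
const clone = join(base, 'clone');

execSync(`hg init ${origin}`);
// Daemonized test server; allow pushes over plain HTTP for this throwaway repo.
execSync(
  `hg serve -R ${origin} -d -p 8123 --pid-file ${join(base, 'hg.pid')} ` +
  `--config 'web.allow_push=*' --config web.push_ssl=False`
);

execSync(`hg clone http://localhost:8123/ ${clone}`);
writeFileSync(join(clone, 'file.txt'), 'test\n');
execSync(`hg -R ${clone} add file.txt`);
execSync(`hg -R ${clone} commit -m "sync test" -u test`);
execSync(`hg -R ${clone} push`);

// The pushed changeset should now be visible in the served repository.
const desc = execSync(`hg -R ${origin} log -l 1 --template "{desc}"`).toString();
console.log(desc === 'sync test' ? 'repository syncing: OK' : 'repository syncing: FAILED');

A real suite would also wait for the server to come up, shut the daemon down afterwards, and run against the Puppet-provisioned Vagrant machines instead of a bare hg serve, but the basic shape of the check stays the same.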

http://gregoryszorc.com/blog/2014/09/05/reproducing-mozilla's-mercurial-server


Christian Heilmann: Coldfrontconf is one to watch

Friday, 05 September 2014, 15:40

I’ve said it before and I stick by it: conferences stand and fall with the enthusiasm of the organisers. And it is a joy for someone like me who does spend a lot of time at conferences to see a new one be a massive success from the get-go.

Yesterday was the Coldfront conference in Copenhagen, Denmark. A one-day conference organised by Kenneth Auchenberg, @Danielovich (and of course a well-chosen team of people). It was very rewarding to work with him to give the closing keynote of the inaugural edition of this event.

The slides of my closing keynote are available on Slideshare.

And, amazingly enough, the video is out, too:


Chris at Coldfrontconf
(Notice the fan behind me giving me that wind-swept look that so fitted my physical state going directly from the plane to the venue)

I am sad that because of other commitments I had to miss the first talks, but here are my main impressions of the event:

  • I love the pragmatism of it – one track, good break times, a very simple and straight-forward web site and no push to “download the app of this event”.
  • The location – a program cinema – had great seating, working WiFi (with a few hiccups, but the hotel next door also had available WiFi that worked in the first rows) and very adequate facilities.
  • The projector and audio set up was great and the switch from speaker to speaker worked flawlessly.
  • All talks were streamed on the web
  • Even a last minute speaker cancellation didn’t quite disturb the event (thanks for the reminder Steen H. Rasmussen)
  • Instead of keeping people perched up inside, the breaks had coffee available for self-service and the food and branded ice cream was served outside the building in the street. This was also the spot for the beers and cupcakes after the event and the final venue was just down the road.
  • The after party was in a beer place that has over 40 beers on tap and the open bar lasted well till after midnight. Nobody got blind drunk or misbehaved – it actually felt more like a beer tasting experience than a drink-up. There was a lot of seating and no loud music to discourage or hinder communication at the after party.
  • All the videos of the talks were already available on the day or the day after. I managed to see myself whilst my head was still hurting from the party (and my lack of sleep) the night before.
  • Elisabeth Irgens did a great job doing live sketch notes of each talk and uploading them immediately to Twitter.
  • The audience was very well behaved and it was a very inviting and inspiring environment to share information in. Good mix of people with various backgrounds.
  • Whilst there was a bit of sponsorship being shown on the big screen and there were sponsor booths in the foyer, all of it was very low-key and appeared utterly in context. No sales weasels or booth babes there. The sponsors sent their geeks to talk to geeks.
  • I felt very well looked after – the organisers paid my flights and hotel, and the communication with the speakers as to where to be and when was only a handful of emails. Things just fell into place and there was no hesitance in making sure everybody got there in time.
  • It is very worthwhile to watch the recordings of the talks. All of them were very high quality. Personally, I was most impressed with Guillermo Rauch’s “How to build the modern, optimistic and reactive user interface we all want”.

All in all, this was a conference that was as pragmatic and spot-on as Kenneth is when you talk to him. It felt very good and I was very much reminded of the first Fronteers event. This is one to watch, let’s see what happens next.

http://christianheilmann.com/2014/09/05/coldfrontconf-is-one-to-watch/


Gregory Szorc: New Job Role

Friday, 05 September 2014, 15:30

As of today, I have a new role and title at Mozilla: Developer Productivity Engineer. I'll be reporting to Laura Thomson as a member of the Developer Services team.

I have an immediate goal to make our version control work better. This includes making Try scale and helping out with the deployment of ReviewBoard. After that, I'm not entirely sure. But Autoland and Firefox build system improvements have been discussed.

I'm really excited to be in this new role. If someone were to give me a clean slate and tell me to design my own job role, I think I'd answer with something very similar to the role I am now in. I am passionate about tools and enabling people to become more productive. I have little doubt I'll thrive in this new role.

http://gregoryszorc.com/blog/2014/09/05/new-job-role


Christian Heilmann: Firefox OS at MobileTechCon Berlin 2014

Friday, 05 September 2014, 13:42

Two days ago I was in Berlin at MobileTechCon, where, in addition to the opening keynote on the second day, I also gave a talk about the current state of Firefox OS.

Since the audience wanted the talk in German, I switched at short notice and delivered it in something resembling German.

Here are the slides and the screencasts. The first covers just the talk; the second also includes the questions and answers, with a few examples of how to use the developer tools in Firefox, what together.js is and what it is good for, and a few other treats of the open web.

All of this is very much unedited and was changed more or less on the fly, so there may be the odd naughty word in there. The slides are available on Slideshare.

The half-hour talk can be seen here as a screencast:

If you want to hear the whole talk with questions and answers, the full hour is available here as a screencast.

http://christianheilmann.com/2014/09/05/firefox-os-auf-der-mobiletechcon-berlin-2014/


