Created: 19.06.2007

Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Source information: http://planet.mozilla.org/.
This journal is generated from the open RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that feed is updated. It may not match the content of the original page. The syndication was created automatically at the request of readers of this RSS feed.
For any questions about this service, use the contact information page.


Air Mozilla: Reps weekly

Thursday, 29 January 2015, 19:00

Gervase Markham: Samuel David Markham

Thursday, 29 January 2015, 16:40

I am overjoyed to announce the birth of our third son, Samuel David Markham, at 9.01am on the morning of 28th January 2015, weighing 8lb 0oz. Mother, father, baby and older brothers are all well :-)

He is called Samuel after:

  • The prophet and leader Samuel, who was called into God’s service at an early age, as recorded in the book of 1 Samuel;
  • Samuel Rutherford (1600 – 1661), a Scottish minister and representative at the Westminster Assembly, whose book Lex, Rex contains arguments foundational to a Christian understanding of good government;
  • Samuel Davies (1723 – 1761), American non-conformist preacher, evangelist and hymn writer, who showed we are all equal in God’s sight by welcoming black and white, slave and free to the same Communion table;
  • Samuel Crowther (1809 – 1891), the first black Anglican bishop in Africa, who persevered against unjust opposition and translated the Bible into Yoruba.

He is called David primarily after the King David in the Bible, who was “a man after God’s own heart” (a fact recorded in the book of 1 Samuel, 13:14).

http://feedproxy.google.com/~r/HackingForChrist/~3/7WieoLykcFQ/


Doug Belshaw: Is grunt 'n' click getting in the way of web literacy?

Thursday, 29 January 2015, 16:40

I’ve just been listening to a fascinating episode of the 99% Invisible podcast. It’s Episode 149: Of Mice and Men, and its subject is my much-more-talented namesake Doug Engelbart.

Cat and cursor

The episode covered The Mother of All Demos, after which Steve Jobs took some of Engelbart’s ideas and ran with them. However, instead of the three-buttoned mouse and ‘keyset’ originally envisioned, we got the single-button Apple mouse. The MacBook Pro I’m typing on retains this legacy: if I want to ‘right-click’ I have to hold down the Option key.

Christina Engelbart explained that her father believed simplicity only gets you so far. We may be in an age where toddlers can intuitively use iPads and smartphones, but a relentless focus on this has led to a ceiling on the average user’s technical skills. The analogy used was the difference between a tricycle and a bicycle. Anyone can immediately get on a tricycle and use it. That’s great. But sometimes, and especially when you’re going up a hill, you need gears and the other features that bicycles afford.

Essentially, what we’ve got now are faster horses: pimped-out tricycles that are shiny terminals to web-based services. Cory Doctorow calls this the civil war over general purpose computing:

We don’t know how to make a computer that can run all the programs we can compile except for whichever one pisses off a regulator, or disrupts a business model, or abets a criminal.

The closest approximation we have for such a device is a computer with spyware on it: a computer that, if you do the wrong thing, can intercede and say, “I can’t let you do that, Dave.”

This is paternalism at its worst. The experience of a whole generation of young people is digital surveillance baked into the tools they’re encouraged to use to learn and be 'social’. Meanwhile, their physical boundaries are more constrained than ever before 'for their own safety’.

Extremely simple user interfaces are great, but not when they’re the whole story. I don’t want a device that’s hard to use, but nor do I want one so locked down that I have to bend my hopes, wishes and dreams to the will of a software company with shareholders. To extend the tricycle analogy: we grow up and stop using Fisher-Price toys at some point. We learn to use the real deal. This may take time and effort, but the payoff is empowerment, human flourishing, and innovation.

I’ve had this experience learning Git recently. Ostensibly it’s a version control system for software, but the whole experience lets you think in different ways about what’s possible in the (digital) world. While not quite grunt ’n’ click, GitHub makes it easier for those new to the whole process. It’s got a relatively low floor but a pretty high ceiling. That, I think, is what we should be aiming for.


Questions? Comments? Tweet me: @dajbelshaw or email me: doug@mozillafoundation.org

http://literaci.es/grunt-n-click


Christian Heilmann: On towards my next challenge…

Thursday, 29 January 2015, 13:36

One of the things I like where I live in London is my doctor. He is this crazy German who lived for 30 years in Australia before coming to England. During my checkup he speaks a mixture of German, Yiddish, Latin and English and has a dark sense of humour.

He also does one thing that I have massive respect for: he never just treats my symptoms with quick-acting remedies. Instead he digs and analyses until he finds the cause and treats that instead. Often this means I suffer longer from ill effects, but it also means that I leave with the knowledge of what caused the problem. I don’t just get products pushed in my face to make things easier, with promises of a quick recovery. Instead, I get pulled in to become part of the healing process and to own my own health in the long run. This is a bad business decision for him, as giving me more short-acting medication would mean I come back more often. What it means to me, though, is that I have massive respect for him, as he has principles.

Professor X to Magneto: If you know you can deflect it, you are not really challenging yourself

As you know if you read this blog, I left Mozilla and I am looking for a new job. As you can guess, I didn’t have to look long and hard to find one. We are superbly, almost perversely lucky as people in IT right now. We can get jobs easily compared to other experts and markets.

Hard to replace: Mozilla

Leaving Mozilla was ridiculously hard for me. I came to Mozilla to stay. I loved meeting people during my interviews who’ve been there for more than eight years – an eternity in our market. I loved the products, I am still madly in love with the manifesto and the work that is going on. A lot changed in the company though and the people I came for left one by one and got replaced by people with other ideals and ideas. This can be a good thing. Maybe this is the change the company needs.
I didn’t want to give up on these ideals. I didn’t want to take a different job where I have to promote a product at all costs. I didn’t want to be in a place that does a lot of research and builds impressive-looking tools and solutions, but discards them when they aren’t an immediate success. I wanted to find a place where I can serve the web and make a difference in the lives of those who build things for the web. In day-to-day products. Not in a monolithic product that tries to be the web.

Boredom on the bleeding edge

My presentations in the last year had a recurring theme: we cannot improve the web if we don’t make it easy and reliable for everybody building things on it to take part in our innovations. Only a tiny fraction of the people working on the web can use alpha or beta solutions. Even fewer can rely on setting switches in browsers or use prefixed functionality that might change on a whim. Many of us have to deliver products that are much more strict. Products that have to adhere to nonsensical legal requirements.

These people have a busy job, and they want to make it work. Often they have to cut corners. In many other cases they rely on massive libraries and frameworks that promise magical solutions. This leads to products that are in a working state, and not in an enjoyable state.

To make the web a continuous success, we need to clean it up. No innovation and no framework will replace the web; its very nature makes that impossible. What we need to do now is bring our bleeding-edge knowledge of what well-performing, maintainable code means down the line to those who do not attend every conference or spend days on Hacker News and Stack Overflow.

And the winner is…

This is why I answered a job offer I got and I will start working on the 2nd of February for a new company. The company is *drumroll*:

Microsoft. Yes, the bane of my existence as a standards protagonist during the dark days of the first browser wars. Just like my doctor, I am going to the source of a lot of our annoyances and will do my best to change the causes instead of fighting the symptoms.

My new title is “Senior Program Manager” in the Developer experience and evangelism org of Microsoft. My focus is developer outreach. I will do the same things I did at Mozilla: JavaScript, Open Web Technologies and cross-browser support.

Professor X to Magneto: you know, I've always found the real focus to be in between rage and serenity

Frankly, I am tired of people using “But what about Internet Explorer” as an excuse: an excuse not to embrace or even look into newer, sensible and necessary technology. At the same time I am bored of people praising experimental technology in “modern browsers” as a must. Best practices have to prove themselves in real products with real users. What’s great for building Facebook or GMail doesn’t automatically apply to every product on the web. If we want a standards-based web to survive and be the go-to solution for new developers, we need to change. We can’t only invent; we also need to clean up and move outdated technology forward. Loving and supporting the web should be unconditional. Warts and all.

A larger audience who needs change…

I’ve been asking for more outreach from the web people “in the know” to enterprise users. I talked about a lack of compassion for people who have to build “boring” web solutions. Now I am taking a step and will do my best to try to crack that problem. I want to have a beautiful web in all products I use, not only new ones. I’ve used enough terrible ticketing systems and airline web sites. It is time to help brush up the web people have to use rather than want to use.

This is one thing many get wrong. People don’t use Chrome over Firefox or other browsers because of technology features; these are more or less on par with one another. They choose Chrome because it gives them a better experience with Google’s services. Browsers in 2015 are not only about what they can do for developers; what matters more is how smooth they are for end users and how well they integrate with the OS.

I’ve been annoyed for quite a while about the Mac/Chrome-centric focus of the web development community. Yes, I use them both, and I am as much to blame as the next person. The fact is, though, that there are millions of Windows users out there, and a lot of developers use Windows too, because that’s what their company provides and what their clients use. It is arrogant and elitist to say we change the web and make the lives of every developer better when our tools are only available for a certain platform. That’s not the world I see when I travel outside London or Silicon Valley.

We’re in a time of change as Alex Wilhelm put it on Twitter:

Microsoft is kinda cool again, Apple is boring, Google is going after Yahoo on Firefox, and calculator watches are back. Wtf is going on.

What made me choose

In addition to pushing myself to fix one of the adoption problems of new web technologies from within, there were a few more factors that made this decision the right one for me:

  • The people – my manager is my good friend and former Ajaxian co-writer Rey Bango. You might remember us from the FoxIE HTML5 training video series. My first colleague in this team is also someone I worked with on several open projects and have massive respect for.
  • The flexibility – I am a remote worker. I work from London, Stockholm, or wherever I need to be. This is becoming rare; many offers I got started with “you need to move to Silicon Valley”. Nope, I work on the web. We all could. I have my own travel budget, I propose where I am going to present, and I will be in charge of defining, delivering and measuring the developer outreach program.
  • The respect – every person who interviewed me was on time, prepared, and had real, job-related problems for me to solve. There was no “let me impress you with my knowledge and ask you a thing you don’t know”. There was no “Oh, I forgot I needed to interview you right now”, and there was no confusion about who I was speaking to or about what. I loved this, and in comparison to other offers it was refreshing. We are all open about our actions and achievements as developers; if you interview someone you have no clue about, then you didn’t do your research as an interviewer. I interviewed them as much as they interviewed me, and the answers I got were enlightening. There were no exaggerated promises, and they scrutinised everything I said and discussed it. When I made a mistake, I got a question about it instead of being allowed to show off or talk myself into a corner.
  • The organisation – there is no question about how much I earn, what the career prospects are, how my travel and expenses will work, or what benefits I get. These things are products in use, not a wiki page where you could contact people if you want to know more.
  • The challenge – the Windows 10 announcement was interesting. Microsoft is jumping over its own shadow with the recent changes, and I am intrigued about making this work.

I know there might be a lot of questions, so feel free to contact me if you have any concerns, or if you want to congratulate me.

FAQ:

  • Does this mean you will not be participating in open communication channels and open source any longer?

    On the contrary: I am hired to help Microsoft understand the open source world and play a bigger part in it. I will have to keep some secrets until a certain time; that will be a recurring thing. But that is much the same for anyone working at Google, Samsung, Apple or any other company. Even Mozilla has secrets now; this is just how some markets work. I will keep writing for other channels, I will write MDN docs, be active on GitHub, and applaud in public when other people do great work. I will also be available to promote your work. All I publish will be open source or Creative Commons. This blog will not be a Microsoft blog. This is still me and will always remain me.

  • Will you now move to Windows with all your computing needs?

    No. Like every other person who defends open technology and freedom, I will keep using my Macintosh. (There might be sarcasm in this sentence, and a bit of guilt.) I will also keep my Android and Firefox OS phones. I will, of course, get Windows-based machines, as I am working on making those better for the web.

  • Will you now help me fix all my Internet Explorer problems when I complain loud enough on Twitter?

    No.

  • What does this mean to your speaking engagements and other public work?

    Not much. I keep picking where I think my work is needed the most and my terms and conditions stay the same. I will keep a strict separation of sponsoring events and presenting there (no pay to play). I am not sure how sponsorship requests work, but I will find out soon and forward requests to the right people.

  • What about your evangelism coaching work like the Mozilla Evangelism Reps and various other groups on Facebook and LinkedIn?

    I will keep my work up there. Except that the Evangelism Reps should have a Mozilla owner. I am not sure what should happen to this group given the changes in the company structure. I’ve tried for years to keep this independent of me and running smoothly without my guidance and interference. Let’s hope it works out.

  • Will you tell us now to switch to Internet Explorer / Spartan?

    No. I will keep telling you to support standards and write code that is independent of browser and environment. This is not about peddling a product. This is about helping Microsoft to do the right thing and giving them a voice that has a good track record in that regard.

  • Can we now get endless free Windows hardware from you to test on?

    No, most likely not. Again, sensible requests I can probably forward to the right people.

So that’s that. Now let’s fix the web from within and move the bulk of developers forward instead of preaching from an ivory tower of “here is how things should be”.

http://christianheilmann.com/2015/01/29/on-towards-my-next-challenge-2/


Dave Townsend: hgchanges is down for now

Thursday, 29 January 2015, 08:23

My handy tool for tracking changes to directories in the Mozilla Mercurial repositories is going to be broken for a little while. Unfortunately a particular changeset seems to be breaking things in ways I don’t have time to fix right now. Specifically, trying to download the raw patch for the changeset is causing hgweb to time out. Short of finding time to debug and fix the problem, my only solution is to wait until that patch is old enough that the tool no longer attempts to index it. That could take a week or so.

Obviously I’ll happily accept patches to fix this problem sooner.

http://www.oxymoronical.com/blog/2015/01/hgchanges-is-down-for-now


Karl Dubost: Hired in Taipei? Cool Intros.

Thursday, 29 January 2015, 07:53

Mozilla has Weekly Updates, a public meeting giving news about projects and events. It's also the time for introducing newly hired employees across the organization. The meeting is also available as a video.

The Taipei office has the same issue as those of us working from Japan: the meeting is scheduled at 11am San Francisco time, which means 4am in Japan and 3am in Taiwan. Luckily, there is a recording. So when the Mozilla Taipei office needs to introduce a new employee, instead of being awake at 3am, they record a video of their new hires. Go to 40m55s into this week's update. For example, Ceci Chang, a Firefox OS Visual Designer, explains how to grow more mint.

Ceci Chang in front of Mint

Introductions of new hired employees to the rest of our distributed organization do not have to be boring. Taipei office is making it a very « feel good » thing. Thanks.

Otsukare.

http://www.otsukare.info/2015/01/29/taipei-new-employees


Mozilla Fundraising: Year-End Campaign in Review: What We Learned

Thursday, 29 January 2015, 04:03
Ending a campaign of this magnitude is like stepping off a roller coaster. I’m mildly disoriented, exhausted, but also ecstatic that we far exceeded the goals we set. In retrospect, we learned a lot. What we do next with what … Continue reading

https://fundraising.mozilla.org/year-end-campaign-in-review-what-we-learned/


Air Mozilla: Privacy Lab - a meetup for privacy minded people in San Francisco

Thursday, 29 January 2015, 02:30

Privacy Lab - a meetup for privacy minded people in San Francisco Brings together privacy professionals and others interested in privacy at for-profits, non-profits, and NGOs in an effort to contribute to the state of the ecosystem...

https://air.mozilla.org/privacy-lab-a-meetup-for-privacy-minded-people-in-san-francisco/


Ben Kero: Working around flaky internet connections

Thursday, 29 January 2015, 01:36

In many parts of the world a reliable internet connection is hard to come by. Ethernet is essentially non-existent and WiFi is king. As any digital nomad can testify, this is ‘good enough’ for doing productive work.

Unfortunately not all WiFi connections work perfectly all the time. They’re fraught with unexpected problems including dropping out entirely, abruptly killing connections, and running into connection limits.

Thankfully with a little knowledge it is possible to regain productivity that would otherwise be lost to a flaky internet connection. These techniques are applicable to coffee shops, hotels, and other places with semi-public WiFi.

Always have a backup connection

Depending on a WiFi connection as your sole source of connectivity is a losing proposition. If all you're doing is optional tasks it can work, but critical tasks demand a backup source should the primary fail.

This usually takes the shape of a cellular data connection. I USB or WiFi tether my laptop to my cell phone. This is straightforward in your home country, where you have a reliable data connection already. If working from another country it is advisable to get a local prepaid SIM card with data plan. These are usually inexpensive and never require a contract. Almost all Android devices support this behavior already.

If you’re too lazy to get a local SIM card, or are not in a country long enough to benefit from one (I usually use 1 full week as the cutoff), T-Mobile US’s post-paid plans offer roaming data in other countries. This is only EDGE (2.5G) connectivity, but is still entirely usable if you’re careful and patient with the connection.

Reducing background data

Some of the major applications that you're using do updates in the background, including Firefox and Chrome. They can detect that your computer is connected to the internet and will attempt to update at any time. Obviously if you're using a connection with limited bandwidth, this can ruin the experience for everybody (including yourself).

You can disable this feature in Firefox by navigating to Edit -> Preferences -> Advanced -> Update, and switching Firefox Updates to Never check for updates.
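The same setting can also be applied from a user.js file in your Firefox profile directory, which is handy if you switch profiles for travel. A minimal sketch; these pref names are from Firefox of this era, so verify them in about:config on your version:

```
// user.js — sketch; verify pref names in about:config for your Firefox version
user_pref("app.update.enabled", false); // never check for updates
user_pref("app.update.auto", false);    // never download them automatically
```

Remember to turn updates back on when you are back on a fast connection.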

Your operating system might do this as well, so it is worth investigating so you can disable it.

Mosh: The Mobile Shell

If you’re a command-line junkie or a keyboard cowboy, you’ll usually spend a lot of time SSHing into other servers. Mosh is an application like SSH that is specifically designed for unreliable connections. It offers conveniences like resuming after sleep even if your connection changes, and local echo so that you can see and revise your input even when the other side is unresponsive. There are some known security concerns with using Mosh, so I’ll leave it to the reader to decide whether they feel safe using it.

It should be noted that with proper configuration, OpenSSH can also gain some of this resiliency.
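For example, keepalive settings in ~/.ssh/config let OpenSSH notice a dead connection within a minute instead of hanging indefinitely. A sketch with illustrative values:

```
# ~/.ssh/config — illustrative keepalive values
Host *
    ServerAliveInterval 15   # probe the server every 15 seconds
    ServerAliveCountMax 4    # give up after ~1 minute of silence
    TCPKeepAlive yes
```

Combined with a terminal multiplexer such as screen or tmux on the remote side, this gets you much of Mosh's resume-after-drop behaviour.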

Tunneling

Often the small wireless routers you’re connecting to are not configured to handle the load of several people. One symptom of this is the TCP connection limit. The result of the router hitting this limit is that you will no longer be able to establish new TCP connections until one is closed. The way around this is to use a tunnel.

The simplest method to do this is a SOCKS proxy. A SOCKS proxy is a small piece of software that runs on your laptop. Its purpose is to tunnel new connections through an existing connection. The way I use it is by establishing a connection to my colocated server in Portland, OR, then tunneling all my connections through that. The server is easy to set up.

The simplest way to do this is with SSH. To use it, open a terminal and type the following command (replacing my host name with your own):

$ ssh -v -N -D1080 bke.ro

This will open a tunnel between your laptop and the remote host. You're not done yet, though: the next part is telling your software to use the tunnel. In Firefox this can be done in Edit -> Preferences -> Advanced -> Network -> Connection Settings -> Manual Proxy Configuration -> SOCKS Host. You'll also want to check “Remote DNS” below. You can test that this is working by visiting a web site such as whatismyip.com.

Command-line applications can use a SOCKS proxy via a program called tsocks. Tsocks transparently tunnels the connections of your command-line applications through your proxy. It is invoked like this:

$ tsocks curl http://bke.ro/
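tsocks reads its proxy settings from a config file (usually /etc/tsocks.conf). A minimal sketch pointing at the SSH tunnel from the earlier command:

```
# /etc/tsocks.conf — minimal sketch
server = 127.0.0.1     # the ssh -D tunnel listens locally
server_port = 1080
server_type = 5        # SOCKS version 5
```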

Some other methods of tunneling that have been used successfully include real VPN software such as OpenVPN. There is an entire market of OpenVPN providers available that will give you access to endpoints in many countries. You can also just run this yourself.

An alternative to that is sshuttle. It uses iptables on Linux (and the built-in firewall on OS X) to transparently tunnel connections over an SSH session; all system connections will transparently be routed through it. One nice feature of this approach is that no special software needs to be installed on the remote side, which makes it easy to use with several different hosts.

Local caching

Some content can be cached and reused without having to hit the internet. This isn't perfect, but reducing the amount of network traffic should result in less burden on the network and faster page-load times. There are a couple of pieces of software that can help achieve this.

Unbound is a local DNS caching daemon. It runs on your computer and listens for applications making DNS requests; it then asks the internet for the answer and caches it. This results in fewer DNS queries hitting the internet, which reduces network burden and should load pages faster. I've been subjecting Unbound to constant daily use for six months and have not attributed a single problem to it. Available in a distro near you.
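A caching-only Unbound setup is only a few lines; this is a sketch (see unbound.conf(5) for the full option list), with /etc/resolv.conf then pointed at 127.0.0.1:

```
# /etc/unbound/unbound.conf — caching-only sketch
server:
    interface: 127.0.0.1
    access-control: 127.0.0.0/8 allow
    cache-max-ttl: 86400    # keep answers for up to a day
```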

Polipo is a local caching web proxy. This is a small daemon that runs on your computer and transparently caches web content. It can speed up page-load times and reduce the amount of network traffic. It has a knob to tune the cache size, and you can empty the cache whenever you want. Again, this should be available in any major package manager.

Ad blocking software

Privoxy is a web proxy that filters out unwanted content, such as tracking cookies, advertisements, social-media iframes, and other “obnoxious internet junk”. It can be used in conjunction with polipo, and there is even a mention in the docs about how to layer them.

The SomeoneWhoCares hosts file is an alternative /etc/hosts file that promises “to make the internet not suck (as much)”. It replaces your /etc/hosts file, which is consulted before DNS queries are made. This particular /etc/hosts file simply resolves many bad domains to ‘127.0.0.1’ instead of their real addresses, blocking many joke sites (goatse, etc.) as well as ad servers. I've used it for a long time and have never encountered a problem associated with it.
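The mechanism behind it is nothing more than ordinary /etc/hosts entries; the blocked domains below are illustrative placeholders, not entries from the actual file:

```
# Appended to /etc/hosts: resolve unwanted domains locally
127.0.0.1  ads.example.com
127.0.0.1  tracker.example.net
```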

AdBlock Plus is a Firefox extension you might be familiar with. It is a popular extension that removes ads from web pages, which should save you bandwidth, page-load time, and battery life. AdBlock Plus is a heavy memory user, though, so if you're on a device with limited memory (< 4GB) it might be worth considering an alternative ad-blocking extension.

Second browser instance (that doesn’t use any of the aforementioned)

As great as these pieces are, sometimes you’ll encounter a problem. At that point it could be advantageous to have a separate browser instance to access the internet “unadulterated”. This will let you know if the problem is on your side, the remote host, or your connection.

I hope that using these techniques will help you have a better experience while using questionable connections. It’s an ongoing struggle, but the state of connectivity is getting better. Hopefully one day these measures will be unnecessary.

Please leave a comment if you learned something from reading this, or notice anything I missed.

http://bke.ro/working-around-flaky-internet-connections/


Benoit Girard: Testing a JS WebApp

Thursday, 29 January 2015, 00:13

Test Requirements

I’ve been putting off testing my cleopatra project for a while now https://github.com/bgirard/cleopatra because I wanted to take the time to find a solution that would satisfy the following:

  1. The tests can be executed by visiting a particular URL.
  2. The tests can be executed headless using a script.
  3. No server side component or proxy is required.
  4. Stretch goal: Continuous integration tests.

After a bit of research I came up with a solution that addressed my requirements. I'm sharing it here in case it helps others.

First I found that the easiest way to achieve this is to pick a test framework to cover 1) and a way to run a headless browser to cover 2).

Picking a test framework

For the Test Framework I picked QUnit. I didn’t have any strong requirements there so you may want to review your options if you do. With QUnit I load my page in an iframe and inspect the resulting document after performing operations. Here’s an example:

QUnit.test("Select Filter", function(assert) {
  loadCleopatra({
    query: "?report=4c013822c9b91ffdebfbe6b9ef300adec6d5a99f&select=200,400",
    assert: assert,
    testFunc: function(cleopatraObj) {
    },
    profileLoadFunc: function(cleopatraObj) {
    },
    updatedFiltersFunc: function(cleopatraObj) {
      var samples = shownSamples(cleopatraObj);

      // Sample count for one of the two threads in the profile are both 150
      assert.ok(samples === 150, "Loaded profile");
    }
  });
});

Here I just load a profile, and once the document fires an updateFilters event I check that the right number of samples are selected.

You can run the latest cleopatra test here: http://people.mozilla.org/~bgirard/cleopatra/test.html

Picking a browser (test) driver

Now that we have a page that can run our test suite, we just need a way to automate the execution. It turns out that PhantomJS (for WebKit) and SlimerJS (for Gecko) provide exactly this. With a small driver script we can load our test.html page and set the process return code based on the result of our test framework, QUnit in this case.
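The driver is only a few lines of the PhantomJS/SlimerJS scripting API. Below is a rough sketch, not the actual run_qunit.js from the repository; it assumes test.html stores QUnit's results on a `window.qunitDone` global, and it must be run under phantomjs or slimerjs, not plain Node:

```javascript
// run_qunit.js — illustrative PhantomJS/SlimerJS driver sketch
var system = require('system');
var page = require('webpage').create();

page.open(system.args[1], function (status) {
  if (status !== 'success') {
    console.log('Failed to load ' + system.args[1]);
    phantom.exit(1);
  }
  // Poll until QUnit reports completion, then exit 0 on success, 1 on failure.
  var interval = setInterval(function () {
    var failed = page.evaluate(function () {
      return window.qunitDone ? window.qunitDone.failed : null;
    });
    if (failed !== null) {
      clearInterval(interval);
      phantom.exit(failed === 0 ? 0 : 1);
    }
  }, 250);
});
```

For this to work, test.html would register something like `QUnit.done(function (results) { window.qunitDone = results; });` so the driver has a global to poll.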

Stretch goal: Continuous integration

If you hook up the browser driver to run via a simple test.sh script, adding continuous integration is simple. Thanks to Travis-CI and GitHub it's easy to set up your test script to run per check-in and send notifications.

All you need is to configure Travis-CI to look at your repo and to check in an appropriate .travis.yml config file. Your .travis.yml should configure the environment. PhantomJS is pre-installed and should just work. SlimerJS requires a Firefox binary and a virtual display, so it just needs a few more configuration lines. Here's the final configuration:

env:
  - SLIMERJSLAUNCHER=$(which firefox) DISPLAY=:99.0 PATH=$TRAVIS_BUILD_DIR/slimerjs:$PATH
addons:
  firefox: "33.1"
before_script:
  - "sh -e /etc/init.d/xvfb start"
  - "echo 'Installing Slimer'"
  - "wget http://download.slimerjs.org/releases/0.9.4/slimerjs-0.9.4.zip"
  - "unzip slimerjs-0.9.4.zip"
  - "mv slimerjs-0.9.4 ./slimerjs"

notifications:
  irc:
    channels:
      - "irc.mozilla.org#perf"
    template:
     - "BenWa: %{repository} (%{commit}) : %{message} %{build_url}"
    on_success: change
    on_failure: change

script: phantomjs js/tests/run_qunit.js test.html && ./slimerjs/slimerjs js/tests/run_qunit.js $PWD/test.html

Happy testing!


https://benoitgirard.wordpress.com/2015/01/28/testing-a-js-webapp/


Air Mozilla: Product Coordination Meeting

Wednesday, 28 January 2015, 22:00

Product Coordination Meeting Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

https://air.mozilla.org/product-coordination-meeting-20150128/


Ben Hearsum: Signing Software at Scale

Wednesday, January 28, 2015, 19:45

Mozilla produces a lot of builds. We build Firefox for somewhere between 5 and 10 platforms (depending how you count). We release Nightly and Aurora every single day, Beta twice a week, and Release and ESR every 6 weeks (at least). Each release contains an en-US build and nearly a hundred localized repacks. In the past the only builds we signed were Betas (which were once a week at the time), Releases, and ESRs. We had a pretty well established manual process for it, but because it was manual it was still error prone and impractical to use for Nightly and Aurora. Signing of Nightly and Aurora became an important issue when background updates were implemented, because one of the new security requirements with background updates was signed installers and MARs.

Enter: Signing Server

At this point it was clear that the only practical way to sign all the builds we need to is to automate it. It sounded crazy to me at first. How can you automate something that depends on secret keys, passphrases, and very unfriendly tools? Well, there are some tricks you need to know, and throughout the development and improvement of our "signing server" we've learned a lot. In this post I'll talk about those tricks and show you how you can use them (or even our entire signing server!) to make your signing process faster and easier.

Credit where credit is due: Chris AtLee wrote the core of the signing server and support for some of the signature types. Over time Erick Dransch, Justin Wood, Dustin Mitchell, and I have made some improvements and added support for additional types of signatures.


Tip #1: Collect passphrases at startup

This should be obvious to most, but it's very important not to store the passphrases to your private keys unencrypted. However, because they're needed to unlock the private keys when doing any signing the server needs to have access to them somehow. We've dealt with this by asking for them when launching a signing server instance:
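As a sketch, that startup flow might look like the following in Python. This is illustrative, not the actual signing-server code: the key names and the injectable `prompt` parameter are assumptions made for the example.

```python
import getpass

def collect_passphrases(key_names, prompt=getpass.getpass):
    """Ask the operator for each key's passphrase once, at startup.

    The passphrases live only in this process's memory; nothing is
    written to disk, so the private keys stay encrypted at rest.
    """
    passphrases = {}
    for name in key_names:
        passphrases[name] = prompt("%s passphrase: " % name)
    return passphrases

# At server startup, before accepting any signing requests:
# passphrases = collect_passphrases(["gpg", "signcode", "mar"])
```

Taking the prompt function as a parameter also makes the startup path testable without a terminal.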

$ bin/python tools/release/signing/signing-server.py signing.ini
gpg passphrase: 
signcode passphrase: 
mar passphrase: 

Because instances are started manually by someone in the small set of people with access to passphrases we're able to ensure that keys are never left unencrypted at rest.

Tip #2: Don't let just any machine request signed files

One of the first problems you run into when you have an API for signing files is how to make sure you don't accidentally sign malicious files. We've dealt with this in a few ways:

  • You need a special token in order to request any type of signing. These tokens are time limited and only a small subset of segregated machines may request them (on behalf of the build machines). Since build jobs can only be created if you're able to push to hg.mozilla.org, random people are unable to submit anything for signing.
  • Only our build machines are allowed to make signing requests. Even if you managed to get hold of a valid signing token, you wouldn't be able to do anything with it without also having access to a build machine. This is a layer of security that helps us protect against a situation where an evil doer may gain access to a loaner machine or other less restricted part of our infrastructure.

We have other layers of security built in too (HTTPS, firewalls, access control, etc.), but these are the key ones built into the signing server itself.
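One common way to implement time-limited tokens like these is an HMAC over an expiry timestamp. The sketch below is an illustration of that general technique, not Mozilla's actual token scheme; the shared secret and the token format are made up for the example.

```python
import hashlib
import hmac
import time

# Illustrative only: shared between the token issuer and the signing server.
SECRET = b"shared-secret-between-token-issuer-and-server"

def issue_token(valid_for=300, now=None):
    """Issue a token the server will accept until the expiry timestamp."""
    now = time.time() if now is None else now
    expiry = str(int(now + valid_for))
    mac = hmac.new(SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    return expiry + ":" + mac

def check_token(token, now=None):
    """Reject tokens that are malformed, tampered with, or expired."""
    now = time.time() if now is None else now
    try:
        expiry, mac = token.split(":", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False
    return int(expiry) > now
```

Because only holders of the secret can mint a valid MAC, a leaked token is useless once the embedded expiry passes.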

Tip #3: Use input redirection and other tricks to work around unfriendly command line tools

One of the trickiest parts about automating signing is getting all the necessary command line tools to accept input that's not coming from a console. Some of them are relatively easy and accept passphrases via stdin:

from subprocess import PIPE, STDOUT, Popen

# Pipe the passphrase straight into the tool's stdin.
proc = Popen(command, stdout=stdout, stderr=STDOUT, stdin=PIPE)
proc.stdin.write(passphrase)
proc.stdin.close()

Others, like OpenSSL, are fussier and require the use of pexpect:

import pexpect

# Drive the interactive prompt as if a human were typing.
proc = pexpect.spawn("openssl", args)
proc.logfile_read = stdout
proc.expect('Enter pass phrase')
proc.sendline(passphrase)

And it's no surprise at all that OS X is the fussiest of them all. In order to sign you have to unlock the keychain by hand, run the signing command, and relock the keychain yourself:

# Unlock the keychain, run the signing command, then lock it again.
child = pexpect.spawn("security unlock-keychain " + keychain)
child.expect('password to unlock .*')
child.sendline(passphrase)
check_call(sign_command + [f], cwd=dir_, stdout=stdout, stderr=STDOUT)
check_call(["security", "lock-keychain", keychain])

Although the code is simple in the end, a lot of trial, error, and frustration was necessary to arrive at it.

Tip #4: Sign everything you can on Linux (including Windows binaries!)

As fussy as automating tools like openssl can be on Linux, it pales in comparison to trying to automate anything on Windows. In the days before the signing server we had a scripted signing method that ran on Windows. Instead of providing the passphrase directly to the signing tool, it had to be typed into a modal window. It was "automated" with an AutoIt script that typed in the password whenever the window popped up. This was hacky, and sometimes led to issues if someone moved the mouse or pressed a key at the wrong time and changed window focus.

Thankfully there are tools available for Linux that are capable of signing Windows binaries. We started off by using Mono's signcode, a more or less drop-in replacement for Microsoft's:

$ signcode -spc MozAuthenticode.spc -v MozAuthenticode.pvk -t http://timestamp.verisign.com/scripts/timestamp.dll -i http://www.mozilla.com -a sha1 -tr 5 -tw 60 /tmp/test.exe
Mono SignCode - version 2.4.3.1
Sign assemblies and PE files using Authenticode(tm).
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.

Enter password for MozAuthenticode.pvk: 
Success

This works great for 32-bit binaries - we've been shipping binaries signed with it for years. For some reason that we haven't figured out though, it doesn't sign 64-bit binaries properly. For those we're using "osslsigncode", which is an OpenSSL based tool to do Authenticode signing:

$ osslsigncode -certs MozAuthenticode.spc -key MozAuthenticode.pvk -i http://www.mozilla.com -h sha1 -in /tmp/test64.exe -out /tmp/test64-signed.exe
Enter PEM pass phrase:
Succeeded

$ osslsigncode verify /tmp/test64-signed.exe 
Signature verification: ok

Number of signers: 1
    Signer #0:
        Subject: /C=US/ST=CA/L=Mountain View/O=Mozilla Corporation/CN=Mozilla Corporation
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1

Number of certificates: 3
    Cert #0:
        Subject: /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
    Cert #1:
        Subject: /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
    Cert #2:
        Subject: /C=US/ST=CA/L=Mountain View/O=Mozilla Corporation/CN=Mozilla Corporation
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1

In addition to Authenticode signing we also do GPG, APK, and a couple of Mozilla-specific types of signing (MAR, EME Voucher) on Linux. We also sign our Mac builds with the signing server. Unfortunately, the tools needed for that are only available on OS X, so we have to run separate signing servers for those.

Tip #5: Run multiple signing servers

Nobody likes a single point of failure, so we've built support into our signing client for retrying against multiple instances. Even if we lose part of our signing server pool, our infrastructure stays up:
$ python signtool.py --cachedir cache -t token -n nonce -c host.cert -H dmgv2:mac-v2-signing1.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing2.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing3.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing4.srv.releng.scl3.mozilla.com:9120 --formats dmgv2 Firefox.app
2015-01-23 06:17:59,112 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing3.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:17:59,118 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: connection error; trying again soon
2015-01-23 06:18:00,119 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:18:00,141 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: uploading for signing
2015-01-23 06:18:10,748 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:19:11,848 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:19:40,480 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: OK
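The failover behaviour in the log above can be modelled as a simple loop over the configured hosts. This is a hedged sketch of the idea, not signtool's actual implementation; the `upload` callable and host names are assumptions made for the example.

```python
def sign_with_failover(upload, hosts, attempts_per_host=1):
    """Try each signing server in turn; return the first successful result.

    `upload` is a callable that takes a host and either returns the
    signed artifact or raises ConnectionError.
    """
    last_error = None
    for host in hosts:
        for _ in range(attempts_per_host):
            try:
                return upload(host)
            except ConnectionError as e:
                last_error = e  # note the failure and try the next server
    raise RuntimeError("all signing servers failed: %s" % last_error)
```

With a pool of four servers, as in the command above, any single instance can fail without the client's request failing.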

Running your own signing server

It's easy! All of the code you need to run your own signing server is in our tools repository. You'll need to set up a virtualenv and create your own config file, but once you're ready you can attempt to start it with the following command:

python signing-server.py signing.ini

You'll be prompted for the passphrases to your private keys. If there are any problems with your config file or the passphrases, the server will fail to start. Once you've got it up and running you can try signing! get_token.py has an example of how to generate a signing token, and signtool.py will take your unsigned files and give you back signed versions. Happy signing!

http://hearsum.ca/blog/signing-software-at-scale.html


Robert Longson: New SVG/CSS Filter support in Firefox

Wednesday, January 28, 2015, 18:36

There’s a new specification for filters that replaces the filters module in SVG 1.1. Firefox and Chrome are both implementing new features from this specification.

Firefox 30 was the first version to support feDropShadow. As well as being simpler to write, feDropShadow will be faster than the equivalent individual filters, as it skips some unnecessary colour conversions that we'd otherwise perform.

Firefox 35 has support for all CSS Filters, so for simple cases you no longer need any SVG markup to create a filter. We have examples on MDN showing how to use CSS filters.

We’ve also implemented filter chaining; that is, we support multiple filters, either via URLs or CSS filters, on a single element.

As with earlier versions of Firefox you can apply SVG and CSS filters to both SVG and HTML elements.

As part of the rewrite to support SVG filters we’ve improved their performance: on Windows we use D2D to render them, taking advantage of any hardware acceleration possibilities on that platform, and on other platforms we use SIMD and SSE2 to accelerate rendering, so you can now use more filters without slowing your site down.


https://longsonr.wordpress.com/2015/01/28/new-svgcss-filter-support-in-firefox/


Pete Moore: Weekly review 2015-01-28

Wednesday, January 28, 2015, 18:30

Task Cluster Go Client

This week I got the TaskCluster Go client talking to the TaskCluster API service endpoints.

See: https://github.com/petemoore/taskcluster-client-go/blob/master/README.md

I also now have part of the client library auto-generating, e.g. see: https://github.com/petemoore/taskcluster-client-go/blob/master/client/generated-code.go

Shouldn’t be too far from auto-generating the entire client library soon and having it working, tested, documented and published.

http://petemoore.tumblr.com/post/109396160893


Mozilla Privacy Blog: How Mozilla Addresses the Privacy Paradox

Wednesday, January 28, 2015, 16:39
Earlier this month, a 20-year-old NBA Clippers fan held up a sign in a crowded Washington DC arena with her phone number on it. Seasoned privacy professionals have long lamented the old adage that if you give someone … Continue reading

https://blog.mozilla.org/privacy/2015/01/28/how-mozilla-addresses-the-privacy-paradox/


Ryan Kelly: Are we Python yet?

Wednesday, January 28, 2015, 16:01

William Lachance: mozregression updates

Wednesday, January 28, 2015, 02:52

Lots of movement in mozregression (a tool for automatically determining when a regression was introduced in Firefox by bisecting builds on ftp.mozilla.org) in the last few months. Here are some highlights:

  • Support for win64 nightly and inbound builds (Kapil Singh, Vaibhav Agarwal)
  • Support for using an http cache to reduce time spent downloading builds (Sam Garrett)
  • Way better logging and printing of remaining time to finish bisection (Julien Pagès)
  • Much improved performance when bisecting inbound (Julien)
  • Support for automatic determination on whether a build is good/bad via a custom script (Julien)
  • Tons of bug fixes and other robustness improvements (me, Sam, Julien, others…)

Also thanks to Julien, we have a spiffy new website which documents many of these features. If it’s been a while, be sure to update your copy of mozregression to the latest version and check out the site for documentation on how to use the new features described above!

Thanks to everyone involved (especially Julien) for all the hard work. Hopefully the payoff will be a tool that’s just that much more useful to Firefox contributors everywhere. :)

http://wrla.ch/blog/2015/01/mozregression-updates/


Justin Wood: Release Engineering does a lot…

Wednesday, January 28, 2015, 02:11

Hey Everyone,

I spent a few minutes a week over the last month or two compiling a list of Release Engineering work areas. Included in that list is identifying which repositories we “own” and work in, as well as where these repositories are mirrored. (We have copies in hg.m.o, git.m.o, and GitHub; some live exclusively in their home location.) This comes while we transition to a more uniform and modern design style and philosophy.

My major takeaway here is that we have A LOT of things that we do. (This list explicitly excludes repositories that are obsolete and unused.)

So without further ado, I present our page ReleaseEngineering/Repositories

You’ll notice a few things about this: we have a column for Mirrors and one for RoR (Repository of Record). “Committable Location” was requested by Hal and is explicitly for cases where, while we consider our important location the RoR, it may not necessarily be where we allow commits.

The other interesting thing is that we have automatic population of Travis and Coveralls URLs/status icons. This comes for free using some magic wiki templates I wrote.

The other piece of note here is that the table is generated from a list of pages using SemanticMediaWiki, so the links to the repositories can be populated with things like “where are the docs”, “what applications use this repo”, “who are suitable reviewers”, etc. (All of those are still TODO on the releng side.)

I’m hoping to put together a blog post at some point about how I chose to do much of this with MediaWiki. In the meantime, should any team at Mozilla find this enticing and wish to have one for themselves, much of the work I did here can be easily replicated for your team, even if you don’t need/like the multiple-repo-location magic of our table. I can help get you set up to add your own repos to the mix.

Remember, the only fields that are necessary are a repo name, the repo location, and owner(s). The last field can even be automatically filled in by a form on your page (see the end of Release Engineering’s page for an example of that form).

Reach out to me on IRC or e-mail (information is on my Mozillians profile) if you want this for your team and we can talk. If you don’t have a need on your team, you can stare at all the stuff Releng is doing and remember to thank one of us next time you see us. (Or inquire about what we do and point contributors our way; we’re a friendly group, I promise.)

http://blog.drapostles.org/archives/143


Hannah Kane: A new online home for those who #teachtheweb

Wednesday, January 28, 2015, 01:22

We’ve recently begun work on a new website that will serve the mentors in our Webmaker community—a gathering place for anyone who is teaching the Web. They’ll find activity kits, trainings, badges, the Web Literacy Map, and more. It will also be an online clubhouse for Webmaker Clubs, and will showcase the work of Hives to the broader network.

Our vision for the site is that it will provide pathways for sustained involvement in teaching the Web. Imagine a scenario where, after hosting a Maker Party, a college student in Pune wants to build on the momentum, but doesn’t know how. Or imagine a librarian in Seattle who is looking for activities for her weekly teen drop-in hours. Or a teacher in Buenos Aires who is looking to level up his own digital literacy skills. In each of these scenarios, we hope the person will look to this new site to find what they need.

We’re in the very early stages of building out the site. One of our first challenges is to figure out the best way to organize all of the content.

Fortunately, we were able to find 14 members of the community who were willing to participate in a “virtual card-sorting” activity. We gave each of the volunteers a list of 22 content areas (e.g. “Find a Teaching Kit,” “Join a Webmaker Club,” “Participate in a community discussion”), and asked them to organize the items into groups that made sense to them.

The results were fascinating. Some grouped the content by specific programs, concepts, or offerings. Others grouped by function (e.g. “Participate,” “Learn,” “Lead”). Others organized by identity (e.g. “Learner” or “Mentor”). Still others grouped by level of expertise needed.

We owe a debt of gratitude to those who participated in the research. We were able to better understand the variety of mental models, and we’re currently using those insights to build out some wireframes to test in the next heartbeat.

Once we firm up the information architecture, we’ll build and launch v1 of the site (our goal is to launch it by the end of Q1). From there, we’ll continue to iterate, adding more functionality and resources to meet the needs of our mentor community.

Future iterations will likely include:

  • Improving the way we share and discover curriculum modules
  • Enhancing our online training platform
  • Providing tools for groups to self-organize
  • Making improvements to our badging platform
  • Incorporating the next version of the Web Literacy Map

Stay tuned for more updates and opportunities to provide feedback throughout the process. We’ve also started a Discourse thread for continuing discussion of the platform.


http://hannahgrams.com/2015/01/27/a-new-online-home-for-those-who-teachtheweb/


Christian Heilmann: Where would people like to see me – some interesting answers

Wednesday, January 28, 2015, 00:50

Will code for Bananas

For pure Shits and Giggles™ I put up a form yesterday asking people where I should try to work now that I’ve left Mozilla. By no means have I approached all the companies I listed (hence an “other” option). I just wanted to see what people see me as and where I could do some good work. Of course, some of the answers disagreed and made a lot of assumptions:

Your ego knows no bounds. Putting companies that have already turned you down is very special behavior.

This is utterly true. I applied at Yahoo in 1997 and didn’t get the job. I then worked at Yahoo for almost five years a few years later. I should not have done that. Companies don’t change and once you have a certain skillset there is no way you could ever learn something different that might make yourself appealing to others. Know your place, and all that.

Sarcasm aside, I am always amazed how lucky we are to have choices in our market. There is not a single day I am not both baffled and very, very thankful for being able to do what I like and make a living with it. I feel like a fraud many a time, and I know many other people who seemingly have a “big ego” doing the same. The trick is to not let that stop you but understand that it makes you a better person, colleague and employee. We should strive to get better all the time, and this means reaching beyond what you think you can achieve.

I’m especially flattered that people thought I had already been contacted by all the companies I listed and asked for people to pick for me. I love working in the open, but that’s a bit too open, even for my taste. I am not that lucky – I don’t think anybody is.

The answers were pretty funny, and of course skewed as I gave a few options rather than leaving it completely open. The final “wish of the people list” is:

  • W3C (108 votes)
  • Canonical (39 votes)
  • Microsoft (38 votes)
  • Google (37 votes)
  • PubNub (26 votes)
  • Facebook (14 votes)
  • Twitter (9 votes)
  • Mozilla (7 votes)

PubNub’s entries had more and more exclamation points the more were submitted – I don’t know what happened there.

Other options with multiple votes were Apple, Adobe, CozyCloud, EFF, Futurice, Khan Academy, Opera, Spotify (I know who did that!) and the very charming “Retirement”.

Options labeled “fascinating” were:

  • A Circus
  • Burger King (that might still be a problem as I used to work on McDonalds.co.uk – might be a conflict of interest)
  • BangBros (no idea what that might be – a Nintendo Game?)
  • Catholic Church
  • Derick[SIC] Zoolander’s School for kids who can’t read good
  • Kevin Smith’s Movie
  • “Pizza chef at my local restaurant” (I was Pizza delivery guy for a while, might be possible to level up)
  • Playboy (they did publish Fahrenheit 451, let’s not forget that)
  • Taco Bell (this would have to be remote, or a hell of a commute)
  • The Avengers (I could be persuaded, but it probably will be an advisory role, Tony Stark style)
  • UKIP (erm, no, thanks)
  • Zombocom and
  • Starbucks barista. (this would mean I couldn’t go to Sweden any longer – they really like their coffee and Starbucks is seen as the Antichrist by many)

Some of the answers were giving me super powers I don’t have but show that people would love to have people like me talk to others outside the bubble more:

  • “NASA” (I really, really think I have nothing they need)
  • “A book publisher (they need help to move into the 21st century)”
  • “Data.gov or another country’s open data platform.” (did work with that, might re-visit it)
  • “GCHQ; Be the bridge between our worlds”
  • “Spanish Government – Podemos CTO” (they might realise I am not Spanish)
  • “any bank for a11y online banking :-/”

Some answers showed a need to vent:

  • “Anything but Google or Facebook for God sake!”
  • “OK, and option 2: perhaps Twitter? You might improve their horrible JS code in the website! ;)”

The most confusing answers were “My butthole” which sounds cramped and not a creative working environment and “Who are you?” which begs the answer “Why did you fill this form?”.

Many of the answers showed a lot of trust in me and made me feel all warm and fuzzy and I want to thank whoever gave those:

  • be CTO of awesome startup
  • Enjoy life Chris!
  • Start something of your own. You rock too hard, anyway!
  • you were doing just fine. choose the one where your presence can be heard the loudest. cheers!
  • you’ve left Mozilla for something else, so you are jobless for a week or so! :-)
  • Yourself then hire me and let me tap your Dev knowledge :D

I have a new job. I am starting on Monday, and I will announce it in probably too much detail here on Thursday. Thanks to everyone who took part in this little exercise. I have an idea of what I need to do in my new job, and the ideas listed and the results showed me that I am on the right track.

http://christianheilmann.com/2015/01/27/where-would-people-like-to-see-me-some-interesting-answers/


