
Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/

Source information: http://planet.mozilla.org/.
This diary is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.


Doug Belshaw: What does working openly on the web mean in practice? [UK Web Focus]

Wednesday, March 12, 2014, 18:25

It’s Open Education Week. In addition to facilitating a discussion on behalf of Mozilla, I’ve got a guest post on Brian Kelly’s blog entitled What Does Working Openly on the Web Mean in Practice?

Here’s a preview:

Working open is not only in Mozilla’s DNA but leads to huge benefits for the project more broadly. While Mozilla has hundreds of paid contributors, they have tens of thousands of volunteer contributors — all working together to keep the web open and as a platform for innovation. Working open means Mozilla can draw on talent no matter where in the world someone happens to live. It means people with what Clay Shirky would call cognitive surplus can contribute as much or as little free time and labour to projects as they wish. Importantly, it also leads to a level of trust that users can have in Mozilla’s products. Not only can they inspect the source code used to build the product, but actually participate in discussions about its development.

Go and read the post in full. I’d be interested in your comments (over there – I’ve closed them here to encourage you!) :-)


Bonus: The web is 25! Remix this


Image CC BY-NC Glen Scott

http://dougbelshaw.com/blog/2014/03/12/what-does-working-openly-on-the-web-mean-in-practice-uk-web-focus/


Geoffrey MacDougall: Infographic: Our 2013 Fundraising Success

Wednesday, March 12, 2014, 06:28

2013 was Mozilla’s most successful fundraising year ever. We grew our core operating grants and more than doubled the size of our donations campaign.

This is a shared, project-wide accomplishment. More than 40 Mozillians from across the foundation, corporation, and community pulled together to make it happen. And I’m proud of what we accomplished.

We still have a long way to go. We’re overly dependent on a few key funders and there’s a big gap between our current revenue and our goal of matching Wikimedia’s fundraising program.

But 2013 was an indication that we’re in good shape, with the right team, and a mission our community loves.

[Infographic: mozilla-fundraising-infographic-web]


Filed under: Mozilla, Pitch Geek

http://intangible.ca/2014/03/11/infographic-our-2013-fundraising-success/


Nick Cameron: Directory tiles, incentivisation, and indirection

Wednesday, March 12, 2014, 06:25
Note: I have absolutely nothing to do with directory tiles, but they have been on my mind (and on plenty of other people's), so here are my thoughts. My opinions are not those of my employer, but I am very grateful to be able to voice these opinions.

I agree with the need to diversify our revenue streams over the long run, and I think I agree with the sentiment that we should not 'leave money on the table'. However, some of the discussion around directory tiles makes me uneasy. (I think the worst thing about the whole thing has been how badly the idea was communicated, but that has been pretty well established, so I won't beat that horse any further. Personally, I don't think any revenue from the idea would be significant because, aiui, only new users will see the ads and then at most nine times, which if I were a company, I wouldn't pay for. But I don't know anything about such things and there are plenty of people at Mozilla who DO know about this stuff, so I'll believe it could make money somehow).

My position only makes sense if you assume that advertising is bad per se. I do. I know many people do not, especially in the technology sector. But I think advertising is a bane of modern civilisation - not just on the internet, but in magazines, on billboards, on public transport, in sport; it is poison. It is particularly insidious in that we don't realise we are seeing and being influenced by advertising, in part by design and in part because of its ubiquity. On the rare occasions I have spent time away from it (the subway in Prague a decade ago, hiking in the mountains), returning to a world full of advertising feels as jarring and unpleasant as it ought to. I would love to live in a world where we paid for websites and software. If I could pay 1/10th of a cent for every page I viewed and if Mozilla could be funded by administering that and taking 1/10th of a percent of it, then I would be very happy indeed. Unfortunately, people on the whole seem to prefer free, and so it is a dream.

The argument I dislike is 'we already send search traffic to Google in exchange for cold, hard cash, and Google in turn makes money by showing these people ads; therefore, showing our users ads directly in the browser is no different or no worse or something'. As a software engineer, I know that indirection is very, very important. It is totally incorrect to treat a value the same way as a pointer to a value, and I believe the analogy holds with monetisation too. It comes down to incentives - money is a very powerful incentive (not the only one, and I trust Mozilla more than pretty much any other organisation to balance other incentives, but it is still an incentive - we each want to keep getting paid to keep doing these awesome jobs we have).

So, with the current system, in order to maximise search traffic and thus income, we might tweak our design to make the search box more prominent or otherwise encourage more users to search via the search box (I believe that this is not quite the case because we do not get paid _per search_, i.e., there is some slack in the system, and I don't think we have ever done something like this or intend to). That is not so bad, I would not feel bad about encouraging people to search the internet - it is kind of an essential task. Now Google (or whoever we might sell search traffic to in the future) is incentivised to show their users more ads, but there is no incentive for us to modify the browser to show the user more ads.

With directory tiles (or any system where we are directly showing ads to users) the above does not hold. The monetary incentive is to show users more ads and more often. And that (in my opinion) makes for a worse user experience.

In summary, when the incentive is indirect, optimising for it does not negatively affect our users. When the incentive is direct, optimising for it does negatively impact our users. And that makes me very uneasy about going down that road.

However, I fear we might have to. We need money to fund our mission for a better web and the current situation may not last forever (maybe it will, and we can all live happily ever after, but I fear everything changes). Fortunately, I trust Mozilla more than anyone to ignore the incentive described above; I just wish we didn't have to ignore it.

http://featherweightmusings.blogspot.com/2014/03/directory-tiles-incentivisation-and.html


Armen Zambrano Gasparnian: Debian packaging and deployment in Mozilla's Release Engineering setup

Wednesday, March 12, 2014, 01:51
I've been working on creating my second Debian package for Mozilla's Release Engineering infrastructure and it's been a pain, just like the first one.

To be honest, it's been hard to figure out the correct flow and to understand what I was doing.
In order to help other people in the future, I decided to document the process and workflow.
This is not to replace the documentation but to help understand it.

If you're using a Mac or a Windows machine, notice that we have a VM available on EC2 that has the tools you need: ubuntu64packager1.srv.releng.use1.mozilla.com. The documentation can be found in "How to build DEBs". You can use this blog post to help you get up to speed.

During our coming work week we will look at a completely different approach that would make changes like this easier for developers to make without Release Engineering intervention. It is not necessarily self-serve for Debian deployments.

Goal

We want to upgrade a library or a binary on our infrastructure.
For Linux, we use Puppet to deploy packages, and we deploy them through a Debian repository.
Before we deploy the package through Puppet, we have to add the package to our internal Debian repository. This blog post will guide you to:

  1. Create the .deb files
  2. Add them to our internal Debian repository
  3. Test the deployment of the package with Puppet

Debian packaging

For a newbie, it can be a very complicated system that has many many parts.

In short, I've learned that there are three different files involved that allow you to recreate the .deb files. The file extensions are .dsc, .orig.tar.gz and .diff.gz. If you find the source package page for your desired package, you will notice that these three files are available to download. We can use the .dsc file to generate all the .deb files.

For full info you can read the Debian Packaging documentation and/or look at the building tutorial to apply changes to an existing package.

Ubuntu version naming

If I understand correctly (IIUC), "precise" is the codename for an Ubuntu release. In our case it refers to Ubuntu 12.04 LTS.

Versions of a package

IIUC, a package can have 3 different versions or channels:
  • release. The version that came out with a specific release
    • Ubuntu 12.04 came out with mesa 8.0.2-0ubuntu3
  • security. The latest security release
    • e.g. mesa 8.0.4-0ubuntu0.6
  • updates. The latest update
    • e.g. mesa 8.0.4-0ubuntu0.7
If you load the "mesa" source package page, you will find a section called "Versions published" and you will see all three versions listed there.

Precise, not precise-updates

In our specific releng setup, we always use "precise" as the distribution and not "precise-updates".
I don't know why.

Repackage the current version or the latest one?

If you're patching a current package, do not try to jump to the latest available version unless necessary. Choose the version closest to our current package to reduce the number of new dependencies.

In my case I was trying to go for mesa 8.0.4-0ubuntu0.7 instead of mesa 8.0.2-0ubuntu3.
Due to that, I had all sorts of difficulties and it had lots of new dependencies.
Even then, I realized later that I had to go for mesa 8.0.4-0ubuntu0.6 as a minimum.

Puppetagain-build-deb OR pbuilder?

From Mozilla's Release Engineering's perspective, we're only considering two ways of creating our .deb files: 1) puppetagain-build-deb and 2) pbuilder.

FYI puppetagain-build-deb was written to make it very simple to create the required .deb files.
Unfortunately, in my case, puppetagain-build-deb could only handle the dependencies of mesa 8.0.2 and not the ones of 8.0.4.

I describe how to use pbuilder in the section "Create the debian/ directory".
Below is the "puppetagain-build-deb" approach. It is also documented here.

Puppetagain-build-deb

At this point we have the "package_name-debian" directory under modules/packages/manifests in Puppet. Besides that, we need to download the .orig.tar.gz file.

To create the .deb files we need 1) the debian directory + 2) the original tar ball.

In most cases, we should be able to use ubuntu64packager1 and puppetagain-build-deb to build the deb files. If not, fall back to the pbuilder approach described under "Create the debian/ directory" below.

NOTE: The .orig.tar.gz file does not need to be committed.

cd puppet
hg up -r d6aac1ea887f #It has the 8.0.2 version checked-in
cd modules/packages/manifests
wget https://launchpad.net/ubuntu/+archive/primary/+files/mesa_8.0.2.orig.tar.gz
# The .deb files will appear under /tmp/mesa-precise-amd64
puppetagain-build-deb precise amd64 mesa-debian
# The .deb files will appear under /tmp/mesa-precise-i386
puppetagain-build-deb precise i386 mesa-debian

Create the debian/ directory

In Puppet we have "debian" directories checked in (e.g. mesa-debian/) for any Debian package we deploy to our systems through it. The debian directory is produced with the standard Debian packaging instructions.

If you have access to a Linux machine you can follow the steps that rail gave me to generate the deb files. You can also log in to ubuntu64packager1 (you have to start it up first).

To make it work locally, I had to install pbuilder with "sudo apt-get install pbuilder".
I also needed to create my own pbuilder images.

In short, to recreate .deb files without modifying them you can follow these steps:
  1. use dget to download all three required files (.dsc, .orig.tar.gz and .diff.gz)
  2. use pbuilder --build to generate the .deb files
Since we want to patch the libraries rather than use them as-is, we also have to run these steps in between step 1 & step 2:
  1. dpkg-source -x
    • it extracts the source files
  2. download your patch under debian/patches
  3. append a line to debian/patches/series
    • the line indicates the filename of your patch under debian/patches
  4. set DEBFULLNAME
    • to bump the version when repackaging the source
  5. dpkg-source -b
    • rebuild the source package
You can read rail's explanation for full details.
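
As a concrete illustration, here is a rough command-line sketch of that flow (the version numbers, the URL and the patch file name below are placeholders, not the exact ones from the bug):

# fetch the .dsc, .orig.tar.gz and .diff.gz in one go
dget https://launchpad.net/ubuntu/+archive/primary/+files/mesa_8.0.4-0ubuntu0.6.dsc
# extract the source tree described by the .dsc
dpkg-source -x mesa_8.0.4-0ubuntu0.6.dsc
# drop your patch into debian/patches and register it in the series file
cp ~/my-fix.patch mesa-8.0.4/debian/patches/
echo my-fix.patch >> mesa-8.0.4/debian/patches/series
# identify yourself for the repackaged source
export DEBFULLNAME="Your Name"
# rebuild the source package, then build the .deb files in a clean chroot
dpkg-source -b mesa-8.0.4
sudo pbuilder --build mesa_8.0.4-0ubuntu0.6.dsc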

Keep track of the debian/ directory in Puppet

The previous section should have generated your desired "debian" directory.
We now need to check it in to our puppet repository to keep track of it.
cp -r mesa-8.0.4/debian ~/puppet/modules/packages/manifests/mesa-debian
cd ~/puppet
hg addremove
hg diff

Having Debian packaging issues?

rail and dustin have experience in this area; however, if we have further Debian packaging issues we can reach out to sylvestre and glandium.

Determine involved libraries

To create our Puppet patch, we have to determine which packages are involved.
For instance, the mesa bug required updating five different libraries.
rail explains on comment 26 how to discover which libraries are involved.
You can list the package names you compiled with something like this:
ls *deb | awk -F_ '{print $1}' | xargs
# copy the list of names and run the following on the target machine:
dpkg -l 2>/dev/null | grep ^ii | awk '{print $2}'

Create a no-op puppet change (pinning the version)

If the package already exists on our infra but it is not managed by Puppet (e.g. the library came by default with the OS), then it is better to first write a puppet change to pin the versions.

To write the puppet change you will have to answer these questions:
  • Do we want this change for the in-house and ec2 machines? Or a subset?
  • Do we want the change for both 64-bit and 32-bit machines?
  • What are the versions currently running on the machines that would be affected?
    • Check each pool you're planning to deploy to, since we could have inconsistencies between them
Answering these questions will determine which files to modify in puppet.
Remember that you will have to test that your puppet change runs without issues.

Integrating your .deb files into the releng Debian repository and syncing them to the puppet masters

The documentation is here. And here's what I did for it.

1 - Sync locally the Debian packages repository
We need to sync locally from the "distinguished master" the "releng", "conf" and "db" directories:
sudo su
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/releng/ /data/repos/apt/releng/
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/conf/ /data/repos/apt/conf/
rsync -av releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/db/ /data/repos/apt/db/

2 - Import your .deb files into the Debian repo

cd /data/repos/apt
cp ~armenzg/tmp/mesa_8.0.4.orig.tar.gz releng/pool/main/m/mesa
reprepro -V --basedir . include precise ~armenzg/tmp/out64/*.changes
reprepro -V --basedir . includedeb precise ~armenzg/tmp/out32/*.deb

If the package is new you will also have to place the .orig.tar.gz file under /data/repos/apt/releng. reprepro will let you know, as it will fail until you do.

3 - Rsync the repo and db back to the distinguished master
Push your file back to the official repository:
rsync -av /data/repos/apt/releng/ releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/releng/
rsync -av /data/repos/apt/db/ releng-puppet2.srv.releng.scl3.mozilla.com:/data/repos/apt/db/

Your files should show up in here:
http://puppetagain.pub.build.mozilla.org/data/repos/apt/releng/pool/main

NOTE: Pushing the .deb files to the repo does not update the machines.

4 - Fix the permissions at the distinguished master
ssh root@releng-puppet2.srv.releng.scl3.mozilla.com
puppetmaster-fixperms

Test that you can update

Before you can sync up a host with puppet you need to let the puppet servers sync up with the distinguished master.

For instance, my puppet runs were failing because the packages were missing at:
http://puppetagain-apt.pvt.build.mozilla.org/repos/apt/releng/pool/main/m/mesa

To test my changes, I created two EC2 instances. For other pools you will have to pull a machine from production.

1 - Prepare your user environment
ssh armenzg@releng-puppet2.srv.releng.scl3.mozilla.com
cd /etc/puppet/environments/armenzg/env
hg pull -u && hg st

2 - Run a no-op test sync from your loaned machines
puppet agent --test --environment=armenzg --server=releng-puppet2.srv.releng.scl3.mozilla.com

3 - Under your user environment on the puppet master, bump the versions and the repoflag


4 - Run puppet syncs again on the test instances and watch for the changes on the Puppet output

puppet agent --test --environment=armenzg --server=releng-puppet2.srv.releng.scl3.mozilla.com

5 - Verify that the package versions are the right ones


6 - Test a rollback scenario

You will have to remove the bumping of the versions from step #3 and bump the repoflag again.
Run steps 4 and 5 to see that we downgrade properly.

7 - Clean up ubuntu64packager1 and shut it off

8 - Deploy your change like any other Puppet change

Read all the steps at once

Commands and minimum comments:
https://bugzilla.mozilla.org/show_bug.cgi?id=975034#c37



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

http://armenzg.blogspot.com/2014/03/debian-packaging-and-deployment-in.html


Andrea Marchesini: Audio volume and mute per window object

Wednesday, March 12, 2014, 01:06

I have finally found some time to finish a set of patches for a nice feature that will allow addon/Firefox developers to control audio volumes per window object.

Through this new feature any window object has two new attributes: audioMuted and audioVolume (accessible from chrome code only, using nsIDOMWindowUtils). The aim is to change the volume of any HTML5 media element and any WebAudio destination node (soon WebRTC and also the FMRadio API). The control of the volumes works "in cascade" - if a window has an iframe, the iframe elements will be affected by the parent window's audio attributes.

The code has just landed on m-c and it will be available on Nightly in a few hours.

Also, in order to test this feature I wrote an addon. As you can see, the UI is not the best… I know, but it was just a proof of concept; I’m sure somebody else will do a better job! Download the addon.

This feature is currently disabled by default, but it’s easy to enable it by changing or creating a preference in about:config. Some instructions: open about:config in a new tab, add a new boolean preference called media.useAudioChannelService and set it to true. This preference will enable the AudioChannelService for any HTMLMediaElement and any WebAudio destination node.
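
If you prefer to flip the preference programmatically, for example from an addon, a minimal chrome-code sketch would be (the pref name is the one above; everything else is standard Services glue):

// enable the AudioChannelService preference from chrome code
Components.utils.import("resource://gre/modules/Services.jsm");
Services.prefs.setBoolPref("media.useAudioChannelService", true);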

AudioChannelService is the audio policy controller of Firefox OS. You will know the AudioChannelService is enabled when, as you change tabs, media elements in invisible tabs are muted. From now on, you can use the addon.

The addon UI can be opened from Tools -> Web Developers -> Audio Test.

Here is a screenshot:

[Screenshot: AudioTest addon]

From a code point of view, you can play with this audio feature from the nsIDOMWindowUtils interface. For instance:

var currentBrowser = tabbrowser.getBrowserAtIndex(0 /* an index */);
var utils = currentBrowser.contentWindow
                          .QueryInterface(Ci.nsIInterfaceRequestor)
                          .getInterface(Ci.nsIDOMWindowUtils);
dump("The default audioVolume should be 1.0: " + utils.audioVolume + "\n");
utils.audioVolume = 0.8;
dump("By default the window is not muted: " + utils.audioMuted + "\n");
utils.audioMuted = true;

(More info about the tabbrowser object can be found on this page.)

There is also a notification that is dispatched when a window starts and stops playing audio: media-playback. The value of this notification can be active or inactive. If you are interested in how to use it, there are a couple of tests on bug 923247.
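
For example, a minimal chrome-code sketch of listening for that notification might look like this (the observer body is illustrative; see the tests on bug 923247 for authoritative usage):

Components.utils.import("resource://gre/modules/Services.jsm");

var playbackObserver = {
  observe: function(subject, topic, data) {
    // data is "active" when a window starts playing audio, "inactive" when it stops
    dump("media-playback notification: " + data + "\n");
  }
};

Services.obs.addObserver(playbackObserver, "media-playback", false);
// later, when no longer interested:
// Services.obs.removeObserver(playbackObserver, "media-playback");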

What can we do with this new feature? Here are some ideas:

  • we could change or animate the tab icon to let the users know which tab is playing.
  • addons to mix audio between tabs
  • advanced settings to disable audio by default - either enabling audio just for the visible tab or only for specific origins, etc.
  • other options, such as “disable audio for all the tabs but this one” in some context menu.

Please, use this feature and let me know if you have problems, new ideas or needs. Good hack!

http://143th.net/post/79292963535


Lawrence Mandel: Stepping Down as Chair of the Engineering Meeting

Wednesday, March 12, 2014, 00:24

As I previously shared, I have accepted a new role at Mozilla. As my responsibilities have changed, I am stepping down as the chair of the Engineering Meeting.

Looking back over the last year or so of running this meeting, I am pleased by the positive reaction to the meeting reboot in June 2013, where we refocused on the needs of engineering, and by the successful follow-on changes, such as including additional engineering teams and broadcasting and archiving the meeting on Air Mozilla.

I would like to thank everyone who took the time to provide feedback about the meeting. The changes to the meeting were a direct result of our conversations. I would also like to thank Richard Milewski and the Air Mozilla team for working out how to broadcast the meeting to our global audience each week.

I chaired my last meeting on Mar 5, 2014. You can watch my swan song on Air Mozilla.

Chris Peterson takes over as the chair of the Engineering Meeting effective this week.


Tagged: engineering, meeting, mozilla

http://lawrencemandel.com/2014/03/11/stepping-down-as-chair-of-the-engineering-meeting/


Sean McArthur: Persona is dead, long live Persona

Tuesday, March 11, 2014, 22:19

The transition period was really tough for me. It felt like we were killing Persona. But more like tying a rope around it and dragging it behind us as we road tripped to Firefox OS Land. I first argued against this. Then, eventually I said let’s at least be humane, and take off the rope, and put a slug in its head. Like an Angel of Death. That didn’t happen either. The end result is one where Persona fights on.

Persona is free open source software, and has built up a community who agree that decentralized authentication is needed on the Internet. I still think Persona is the best answer in that field, and the closest to becoming the answer. And it’s not going away. We’re asking that the Internet help us make the Internet better.

Firefox Accounts

In the meantime I’ll be working on our Firefox Accounts system, which understandably could not rely entirely on Persona1. We need to keep Firefox competitive, since it’s what pays for us to do all the other awesomizing we do. Plus, as the Internet becomes more mobile and more multi-device, we need to make sure there is an alternative that puts users first. A goal of Firefox Accounts is to be pluggable, and to integrate with other services on the Web. Why should your OS demand you use their siloed services? If you want to use Box instead of iCloud, we want you to use it.

How does this affect Persona? We’re actually using browserid assertions within our account system, since it’s a solved problem that works well. We’ll need to work on a way to get all sorts of services working with your FxAccount, and it might include proliferating browserid assertions everywhere2. As we learn, and grow the service so that millions of Firefox users have accounts, we can explore easing them into easily and automatically being Persona users. This solves part of the chicken-egg problem of Persona, by having millions of users ready to go.

I’d definitely rather this have ended up differently, but I can also think of far worse endings. The upside is, Persona still exists, and could take off more so with the help of Firefox. Persona is dead, long live Persona!


  1. Sync needs a “secret” to encrypt your data before it’s sent to our servers. The easiest solution for users is to provide us a password, and we’ll stretch that and make a secret out of it (so, we don’t actually know your password). Persona doesn’t give us passwords, so we can’t use it. 

http://seanmonstar.com/post/79278627673


Planet Mozilla Interns: Michael Sullivan: Inline threading for TraceMonkey slides

Tuesday, March 11, 2014, 20:26

Melissa Romaine: Make Things Do Stuff

Tuesday, March 11, 2014, 18:24

Digital Learning with Young People in the United Kingdom

http://makethingsdostuff.co.uk/

[Adapted from a panel talk I gave at DML 2014 on March 7, 2014 in Boston, MA.]

There’s a lot of exciting digital making happening in the United Kingdom, so I want to share the story of Make Things Do Stuff, a network and website of maker-focused organizations -- including Mozilla -- that promote digital learning among young people in the UK.

First, let’s set the scene.
In January 2011, Nesta -- an Innovation Foundation in the UK -- published the “Next Gen.” report, co-authored by Sir Ian Livingstone and Alex Hope. Livingstone was part of the team that supported and distributed games such as Dungeons & Dragons, and most famously Lara Croft: Tomb Raider. The authors found at that time that the UK video games sector brought in over £2 billion in sales, and was larger than both the film and music industries. Between 2006 and 2008 the visual effects sector -- encompassing both film and video games -- grew at 16.8% with most of its talent being local. Just after 2008, however, the industry quickly began losing its local talent to overseas competition, and was forced to source overseas talent of its own to keep sales high.


The over-arching conclusion of the report was that the education system failed to fill the skills gap in the industry. Next Gen. looked at how this problem could be tackled, and gave two major recommendations:
  • Put Computer Science on the national curriculum.
  • Have GCSE (General Certificate of Secondary Education) in all schools.
=> Quick Explanation: Unlike in many countries where high school graduates receive one certificate for satisfactory completion of course work -- a diploma or GED -- in the UK students between 14 and 16 years of age take GCSE exams in each subject -- some compulsory, and some elective.

Stemming from Next Gen. came 2 years of consultation between education specialists, technology experts, and government policy advisers to build a computer science curriculum. The view taken is that coding/programming is an essential skill for joining the workforce. Thus, starting in Key Stage 1 students aged 5-7 will be introduced to algorithms and logical reasoning. With each successive Key Stage, students will build up their knowledge and skills base, and by the end of Key Stage 4 (ages 14-16) they will be able to code in at least 2 languages, and have the creative and technical abilities for more sophisticated study in CS or a professional career. The curriculum goes into full effect this September.

Learning through making
At the same time, yearly spending on digital education in schools reached into the hundreds of millions, and yet real transformation in learning and teaching remained elusive. Could it be that interactive whiteboards and one-to-one tablet schemes aren’t the final solution?  So Nesta teamed up with London Knowledge Lab and Learning Sciences Research Institute to see how teachers and learners could be more engaged in the design and use of learning technologies. After researching 8 types of learning with technology, they largely concluded that learning through making is one of the more effective strategies. (Decoding Learning: The Proof, Promise, and Potential of Digital Education.)

With solid research in hand, Nesta, Nominet Trust (a funder for socially-minded tech solutions), and Mozilla (a socially-minded engineering organization) banded together to create Make Things Do Stuff. This relationship works not only because we have robust research, funding and tools, but also because we all recognize the importance of bringing together other organizations in the making space, and know that the collaborative effort is greater than the sum of its parts.

With over 40 organizations in our network, we have great depth and representation across a lot of disciplines. Our partners include everyone from small after-school coding clubs and DIY digital making haberdasheries to large tech event planners and government supporters like the Cabinet Office.

But the best part of Make Things Do Stuff is the Youth Editorial team, a group of 25 super talented young makers with interests ranging from programming apps that tackle social issues to musicians with great YouTube followings.  Some speak at youth conferences as evangelists, putting a relatable face on the movement, while others run hackathons in local communities. This stellar team creates content for the website – by young people for young people – and invites others like them to make things and share their stories.

Challenges
Of course it isn’t all smooth sailing. With so many stakeholders pulling in the same direction, it’s tough to make sure everyone feels visible and that their values are prioritized. As the educational Events Manager helping wrangle everyone, my three main pain-points are:

Audience: We work with organizations, not schools, so a lot of young people we see are already highly motivated to learn through making. While it’s great that we’re reaching them through their passions and building on them, I wonder about the young people we’re not reaching. To mitigate this, we try to attend a variety of events – everything from the nationwide Big Bang Fair with 65,000 young people getting their digital hands dirty over 4 days, to small-scale workshops where 25 school children made robots out of plastic cups, remixed our Keep Calm And…Thimble make, and created circuits out of play-dough at an event hosted by the new Children’s Museum at MozLDN. (Some fun remixes: Live long and Prosper, Freak Out and Throw Stuff, Eat Sleep Rave Repeat.)

“Brand Soup”: Although all of the organizations are under the banner of Make Things Do Stuff, we also have responsibilities to our individual organizations to increase visibility and brand recognition. When we’re at events sometimes all you see is a bunch of logos on a sign, and I wonder, what are we really promoting?  To make sure we don’t get lost in the politics of brand soup, we bring it all back to our shared mission and message: we’re here to help everyone move beyond digital consumption to digital creation. We focus on the young people and remember that we’re here for them, not the other way around. And suddenly, it’s clear skies ahead.

Gathering Data: Again, because we’re not partnered with schools it can be difficult to measure the effect our efforts have on the overall learning environment. Moreover, it’s near impossible to come up with a universal definition of effect; are we measuring national test outcomes? Are we looking at job-readiness skills? This one continues to be a challenge, but as the maker landscape changes, I look forward to seeing solutions surface.

Despite our difficulties with data, I’m happy to share that we reached 100,000+ young people over 3 months last summer – our first summer – thanks to our collaborative efforts. Make Things Do Stuff will also continue to change and grow as new technologies enter the field, and as young people find new ways to use old technologies. It’s an exciting time to be in this space, and I hope you’ll become a part of our ever-evolving story. 
http://makethingsdostuff.co.uk/

http://londonlearnin.blogspot.com/2014/03/make-things-do-stuff.html


Gervase Markham: The Necessity of Management

Tuesday, March 11, 2014, 15:48

Getting people to agree on what a project needs, and to work together to achieve it, requires more than just a genial atmosphere and a lack of obvious dysfunction. It requires someone, or several someones, consciously managing all the people involved. Managing volunteers may not be a technical craft in the same sense as computer programming, but it is a craft in the sense that it can be improved through study and practice.

– Karl Fogel, Producing Open Source Software

http://feedproxy.google.com/~r/HackingForChrist/~3/42x2_reTePU/


William Duyck: Open Education and the Open Web – Day 2

Tuesday, March 11, 2014, 15:04

Today is day 2 of Open Education Week, and an interesting question has been asked.

Questions: What do you see as the link between Open Education and the Open Web? Does the former depend on the latter?

I took the time to answer this over on my Year In Industry blog, so go and take a nose… OR join the discussion over on Google+.

The tl;dr for me is that an Open Education does not require the use of the Open Web. But it helps.

http://blog.wduyck.com/2014/03/open-education-and-the-open-web-day-2/


Doug Belshaw: On the link between Open Education and the Open Web

Tuesday, March 11, 2014, 14:30

I’m currently moderating a discussion as part of Open Education Week on behalf of Mozilla. In today’s discussion prompt I asked:

What do you see as the link between Open Education and the Open Web? Does the former depend on the latter?

It’s a question that depends on several things, not least your definition of the two terms under consideration. Yesterday, in answer to the first discussion prompt, I used Mozilla Thimble to make this:

Open Education means collaborating, sharing and working in ways that benefit students and fellow educators.

The above would be my current (brief) definition of Open Education. But what about the Open Web? Here I’m going to lean on Mark Surman’s definition from 2010:

Open web = freedom, participation, decentralization and generativity.

That last word, ‘generativity’ is an interesting one. Here’s part of the definition from Wikipedia:

Generativity in essence describes a self-contained system from which its user draws an independent ability to create, generate, or produce new content unique to that system without additional help or input from the system’s original creators.

As an educator, I believe that the role of teachers is to make themselves progressively redundant. That is to say, the learner should take on more and more responsibility for their own learning. Both teachers and learners can work together within an Open Educational Ecosystem (OEE) that is more than the sum of its parts.

The more I think about it, this is how the Open Web is similar to Open Education. Both are trying to participate in a generative ecosystem benefitting humankind. It’s about busting silos. It’s about collaborating and sharing.

Does Open Education depend upon the Open Web? No, I wouldn’t say it that strongly. Open Education can happen without technology; you can share ideas and resources without the web. However, the Open Web significantly accelerates the kind of sharing and collaboration that can happen within an OEE. In other words, the Open Web serves as a significant catalyst for Open Education.

What do you think? What’s the relationship between Open Education and the Open Web?

Join the discussion!

http://dougbelshaw.com/blog/2014/03/11/on-the-link-between-open-education-and-the-open-web/


Marco Zehe: Easy ARIA Tip #7: Use “listbox” and “option” roles when constructing AutoComplete lists

Tuesday, March 11, 2014, 14:03

One question that comes up quite frequently is which roles to use for an auto-complete widget, or more precisely, for the container and the individual auto-complete items. Here’s my take on it. Let’s assume the following rough scenario (note that the auto-complete you have developed may not work in exactly the same way, but probably in a similar one):

Say your auto-complete consists of a textbox or textarea that, when typing, has some auto-complete logic in it. When auto-complete results appear, the following happens:

  1. The results are being collected and added to a list.
  2. The container gets all the items and is then popped into existence.
  3. The user can now either continue typing or press DownArrow to go into the list of items.
  4. Enter or Tab selects the current item, and focus is returned to the text field.

Note: If your widget does not support keyboard navigation yet, go back to it and add that. Without it, you’re leaving a considerable number of users out of the advantages you want to provide. This does not only apply to screen reader users.

The question now is: Which roles should the container and individual items get from WAI-ARIA? Some think it’s a list, others think it’s a menu with menu items. There may be more cases, but those are probably the two most common ones.

My advice: use the listbox role for the container, and option for the individual auto-complete items the user can choose. The roles menubar, menu, and menuitem plus related menuitemcheckbox and menuitemradio roles should be reserved for real menu bar/dropdown menu, or context menu scenarios. But why, you may ask?

The short version: Menus on Windows are a hell of a mess, and that’s historically rooted in the chaos that is the Win32 API. Take my word for it and stay out of that mess and the debugging hell that may come with it.

The long version: Windows has always known a so-called menu mode. That mode is in effect once a menu bar, a drop-down menu, or a context menu becomes active. This has been the case since the Windows 3.1/3.11 days, possibly even longer. To communicate the menu mode state to screen readers, Windows, or more precisely, Microsoft Active Accessibility, uses four events:

  1. SystemMenuStart: A menu bar just became active.
  2. SystemMenuPopupStart: If a SystemMenuStart event had been fired before, a drop-down menu just became active. If a SystemMenuStart event had not been fired before, a context menu just became active. If another SystemMenuPopupStart preceded this one, a sub menu just opened.
  3. SystemMenuPopupEnd: The popup just closed. Menu mode returns to either the previous Popup in the stack (closing of a sub menu), the menu bar, or falls out of menu mode completely.
  4. SystemMenuEnd: A menu bar just closed.

These events have to arrive in this exact order. Screen readers like JAWS or Window-Eyes rely heavily on the event order being correct, and they ignore everything that happens outside the menus once the menu mode is active. And even NVDA, although it has no menu mode that is as strict as that of other “older” screen readers, relies on the SystemMenuStart and SystemMenuPopupStart events to recognize when a menu gained focus, because opening a menu does not automatically focus any item by default. An exception is JAWS, which auto-selects the first item it can once it detects a context or start menu opening.

You can possibly imagine what happens if the events get out of order, or are not all fired in a complete cycle. Those screen readers that rely on the order get confused, stay in a menu mode state even when the menus have all closed etc.

So, when a web developer uses one of the menu roles, they set this whole mechanism in motion, too. Because it is assumed a menu system like a Windows desktop app is being implemented, browsers that implement WAI-ARIA have to also send these events to communicate the state of a menu, drop-down or context or sub menu.

So, what happens in the case of our auto-complete example if you were to use the role menu on the container, and menuitem on the individual items? Let’s go back to our sequence from the beginning of the post:

  1. The user is focused in the text field and types something.
  2. Your widget detects that it has something to auto-complete, populates the list of items, applies role menuitem to each, and role menu to the container, and pops it up.
  3. This causes a SystemMenuPopupStart event to be fired.

The consequences of this event are rather devastating to the user. Because you just popped up the list of items, you didn’t even set focus to one of its items yet. So technically and visually, focus is still in your text field, the cursor is blinking away merrily.

But for a screen reader user, the context just changed completely. Because of the SystemMenuPopupStart event that got fired, screen readers now have to assume that focus went to a menu, and that just no item is selected yet. Worse, in the case of JAWS, the first item may even get selected automatically, producing potentially undesired side effects!

Moreover, the user may continue typing, even use the left and right arrow keys to check their spelling, but the screen reader will no longer read this to them, because their screen reader thinks it’s in menu mode and ignores all happenings outside the “menu”. And one last thing: Because you technically didn’t set focus to your list of auto-complete items, there is no easy way to dismiss that menu any more.

On the other hand, if you use listbox and option roles as I suggested, none of these problems occur. The list will be displayed, but because it doesn’t get focus yet, it doesn’t disturb the interaction with the text field. When focus gets into the list of items, by means of DownArrow, the transition will be clearly communicated, and when it is transitioning back to the text field, even when the list remains open, that will be recognized properly, too.

So even when you sighted web developers think that this is visually similar to a context menu or a popup menu or whatever you may want to call it, from a user interaction point of view it is much more like a list than a menu. A menu system should really be confined to an actual menu system, like the one you see in Google Docs. The side effects of the menu related roles on Windows are just too severe for scenarios like auto-completes. And the reason for that lies in over 20 years of Windows legacy.

Some final notes: You can spice up your widget by letting the user know that auto-complete results are available via a text that gets automatically spoken if you add it in a text element that is moved outside the viewport, but apply an attribute aria-live=”polite” to it. In addition, you can use aria-expanded=”true” if you just popped up the list, and aria-expanded=”false” if it is not there, both applied to your input or textarea element. And the showing and hiding of the auto-complete list should be done via display:none; or visibility:hidden; and their counterparts, or they will appear somewhere in the user’s virtual buffer and cause confusion.
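
To pull those pieces together, here is a rough, purely illustrative script-side sketch of how the suggested roles and attributes could be applied when results appear. It assumes a text input, an (initially hidden) list element, and an off-screen element with id "announce" that carries aria-live="polite"; none of these names come from a real widget:

// showSuggestions: apply listbox/option roles to a freshly populated result list
function showSuggestions(input, list, results) {
  list.setAttribute("role", "listbox");
  list.innerHTML = "";
  results.forEach(function(text) {
    var item = document.createElement("li");
    item.setAttribute("role", "option"); // option, not menuitem
    item.textContent = text;
    list.appendChild(item);
  });
  // tell assistive technology that the popup attached to the input is now open
  input.setAttribute("aria-expanded", "true");
  list.style.display = "block";
  // announce the result count via the polite live region
  document.getElementById("announce").textContent =
    results.length + " suggestions available.";
}

function hideSuggestions(input, list) {
  list.style.display = "none";
  input.setAttribute("aria-expanded", "false");
}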

A great example of all of this can be seen in the Tweet composition ContentEditable on twitter.com.

I also sent a proposal for an addition to the Protocols and Formats Working Group at the W3C, because the example in the WAI-ARIA authoring practices for an auto-complete doesn’t cover most advanced scenarios, like the one on Twitter and others I’ve come across over time. I hope the powers that be follow my reasoning and make explicit recommendations regarding which roles should and shouldn’t be used for auto-completes!


http://www.marcozehe.de/2014/03/11/easy-aria-tip-7-use-listbox-and-option-roles-when-constructing-autocomplete-lists/


Daniel Stenberg: http2 in curl

Sunday, March 9, 2014, 20:45

While the first traces of http2 support in curl were added back in September 2013, it wasn’t until recently that it actually became useful. There’s been a lot of http2 related activity in the curl team recently, and in late January 2014 we could run our first command line inter-op tests against public http2 (draft-09) servers on the Internet.

There’s a lot to be said about http2 for those not into its nitty gritty details, but I’ll focus on the curl side of this universe in this blog post. I’ll do separate posts and presentations on http2 “internals” later.

A quick http2 overview

http2 (without the minor version, as per what the IETF work group has decided on) is a binary protocol that allows many logical streams multiplexed over the same physical TCP connection, it features compressed headers in both directions and it has stream priorities and more. It is being designed to maintain the user concepts and paradigms from HTTP 1.1 so web sites don’t have to change contents and web authors won’t need to relearn a lot. The web will not break because of http2, it will just magically work a little better, a little smoother and a little faster.

In libcurl we build http2 support with the help of the excellent library called nghttp2, which takes care of all the binary protocol details for us. You’ll also have to build it with a new enough version of the SSL library of your choice, as http2 over TLS will require use of some fairly recent TLS extensions that not many older releases have and several TLS libraries still completely lack!

The need for an extension comes from the fact that when speaking TLS over port 443, which HTTPS implies, the current and former web infrastructure assumes that we will speak HTTP 1.1 over it, while we now want to be able to say instead that we want to talk http2. When Google introduced SPDY they pushed for a new extension called NPN to do this, which, when taken through standardization in the IETF, has been forked, changed and renamed to ALPN with roughly the same characteristics (I don’t know the specific internals so I’ll stick to how they appear from the outside).

So, NPN and especially ALPN are fairly recent TLS extensions, so you need a modern enough SSL library to get that support. OpenSSL and NSS both support NPN and ALPN with a recent enough version, while GnuTLS only supports ALPN. You can build libcurl to use any of these three libraries to get it to talk http2 over TLS.

http2 using libcurl

(This still describes what’s in curl’s git repository; the first release to have this level of http2 support is the upcoming 7.36.0 release.)

Users of libcurl who want to enable http2 support will only have to set CURLOPT_HTTP_VERSION to CURL_HTTP_VERSION_2_0 and that’s it. It will make libcurl try to use http2 for the HTTP requests you do with that handle.

For HTTP URLs, this will make libcurl send a normal HTTP 1.1 request with an offer to the server to upgrade the connection to version 2 instead. If it does, libcurl will continue using http2 in the clear on the connection and if it doesn’t, it’ll continue using HTTP 1.1 on it. This mode is what Firefox and Chrome will not support.

For HTTPS URLs, libcurl will use NPN and ALPN as explained above and offer to speak http2, and if the server supports it, there will be http2 sweetness from that point onwards. Or it selects HTTP 1.1 and then that’s what will be used. The latter is also what will be picked if the server doesn’t support ALPN or NPN.

Alt-Svc and ALTSVC are new things planned to show up in time for http2 draft-11 so we haven’t really thought through how to best support them and provide their features in the libcurl API. Suggestions (and patches!) are of course welcome!

http2 with curl

Hardly surprising, the curl command line tool also has this power. You use the --http2 command line option to switch on the libcurl behavior as described above.
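
For example, assuming a curl built against nghttp2 and a new enough TLS library, and a server that actually speaks the same http2 draft (the host name below is just a placeholder):

# https:// URL: offer http2 via ALPN/NPN during the TLS handshake
curl -v --http2 https://http2.example.com/
# http:// URL: send an HTTP 1.1 request with an offer to upgrade the connection to version 2
curl -v --http2 http://http2.example.com/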

Translated into old-style

To reduce transition pains and problems and to work with the rest of the world to the highest possible degree, libcurl will (decompress and) translate received http2 headers into http 1.1 style headers so that applications and users will get a stream of headers that look very much the way you’re used to and it will produce an initial response line that says HTTP 2.0 blabla.

Building (lib)curl to support http2

See the README.http2 file in the lib/ directory.

This is still a draft version of http2!

I just want to make this perfectly clear: http2 is not out “for real” yet. We have tried our http2 support somewhat at the draft-09 level and Tatsuhiro has worked on the draft-10 support in nghttp2. I expect there to be at least one more draft, but perhaps even more, before http2 becomes an official RFC. We hope to be able to stay on the frontier of http2 and deliver support for the most recent draft going forward.

PS. If you try any of this and experience any sort of problems, please speak to us on the curl-library mailing list and help us smoothen out whatever problem you got!

cURL

http://daniel.haxx.se/blog/2014/03/09/http2-in-curl/


Tantek Celik: Mockups For People Focused Mobile Communication

Sunday, March 9, 2014, 05:12

I've been iterating on mockups for people focused mobile communication for a while on the IndieWebCamp wiki for my own publishing tool Falcon, but the mockups deserve a blog post of their own.

Going back to the original people focused mobile communication experience, we've already figured out how to add a personal icon to your site so that visitors can choose "Add to Home Screen" (or similar menu option) to add icons of people (represented by their site) directly to their mobile home screens where they normally organize their apps.

[Screenshot: an iPod touch home screen with two rows of people icons]

The next step is to mockup what happens when they select an icon of a person and it launches their website.

Home page for iOS7

I started with a mockup for how I could present communication options on my home page when viewed on an iOS7 mobile device, figuring if I can create a seamless experience there, adapting it to other mobile devices, desktop etc. would be fairly straightforward.

Thus when someone selects an icon of a person and it launches their website, they might see a home page view like this:

[Mockup: mobile website icon header]
[Mockup: mobile website content]

This is a hybrid approach, providing a look and feel familiar to the user from their "native" environment (smooth, seamless, confidence invoking), with very simply styled web content right below it so if that's all they want, they get it immediately.

Home with contact options

Continuing with the user flow, since they want to contact you, they select the "Contact" folder, which opens up accordingly. From there the user selects which "app" they want and it launches automatically into a new message/connection, skipping any distracting inboxes.

[Mockup: mobile website icon header]
[Mockup: mobile website content]

The various contact options are presented in preference order of the contactee.

Each of these can be optionally hidden based on presence status / availability, or time of day.

A subset of these could also be presented publicly, with others (e.g. perhaps Facetime and Skype) only shown when the visitor identifies themselves (e.g. with IndieAuth). The non-public options could either be hidden, or perhaps shown disabled, and selecting them would be discoverable way to request the visitor identify themselves.

This is enough of a mockup to get started with the other building blocks so I'm going to stop there.

I've started a wiki page on "communication" and will be iterating on the mockups there.

Got other thoughts? Upload your mockups to indiewebcamp.com and add them to the communication page as well. Let's build on each other's ideas in a spirit of open source design.

http://tantek.com/2014/067/b2/mockups-people-focused-mobile-communication


K Lars Lohn: Redneck Broadband - fixed!

Sunday, March 9, 2014, 04:49
the beginning of the story

It was my fault!  Monday's 28Mbps was not an anomaly. At one point in the installation, the WiFi hot spot crashed and I had to do a factory reset.  Little did I know that factory reset disables the 4G radio: our throughput dropped to an abysmal 200Kbps.  Once re-activated, 4G speeds came back and remain consistent.

After crowing success, then lamenting failure, I'm back to shouting "success!"

http://www.twobraids.com/2014/02/redneck-broadband-fixed.html


Tantek Celik: Building Blocks For People Focused Mobile Communication

Sunday, March 9, 2014, 02:58

I'm at IndieWebCampSF and today, day 2, is "hack" or "create" day so I'm working on prototyping people focused mobile communication on my own website.

A few months ago I wrote about my frustrations with distracting app-centric communication interfaces, and how a people-focused mobile communication experience could not only solve that problem, but provide numerous other advantages as well.

Yesterday I led a discussion & brainstorming session on the subject, hashtagged #indiecomms, and it became clear that there were several pieces we needed to figure out:

  • Mockups for what it would look like
  • URLs for each communication service/app
  • Markup for the collections of links and labels
  • CSS for presenting it like the mockups
  • Logic for presence / availability for each service

So that's what I'm working on and will blog each building block as I get figure it out and create it.

http://tantek.com/2014/067/b1/building-blocks-people-focused-mobile-communication


Daniel Stenberg: HTTPbis design team meeting London

Sunday, March 9, 2014, 01:03

I’m writing this just hours after the HTTPbis design team meeting in London 2014 has ended.

Around 30 people attended the meeting in Mozilla’s central London office. The fridge was filled up with drinks, the shelves were full of snacks and goodies. The day could begin. This was the Saturday after the IETF89 week, so most people attending had already spent the whole week before, or parts of it, here in London doing other HTTP and network related work. The HTTPbis sessions at the IETF itself were productive and had already pushed us forward.

We started at 9:30 and we quickly got to work. Mark Nottingham guided us through the day with usual efficiency.

We all basically hung out in a huge room, some in chairs, some on sofas and a bunch of people on the floor or just standing up. We had mikes passed around and the http2 discussions flowed back and forth depending on the topics and what people felt about them. Some of the issues that were nailed down this time and will end up detailed in the upcoming draft-11 are (strictly speaking, we only discussed the things and formed opinions, as by IETF guidelines we can’t decide things at an offline meeting like this):

  • Priorities of streams will have a dependency graph or direction, making individual streams less or more important than others
  • A client can send headers without compression and tell the proxy that the header shouldn’t be compressed – used as a way to mitigate some of the compression security problems
  • There will be no TLS renegotiation allowed mid-session. Basically a client will have to tear down the connection and negotiate again if suddenly a need to use a client certificate arises.
  • Alt-Svc is the way forward so ALTSVC will appear as a new frame in draft-11. This is the way to signal to an application that there is another “route” to the same content on the same server. This will allow for what is popularly known as “opportunistic encryption” or at least one sort of that. In short, you can do “plain-text” HTTP over a TLS connection using this…
  • We decided that a server should support gzip contents from clients

There were some other things handled too, but I believe those are the main changes. When the afternoon started to turn long, beers and other beverages were brought out and we enjoyed a relaxing social finale to the day before we split up into smaller groups and headed out into the busy London night to get dinner…

Thanks everyone for a great day. I also appreciated meeting several people in real-life I never met before, only discussed with and read emails from online and of course some old friends I hadn’t seen in a long time!

Oh, there’s also a new rough time frame for http2 going forward. Nearest in time would be the draft-11 at the end of March and another interim in the beginning of June (Boston?).

As a reminder, here’s what happened for draft-10, and here is http2 draft-10.

Out of all people present today, I believe Mozilla was the company with the largest team (8 attendees) – funnily enough none of us Mozillians there actually work in this office or even in this country.

http://daniel.haxx.se/blog/2014/03/08/httpbis-design-team-meeting-london/


Konstantinos Antonakoglou: A Creative Commons music video made out of other CC videos

Saturday, March 8, 2014, 23:07

Hello! Let’s go straight to the point. Here is the video:

…and here are the videos that were used, all under the Creative Commons Attribution licence: http://wonkydollandtheecho.com/thanks.html. They are downloadable via Vimeo, of course.

Videos available from NASA and the ALMA observatory were also used.

The video (not audio) is under the Creative Commons BY-NC-SA licence, which I think is quite reasonable since every scene used from the source videos (ok, almost every scene) has lyrics/graphics embedded on it.

I hope you like it! I didn’t have a lot of time to make this video but I like the result. Unfortunately, the tools I used are not open source, because the learning curve for the open source alternatives is quite steep. I will definitely try them in the future. Actually, I really haven’t come across any alternative to Adobe After Effects. You might say Blender… but is it really an alternative? Any thoughts?

PS. More news soon for the Sopler project (a web application for making to-do lists) and other things I’ve been working on lately (like MQTT-SN).


http://antonakoglou.com/2014/03/08/creative-commons-music-video-made-of-cc-videos/


Brendan Eich: MWC 2014, Firefox OS Success, and Yet More Web API Evolution

Saturday, March 8, 2014, 22:58

Just over a week ago, I left Barcelona and Mobile World Congress 2014, where Mozilla had a huge third year with Firefox OS.

We announced the $25 Firefox OS smartphone with Spreadtrum Communications, targeting retail channels in emerging markets, and attracting operator interest to boot. This is an upgrade for those channels at about the same price as the feature phones selling there today. (Yes, $25 is the target end-user price.)

We showed the Firefox OS smartphone portfolio growing upward too, with more and higher-end devices from existing and new OEM partners. Peter Bright’s piece for Ars Technica is excellent and has nice pictures of all the new devices.

We also were pleased to relay the good news about official PhoneGap/Cordova support for Firefox OS.

We were above the fold for the third year in a row in Monday’s MWC daily.

(Check out the whole MWC 2014 photo set on MozillaEU’s Flickr.)

As I’ve noted before, our success in attracting partners is due in part to our ability to innovate and standardize the heretofore-missing APIs needed to build fully-capable smartphones and other devices purely from web standards. To uphold tradition, here is another update to my progress reports from last year and from 2012.


First, and not yet a historical curiosity: the still-open tracking bug asking for “New” Web APIs, filed at the dawn of B2G by Andreas Gal.

Next, links for “Really-New” APIs, most making progress in standards bodies:

Yet more APIs, some new enough that they are not ready for standardization:

Finally, the lists of new APIs in Firefox OS 1.1, 1.2, and 1.3:

This is how the web evolves: by implementors championing and testing extensions, with emerging consensus if at all possible, else in a pref-enabled or certified-app sandbox if there’s no better way. We thank colleagues at W3C and elsewhere who are collaborating with us to uplift the Web to include APIs for all the modern mobile device sensors and features. We invite all parties working on similar systems not yet aligned with the emerging standards to join us.

/be

https://brendaneich.com/2014/03/mwc-2014-firefox-os-success-and-yet-more-web-api-evolution/


