
Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/


You can add any RSS source (including a LiveJournal blog) to your friends feed on the syndication page.

Source information: http://planet.mozilla.org/.
This diary is generated from the public RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact information page.


Daniel Stenberg: Firefox OS Flatfish Bluedroid fix

Friday, August 29, 2014, 16:11

Hey, when I just built my own Firefox OS (b2g) image for my Firefox OS Tablet (flatfish) straight from the latest sources, I ran into this (known) problem:

Can't find necessary file(s) of Bluedroid in the backup-flatfish folder.
Please update the system image for supporting Bluedroid (Bug-986314),
so that the needed binary files can be extracted from your flatfish device.

So, as I struggled to figure out the exact instructions on how to proceed from this, I figured I should jot down what I did in the hopes that it perhaps will help a fellow hacker at some point:

  1. Download the 3 *.img files from the dropbox site that is referenced from bug 986314.
  2. Download the flash-flatfish.sh script from the same dropbox place.
  3. Make sure you have ‘fastboot’ installed (I’m mentioning this here because it turned out I didn’t, and yet I had already built and flashed my Flame phone successfully without having it). “apt-get install android-tools-fastboot” solved it for me. Note that if it isn’t installed, the flash-flatfish.sh script will claim that the device is not in fastboot mode and stop with an error message saying so.
  4. Finally: run the script “./flash-flatfish.sh [dir with the 3 .img files]” (steps 3 and 4 are sketched in Python after this list).
  5. Once it has succeeded, the tablet reboots.
  6. Remove the backup-flatfish directory in the build dir.
  7. Restart the flatfish build again and now it should get past that Bluedroid nit.
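
In case it helps, here is a minimal Python sketch of steps 3 and 4, assuming the script and the three .img files have already been downloaded (the image directory name is made up):

import subprocess
import sys

def flash_flatfish(image_dir):
    # Step 3: fail early if fastboot is missing; without it the flash
    # script misleadingly claims the device is not in fastboot mode.
    if subprocess.call(["which", "fastboot"]) != 0:
        sys.exit("fastboot missing; try: apt-get install android-tools-fastboot")
    # Step 4: run the flash script against the dir holding the 3 .img files.
    subprocess.check_call(["./flash-flatfish.sh", image_dir])

flash_flatfish("flatfish-images")  # hypothetical directory name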

Enjoy!

http://daniel.haxx.se/blog/2014/08/29/flatfish-bluedroid-fix/


Wladimir Palant: Using a Firefox extension to work around Selenium WebDriver's limitations

Friday, August 29, 2014, 11:38

My Google search link fix extension had a bunch of regressions lately and I realized that testing its impact on the search pages manually isn’t working — these pages are more complicated than they look, and there are lots of configuration options affecting them. So I decided to look into Selenium WebDriver in order to write integration tests that would automate Firefox. All in all, writing the tests is fairly simple once you get used to the rather arcane API. However, the functionality seems to be geared towards very old browsers (think IE6) and some features are nowhere to be found.

One issue: there is no way to focus an element without clicking it. Clicking isn’t always an option, since it might trigger a link, for example. That issue turned out to be fairly easy to solve:

driver.execute_script("arguments[0].focus()", element)

The ability to pass elements as parameters to WebDriver.execute_script is very useful, so it is surprising that it doesn’t seem to be documented properly anywhere.
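
For example, a quick sketch (the element lookup is hypothetical) showing that you can both pass an element in and get a value back:

# Locate some element first (the id here is made up for the example).
element = driver.find_element_by_id("search")
# The element goes in as arguments[0]; the script's return value comes back out.
tag_name = driver.execute_script("return arguments[0].tagName", element)
print(tag_name)  # e.g. "INPUT"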

But what about working with tabs or middle-clicking links? It seems that tabbed browsing hadn’t been invented yet back when that API was designed, so it only has a concept of windows — not very useful. So WebDriver will only let you work with the currently selected tab; inactive tabs are off limits. And WebDriver.execute_script isn’t any help here either — it won’t let you run privileged code.

After briefly considering using send_keys functionality to open Web Console on about:config and typing code into it (yes, it looks like that would actually work), I decided to go with a less crazy solution: install an additional extension to implement the necessary functionality. So if a test wants the element to be middle-clicked it can trigger a custom event:

driver.execute_script('''
  var event = document.createEvent("Events");
  event.initEvent("testhelper_middleclick", true, false);
  arguments[0].dispatchEvent(event);
''', element)

And the extension listens to that event:

window.gBrowser.addEventListener("testhelper_middleclick", function(event)
{
  let utils = event.target.ownerDocument.defaultView
                   .QueryInterface(Ci.nsIInterfaceRequestor)
                   .getInterface(Ci.nsIDOMWindowUtils);
  let rect = event.target.getBoundingClientRect();
  utils.sendMouseEvent("mousedown", rect.left + 1, rect.top + 1, 1, 1, 0);
  utils.sendMouseEvent("mouseup", rect.left + 1, rect.top + 1, 1, 1, 0);
}, false, true);

This works nicely, but what if you want to get data back? For example, I want to know which URLs were requested at the top level — in particular, whether there was a redirect before the final URL. Selenium only allows you to get notified of URL changes that were initiated by Selenium itself (not very helpful) or poll driver.current_url (doesn’t work). The solution is to have the extension register a progress listener and write all URLs seen to the Browser Console:

window.gBrowser.addTabsProgressListener({
  onStateChange: function(browser, webProgress, request, flags, status)
  {
    if (!(flags & Ci.nsIWebProgressListener.STATE_IS_WINDOW))
      return;
    if (!(flags & Ci.nsIWebProgressListener.STATE_START) && !(flags & Ci.nsIWebProgressListener.STATE_REDIRECTING))
      return;
    if (request instanceof Ci.nsIChannel)
      Cu.reportError("[testhelper] Loading: " + request.URI.spec);
  }
});

You can use driver.get_log("browser") to retrieve the full list of console messages. Each message also has a timestamp, which allows, for example, extracting only the URLs seen after the previous check.
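
A sketch of how that extraction might look (the marker string matches the extension code above; the timestamp bookkeeping is my assumption about the intended usage):

MARKER = "[testhelper] Loading: "

def urls_seen_since(driver, last_timestamp):
    # Keep only the testhelper messages newer than the previous check.
    urls = []
    for entry in driver.get_log("browser"):
        if entry["timestamp"] > last_timestamp and MARKER in entry["message"]:
            urls.append(entry["message"].split(MARKER, 1)[1])
    return urls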

Side-note: I first considered using MozMill for this. However, it is geared very much towards Firefox development and much of the Selenium functionality would have to be reimplemented (locating installed Firefox instance, default Firefox preferences for a test profile, dismissing alerts on web pages and so on).

https://palant.de/2014/08/29/using-a-firefox-extension-to-work-around-selenium-webdriver-s-limitations


Priyanka Nag: Maker Party gets grander in Pune this time

Thursday, August 28, 2014, 22:39
While going through my Twitter timeline this evening, I noticed Michelle Thorne's tweet stating that India leads with the most Maker Party action this season.
Well, who doubts that! In India, we have Maker Parties being organized almost every second day. My Facebook wall and Twitter timeline are overloaded with posts, photos and updates from all the Maker Parties happening around me.

Maker Party, Pune


Well, if you are still not aware of this one, we are having the granddaddy of these Maker Parties in Pune on the 6th of September 2014. The executive director of the Mozilla Foundation, Mark Surman, is going to be personally present at this event. Just like all Maker Parties, this event is an attempt to map and empower a community of educators and creative people who share a passion to innovate, evolve and change the learning landscape.

A few quick updates about this event:
  • Event dates - 6th and 7th September
  • Event venue - SICSR, Model Colony, Pune
  • Rough agenda for the event:
    • 6th September 2014 (Day 1)
      • 10am - 11am: Mozilla introduction
      • 11am - 12pm: About the Hive initiative
      • 12pm - 1pm: Rohit Lalwani - Entrepreneurship talk
      • 1pm - 2pm: Lunch break
      • 2pm - 3pm: Webmaker begins with Appmaker
      • 3pm - 4pm: Webmaker continues with Thimble
      • 4pm - 4.45pm: Webmaker continues with Popcorn
      • 4.45pm - 5.30pm: Webmaker continues with X-Ray Goggles
      • 5.30pm - 6pm: Prize distribution (for the best makes of the day, etc.); the science fair also ends
      • 6pm - 7pm: Birds of a feather
      • 7pm: Dinner (venue TBD)
    The science fair will run from 12 noon to 6pm.
    • 7th September 2014 (Day 2)
      • 1st half: Community meetup and discussions on the future roadmap for Hive India, plus a long-term partnership prospect meeting with partners.
      • 2nd half: Community training sessions on Hive and train-the-trainer events.
For this event, we are having a variety of training sessions, workshops and science displays - ranging from 3D printing to woodwork, origami to quadcopter flying, and even film making.

If you have still not registered for this event, here's your chance:



http://priyankaivy.blogspot.com/2014/08/maker-party-gets-grander-in-pune-this.html


Daniel Stenberg: Going to FOSDEM 2015

Wednesday, August 27, 2014, 13:01

Yeps,

I’m going there and I know several friends are going too, so this is just my way of pointing this out to those of you who still haven’t made up your minds! There’s still a lot of time left, as this event is taking place in late January next year.

I intend to try to get a talk to present this time and I would love to meet up with more curl contributors and fans.


http://daniel.haxx.se/blog/2014/08/27/going-to-fosdem-2015/


Byron Jones: happy bmo push day!

Wednesday, August 27, 2014, 11:11

the following changes have been pushed to bugzilla.mozilla.org:

  • [1058479] move the “mozilla employees” warning on bugzilla::admin next to the submit button
  • [1058481] git commits should link to commitdiff not commit
  • [1056087] contrib/merge-users.pl fails if there are no duplicate bug_user_last_visit rows
  • [1058679] new bug API returning a ref where bzexport expects bug data
  • [1057774] bzAPI landing page gives a 404
  • [1056904] Add “Mentored by me” to MyDashboard
  • [1059085] Unable to update a product’s group controls: Can’t use string (“table”) as an ARRAY ref while “strict refs” in use
  • [1059088] Inline history can be shown out-of-order when two changes occur in the same second

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

http://globau.wordpress.com/2014/08/27/happy-bmo-push-day-110/


Monica Chew: Firefox 32 supports Public Key Pinning

Wednesday, August 27, 2014, 05:41
Public Key Pinning helps ensure that people are connecting to the sites they intend. Pinning allows site operators to specify which certificate authorities (CAs) issue valid certificates for them, rather than accepting any one of the hundreds of built-in root certificates that ship with Firefox. If any certificate in the verified certificate chain corresponds to one of the known good certificates, Firefox displays the lock icon as normal.

Pinning helps protect users from man-in-the-middle attacks and rogue certificate authorities. When the root cert for a pinned site does not match one of the known good CAs, Firefox will reject the connection with a pinning error. This type of error can also occur if a CA mis-issues a certificate.

Pinning errors can be transient. For example, if a person is signing into WiFi, they may see an error like the one below when visiting a pinned site. The error should disappear if the person reloads after the WiFi access is set up.



Firefox 32 and above support built-in pins, which means that the list of acceptable certificate authorities must be set at build time for each pinned domain. Pinning is enforced by default. Sites may advertise their support for pinning with the Public Key Pinning Extension for HTTP, which we hope to implement soon. Pinned domains include addons.mozilla.org and Twitter in Firefox 32, and Google domains in Firefox 33, with more domains to come. That means that Firefox users can visit Mozilla, Twitter and Google domains more safely. For the full list of pinned domains and rollout status, please see the Public Key Pinning wiki.
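
The built-in pins themselves are compiled into Firefox, but to illustrate what is being compared: a pin is essentially a hash of a certificate's SubjectPublicKeyInfo. Here is a sketch using Python's ssl module and the third-party cryptography package to compute such a hash for a site's leaf certificate:

import base64
import hashlib
import ssl
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_sha256(host, port=443):
    # Fetch the server's certificate and hash its SubjectPublicKeyInfo,
    # the value that pinning implementations compare against their list.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

print(spki_sha256("addons.mozilla.org"))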

Thanks to Camilo Viecco for the initial implementation and David Keeler for many reviews!

http://monica-at-mozilla.blogspot.com/2014/08/firefox-32-supports-public-key-pinning.html


Gervase Markham: Email Account Phishers Do Manual Work

Tuesday, August 26, 2014, 23:37

For a while now, criminals have been breaking into email accounts and using them to spam the account’s address book with phishing emails or the like. More evil criminals will change the account password, and/or delete the address book and the email to make it harder for the account owner to warn people about what’s happened.

My mother recently received an email, purportedly from my cousin’s husband, titled “Confidential Doc”. It was a mock-up of a Dropbox “I’ve shared an item with you” email, with the “View Document” URL actually being http://proshow.kz/excel/OLE/PPS/redirect.php. This (currently) redirects to http://www.affordablewebdesigner.co.uk/components/com_wrapper/views/wrapper/tmpl/dropbox/, although it redirected to another site at the time. That page says “Select your email provider”, explaining “Now, you can sign in to dropbox with your email”. When you click the name of your email provider, it asks you for your email address and password. And boom – they have another account to abuse.

But the really interesting thing was that my mother, not being born yesterday, emailed back saying “I’ve just received an email from you. But it has no text – just an item to share. Is it real, or have you been hacked?” So far, so cautious. But she actually got a reply! It said:

Hi ,
I sent it, It is safe.

(The random capital was in the original.)

Now, this could have been a very smart templated autoresponder, but I think it’s more likely that the guy stayed logged into the account long enough to “reassure” people and to improve his hit rate. That might tell us interesting things about the value of a captured email account, if it’s worth spending manual effort trying to convince people to hand over their creds.

http://feedproxy.google.com/~r/HackingForChrist/~3/fkIc1eCIr3w/


Alex Vincent: An insightful statement from a mathematics course

Tuesday, August 26, 2014, 19:39

I’m taking a Linear Algebra course this fall.  Last night, my instructor said something quite interesting:

“We are building a model of Euclidean geometry in our vector space. Then we can prove our axioms of geometry (as theorems).”

This would sound like technobabble to me even a week ago, but what he’s really saying is this:

“If you can implement one system’s basic rules or axioms in another system, you can build a model of that first system in the second.”

Programmers and website builders build models of systems all the time, and unconsciously, we build on top of other systems. Think about that when you write JavaScript code: the people who implement JavaScript engines are building a model for millions of people to use that they’ll never meet. I suppose the same could be said of any modern programming language, compiler, transpiler or interpreter.

The beauty for those of us who work in the model is that we (theoretically) shouldn’t need to care what platform we run on. (In practice, there are differences, which is why we want platforms to implement standards, so we can concentrate on using the theoretical model we depend on.)

On the flip side, that also means that building and maintaining that fundamental system we build on top of has to be done very, very carefully.  If you’re building something for others to use (and chances are, when you’re writing software, you’re doing exactly that), you really have to think about how you want others to use your system, and how others might try to use your system in ways you don’t expect.

It’s really quite a profound duty that we take on when we craft software for others to use.

https://alexvincent.us/blog/?p=830


Chris AtLee: Gotta Cache 'Em All

Tuesday, August 26, 2014, 18:21

TOO MUCH TRAFFIC!!!!

Waaaaaaay back in February we identified overall network bandwidth as a cause of job failures on TBPL. We were pushing too much traffic over our VPN link between Mozilla's datacentre and AWS. Since then we've been working on a few approaches to cope with the increased traffic while at the same time reducing our overall network load. Most recently we've deployed HTTP caches inside each AWS region.

Network traffic from January to August 2014

The answer - cache all the things!

Obligatory XKCD

Caching build artifacts

The primary target for caching was downloads of build/test/symbol packages by test machines from file servers. These packages are generated by the build machines and uploaded to various file servers. The same packages are then downloaded many times by different machines running tests. This was a perfect candidate for caching, since the same files were being requested by many different hosts in a relatively short timespan.

Caching tooltool downloads

Tooltool is a simple system RelEng uses to distribute static assets to build/test machines. While the machines do maintain a local cache of files, the caches are often empty because the machines are newly created in AWS. Having the files in local HTTP caches speeds up transfer times and decreases network load.
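
The pattern itself is simple; below is a hypothetical Python fetcher (not the actual proxxy client — the cache host and URL scheme are assumptions) showing the cache-first, origin-fallback idea:

import requests

# Hypothetical regional cache host; not the real proxxy address.
CACHE_HOST = "http://cache.us-east-1.example.internal"

def fetch(origin_url):
    # Try the in-region cache first, then fall back to the origin server.
    for url in (CACHE_HOST + "/" + origin_url, origin_url):
        try:
            response = requests.get(url, timeout=30)
            response.raise_for_status()
            return response.content
        except requests.RequestException:
            continue
    raise IOError("could not fetch %s" % origin_url)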

Results so far - 50% decrease in bandwidth

Initial deployment was completed on August 8th (end of week 32 of 2014). You can see by the graph above that we've cut our bandwidth by about 50%!

What's next?

There are a few more pieces of low-hanging fruit for caching. We have internal pypi repositories that could benefit from caches. There's a long tail of other miscellaneous downloads that could be cached as well.

There are other improvements we can make to reduce bandwidth as well, such as moving uploads from build machines outside the VPN tunnel, or perhaps to S3 directly. Additionally, a big source of network traffic is the signing of various packages (gpg signatures, MAR files, etc.). We're looking at ways to do that more efficiently. I'd love to investigate more efficient ways of compressing or transferring build artifacts overall; there is a ton of duplication between the build and test packages, between different platforms, and even between different pushes.

I want to know MOAR!

Great! As always, all our work has been tracked in a bug, and worked out in the open. The bug for this project is 1017759. The source code lives in https://github.com/mozilla/build-proxxy/, and we have some basic documentation available on our wiki. If this kind of work excites you, we're hiring!

Big thanks to George Miroshnykov for his work on developing proxxy.

http://atlee.ca/blog/posts/cache-em-all.html


Byron Jones: happy bmo push day!

Tuesday, August 26, 2014, 11:49

the following changes have been pushed to bugzilla.mozilla.org:

  • [1058274] The input field for suggested reviewers when editing a component needs ‘multiple’ to be true for allowing for more than one username
  • [1051655] mentor field updated/reset when a bug is updated as a result of a change on a different bug (eg. see also, duplicate)
  • [1058355] bugzilla.mozilla.org leaks emails to logged out users in “Latest Activity” search URLs

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

http://globau.wordpress.com/2014/08/26/happy-bmo-push-day-109/


Daniel Stenberg: My home setup

Monday, August 25, 2014, 10:57

I work in my home office which is upstairs in my house, perhaps 20 steps from my kitchen and the coffee refill. I have a largish desk with room for a number of computers. The photo below shows the three meter beauty. My two kids have their two machines on the left side while I use the right side of it for my desktop and laptop.

Daniel's home office

Many computers

The kids use my old desktop computer with a 20'' Dell screen and my old 15.6'' dual-core Asus laptop. My wife has her laptop downstairs and we have a permanent computer installed underneath the TV for media (an Asus VivoPC).

My desktop computer

I’m primarily developing C and C++ code and I’m frequently compiling rather large projects – repeatedly. I use a desktop machine for my ordinary development, equipped with a fairly powerful 3.5GHz quad-core Core i7 CPU. I have my OS, my home dir and all source code on an SSD, and a larger HDD for bigger and slower content. With ccache and friends, this baby can build Firefox really fast. I put my machine together from parts myself, as I couldn’t find a suitable one focused on horsepower yet with a “normal” 2D graphics card that works fine with Linux. I use a Radeon HD 5450 based ASUS card, which works fine with fully open source drivers.

I have two basic 24-inch LCD monitors (Benq and Dell), both at 1920x1200 resolution. I like having lots of windows up; nothing runs full-screen. I use KDE as my desktop and I edit everything in Emacs. Firefox is my primary browser. I don’t shut down this machine; it runs a few simple servers for private purposes.

My machines (and my kids’) all run Debian Linux, typically of the unstable flavor allowing me to get new code reasonably fast.

My desktop keyboard is a Func KB-460, a mechanical keyboard with some funky extra candy such as red backlight and two USB ports. Both my keyboard and my mouse are wired, not wireless, to avoid the need for batteries or recharging in this environment. My mouse is a basic and old Logitech MX 310.

I have a crufty old USB headset with a mic that works fine for hangouts and listening to music when the rest of the family is home. I have a Logitech webcam thing sitting on the screen too, but I hardly ever use it for anything.

When on the move

I sometimes need to move around and work from other places, going to conferences or even our regular Mozilla work weeks. Hence I also have a laptop that is powerful enough to build Firefox in a sane amount of time: a Lenovo Thinkpad W540 with a 2.7GHz quad-core Core i7, 16GB of RAM and 512GB of SSD. It has the most annoying touchpad on it. I don’t like that it doesn’t have explicit buttons, so for example both-clicking (to simulate a middle-click), like when pasting text in X11, is virtually impossible.

On this machine I also run a VM with Win7 installed and an associated development environment, so I can build and debug Firefox for Windows on it.

I have a second portable: a small and lightweight 10.1'' netbook, an Eeepc S101, that I’ve been using when I go and just do presentations at places, but recently I’ve started to simply use my primary laptop even for those occasions – primarily because the netbook is too slow to do anything else on.

I do video conferences a couple of times a week and we use Vidyo for that. Its Linux client is shaky to say the least, so I tend to use my Nexus 7 tablet for it, since the Vidyo app at least works decently on that. It also allows me to quite easily change location when necessary, which sometimes happens since my meetings tend to occur in the evenings and there are also varying amounts of “family activities” going on!

Backup

For backup, I have a Synology DS211j NAS equipped with 2TB of disk in a RAID, stashed downstairs on the wired in-house gigabit ethernet. I run an rsync job every night that syncs the important stuff to the NAS, and I run a second rsync that also mirrors relevant data over to a friend’s house just in case something terribly bad should go down. My NAS backup has already saved me really well at least once.

Printer

Next to the NAS downstairs is the house printer, an HP Officejet 8500A, also attached to the gigabit network even though it has a wifi interface of its own. I just like the increased reliability of having the “fixed services” in the house on the wired network.

The printer also has scanning capability, which has actually come in handy several times. The thing works nicely from my Linux machines as well as my wife’s Windows laptop.

Internet

I have fiber going directly into my house. It is still “just” a 100/100 connection at the other end of the fiber since, at the time I installed this, they didn’t yet have equipment to deliver beyond 100 megabit in my area. I’m sure I’ll upgrade to something more impressive in the future, but this is a pretty snappy connection already. I also have just a few milliseconds of latency to my primary servers.

Having the fast uplink is perfect for doing good remote backups.

Router and wifi

I have a lowly D-Link DIR 635 router and wifi access point providing wifi on the 2.4GHz and 5GHz bands and gigabit speed on the wired side. It was dead cheap and it just works. It NATs my traffic and port-forwards some ports through to my desktop machine.

The router itself can also update the dyndns info which ultimately allows me to use a fixed name to my home machine even without a fixed ip.

Frequent Wifi users in the household include my wife’s laptop, the TV computer and all our phones and tablets.

Telephony

When I installed the fiber I gave up the copper connection to my home, and since then I use IP telephony for the “land line” – basically a little box (a Ping Communication Voice Catcher 201E) that translates IP to old phone tech, and I keep using my old DECT phone. Basically only our parents still call this number, and it has been useful to have the kids use it for outgoing calls up until they’ve gotten their own mobile phones to use.

It doesn’t cost very much, but the usage is dropping over time so I guess we’ll just give it up one of these days.

Mobile phones and tablets

I have a Nexus 5 as my daily phone. I also have a Nexus 7 and Nexus 10 that tend to be used by the kids mostly.

I have two Firefox OS devices for development/work.

http://daniel.haxx.se/blog/2014/08/25/my-home-setup/


Kaustav Das Modak: Dear smartphone user, it is time to unlearn

Monday, August 25, 2014, 09:23
Dear smartphone user, You have been used to sophisticated features and cluttered interfaces for a long time. Remember the day when you used a smartphone for the first time? Do you recollect that extra cognitive overload you had to face to figure out what each gesture does? Why were there so many round and […]

http://kaustavdm.in/2014/08/dear-smartphone-user-time-unlearn.html?utm_source=rss&utm_medium=rss&utm_campaign=dear-smartphone-user-time-unlearn


Zack Weinberg: The literary merit of right-wing SF

Monday, August 25, 2014, 07:44

The results are in for the 2014 Hugo Awards. I’m pleased with the results in the fiction categories—a little sad that “The Waiting Stars” didn’t win its category, but it is the sort of thing that would not be to everyone’s taste.

Now that it’s all over, people are chewing over the politics of this year’s shortlist, particularly the infamous “sad puppy” slate, over on John Scalzi’s blog, and this was going to be a comment there, but I don’t seem to be able to post comments there, so y’all get the expanded version here instead. I’m responding particularly to this sentiment, which I believe accurately characterizes the motivation behind Larry Correia’s original posting of his slate, and the motivations of those who might have voted for it:

I too am someone who likes, and dislikes, works from both groups of authors. However, only one group ever gets awards. The issue is not that you cannot like both groups, but that good works from the PC crowd get rewarded and while those from authors that have been labeled “unacceptable” are shunned, and that this happens so regularly, and with such predictability that it is obviously not just quality being rewarded.

— “BrowncoatJeff”

I cannot speak to the track record, not having followed genre awards closely in the past. But as to this year’s Hugo shortlist, it is my considered opinion that all the works I voted below No Award (except The Wheel of Time, whose position on my ballot expresses an objection to the eligibility rules) suffer from concrete, objective flaws on the level of basic storytelling craft, severe enough that they did not deserve a nomination. This happens to include Correia’s own novels, and all the other works of fiction from his slate that made the shortlist. Below the fold, I shall elaborate.

(If you’re not on board with the premise that there is such a thing as objective (observer-independent) quality in a work of art, and that observers can evaluate that independently from whether a work suits their own taste or agrees with their own politics, you should probably stop reading now. Note that this is not the same as saying that I think all Hugo voters should vote according to a work’s objective quality. I am perfectly fine with, for instance, the people who voted “Opera Vita Aeterna” below No Award without even cracking it open—those people are saying “Vox Day is such a despicable person that no matter what his literary skills are, he should not receive an award for them” and that is a legitimate critical stance. It is simply not the critical stance I am taking right now.)

Let me first show you the basic principles of storytelling craft that I found lacking. I did not invent them; similar sentiments can be found in, for instance, “Fenimore Cooper’s Literary Offenses,” the Turkey City Lexicon, Ursula LeGuin’s Steering the Craft, Robert Schroeck’s A Fanfic Writer’s Guide To Writing, and Aristotle’s Poetics. This formulation, however, is my own.

  1. Above all, a story must not be boring. The reader should care, both about “what happens to these people,” and about the ultimate resolution to the plot.
  2. Stories should not confuse their readers, and should enable readers to anticipate—but not perfectly predict—the consequences of each event.
  3. The description, speech, and actions of each character in a story should draw a clear, consistent picture of that character’s personality and motivations, sufficient for the reader to anticipate their behavior in response to the plot.
  4. Much like music, stories should exhibit dynamic range in their pacing, dramatic tension, emotional color, and so forth; not for nothing is “monotony” a synonym for “tedium.”
  5. Style, language, and diction should be consistent with the tone and content of the story.
  6. Rules 2–5 can be broken in the name of Art, but doing so demands additional effort and trust from the reader, who should, by the end of the story, believe that it was worth it.

With that in hand, I shall now re-review the works that didn’t deserve (IMNSHO) to make the shortlist, in order from most to least execrable.

Opera Vita Aeterna

This is textbook bad writing. The most obvious problem is the padded, monotonously purple prose, which obviously fails point 4, and less obviously fails point 5 because the content isn’t sufficiently sophisticated to warrant the style. The superficial flaws of writing are so severe that it’s hard to see past them, but if you do, you discover that it fails all the other points as well, simply because there wasn’t enough room, underneath all of those purple words, for an actual plot. It’s as if you tried to build a building entirely out of elaborate surface decorations, without first putting up any sort of structural skeleton.

The Butcher of Khardov and Meathouse Man

These are both character studies, which is a difficult mode: if you’re going to spend all of your time exploring one character’s personality, you’d better make that one character interesting, and ideally also fun to be around. In these cases, the authors were trying for tragically flawed antiheroes and overdid the anti-, producing characters who are nothing but flaw. Their failures are predictable; their manpain, tedious; their ultimate fates, banal. It does not help that they are, in many ways, the same extruded antihero product that Hollywood and the comic books have been foisting on us for going on two decades now, just taken up to 11.

Khardov also fails on point 2, being told out of order for no apparent reason, causing the ending to make no sense. Specifically, I have no idea whether the wild-man-in-the-forest scenes are supposed to occur before or after the climactic confrontation with the queen, and the resolution is completely different depending on which way you read it.

Meathouse Man was not on Correia’s slate. It’s a graphic novel adaptation of a story written in the 1970s, and it makes a nice example of point 6. When it was originally written, a story with a completely unlikable protagonist, who takes exactly the wrong lessons from the school of hard knocks and thus develops from a moderate loser into a complete asshole, would perhaps have been … not a breath of fresh air, but a cold glass of water in the face, perhaps. Now, however, it is nothing we haven’t seen done ten billion times, and we are no longer entertained.

The Chaplain’s Legacy and The Exchange Officers

These are told competently, with appropriate use of language, credible series of events, and so on. The plots, however, are formula, the characters are flat, the ideas are not original, and two months after I read them, I’m hard pressed to remember enough about them to criticize!

I may be being more harsh on Torgerson than the median voter, because I have read Enemy Mine and so I recognize The Chaplain’s Legacy as a retread. (DOES NO ONE READ THE CLASSICS?!) Similarly, The Exchange Officers is prefigured by hundreds of works featuring the Space Marines. I don’t recall seeing remotely piloted mecha before, but mecha themselves are cliché, and the “remotely piloted” part sucks most of the suspense out of the battle scenes, which is probably why it hasn’t been done.

The Grimnoir Chronicles

Correia’s own work, this falls just short of good, but in a way that is more disappointing than if it had been dull and clichéd. Correia clearly knows how to write a story that satisfies all of the basic storytelling principles I listed. He is never dull. He comes up with interesting plots and gets the reader invested in their outcome. He’s good at set pieces; I can still clearly envision the giant monster terrorizing Washington DC. He manages dramatic tension effectively, and has an appropriate balance between gripping suspense and calm quiet moments. And he is capable of writing three-dimensional, nuanced, plausibly motivated, sympathetic characters.

It’s just that the only such character in these novels is the principal villain.

This is not to say that all of the other characters are flat or uninteresting; Sullivan, Faye, and Francis are all credible, and most of the other characters have their moments. Still, it’s the Chairman, and only the Chairman, who is developed to the point where the reader feels fully able to appreciate his motivations and choices. I do not say sympathize; the man is the leader of Imperial Japan circa 1937, and Correia does not paper over the atrocities of that period—but he does provide more justification for them than anyone had in real life. There really is a cosmic horror incoming, and the Chairman really does think this is the only way to stop it. And that makes for the best sort of villain, provided you give the heroes the same depth of characterization. Instead, as I said last time, the other characters are all by habit unpleasant, petty, self-absorbed, and incapable of empathizing with people who don’t share their circumstances. One winds up hoping for enough reverses to take them down a peg. (Which does not happen.)

Conclusion

Looking back, does any of that have anything to do with any of the authors’ political stances, either in the real world, or as expressed in their fiction? Not directly, but I do see a common thread which can be interpreted to shed some light on why “works from the PC crowd” may appear to be winning a disproportionate number of awards, if you are the sort of person who uses the term “PC” unironically. It’s most obvious in the Correia, being the principal flaw in that work, but it’s present in all the above.

See, I don’t think Correia realized he’d written all of his Good Guys as unpleasant, petty, and self-absorbed. I think he unconsciously assumed they didn’t need the same depth of character as the villain did, because of course the audience is on the side of the Good Guys, and you can tell who the Good Guys are from their costumes (figuratively speaking). It didn’t register on him, for instance, that a captain of industry who’s personally unaffected by the Great Depression is maybe going to come off as greedy, not to mention oblivious, for disliking Franklin Delano Roosevelt and his policies, even if the specific policy FDR was espousing on stage was a genuinely bad idea because of its plot consequences. In fact, that particular subplot felt like the author had his thumb on the scale to make FDR look bad—but the exact same subplot could have been run without giving any such impression, if the characterization had been more thorough. So if you care about characterization, you’re not likely to care for Correia’s work or anything like it. Certainly not enough to shortlist it for an award honoring the very best the genre has to offer.

Now, from out here on my perch safely beyond the Overton window, “politically correct,” to the extent it isn’t a vacuous pejorative, means “something which jars the speaker out of his daydream of the lily-white suburban 1950s of America (possibly translated to outer space), where everything was pleasant.” (And I do mean his.) Thing is, that suburban daydream is, still, 60 years later, in many ways the default setting for fiction written originally in English. Thanks to a reasonably well-understood bug in human cognition, it takes more effort to write fiction which avoids that default. It requires constant attention to ensure that presuppositions and details from that default are not slipping back in. And most of that extra effort goes into—characterization. It takes only a couple sentences to state that your story is set in the distant future Imperium of Man, in which women and men alike may serve in any position in the military and are considered completely equal; it takes constant vigilance over the course of the entire novel to make sure that you don’t have the men in the Imperial Marines taking extra risks to protect from enemy fire those of their fellow grunts who happen to be women. Here’s another, longer example illustrating how much work can be involved.

Therefore, it seems to me that the particular type of bad characterization I disliked in the above works—writing characters who, for concrete in-universe reasons, are unlikable people, and then expecting the audience to cheer them on anyway because they’ve been dressed up in These Are The Heroes costumes—is less likely to occur in writing that would get labeled “works from the PC crowd.” The authors of such works are already putting extra effort into the characterization, and are therefore less likely to neglect to write heroes who are, on the whole, likable people whom the reader wishes to see succeed.

https://www.owlfolio.org/fiction/the-literary-merit-of-right-wing-sf/


Clint Talbert: The Odyssey of Per-Push, On-Phone Firefox OS Automation

Saturday, August 23, 2014, 03:19

When we started automating tests for Firefox OS, we knew that we could do a lot with automated testing on phone emulators–we could run in a very similar environment to the phone, using the same low level instruction set, even do some basic operations like SMS between two emulator processes. Best of all, we could run those in the cloud, at massive scale.

But, we also knew that emulator based automation wasn’t ever going to be as complete as actually testing on real phones. For instance, you can’t simulate many basic smart phone operations: calling a number, going to voice-mail, toggling airplane mode, taking a picture, etc. So, we started trying to get phones running in automation very early with Firefox OS, almost two years ago now.

We had some of our very early Unagi phones up and running on a desk in our office. That eventually grew to a second generation of Hamachi-based phones. There were a few core scalability problems with both of these solutions:

  1. There was no reliable way to power-cycle a phone without a human walking up to it, pulling out the battery and putting it back in.
  2. At the time these were pre-production phones (hence the code names), and they were hard to get in bulk from partners. So, we did what we could with about 10 phones that ran smoketests, correctness tests, and performance tests.
  3. All of the automation jobs and results had to be tracked by hand, and status had to be emailed to developers — there was no way to get these reporting to our main automation dashboard, TBPL.
  4. Because we couldn’t report status to TBPL, maintaining the system and filing bugs when tests failed had to be done entirely by a dedicated set of 4 QA folk — not a scalable option, to say the least.

Because of points 1 and 2, we were unable to truly scale the number of devices. We only had one person in Mountain View, and what we had thought of as a part time job of pulling phone batteries soon became his full time job. We needed a better solution to increase the number of devices while we worked in parallel to create a better dashboard for our automation that would allow a system like this to easily plug in and report its results.

The Flame reference device solved that first problem. Now, we had a phone whose hardware we could depend on, and Jon Hylands was able to create a custom battery harness for it so that our scripts could automatically detect dead phones and remotely power-cycle them (and, in the future, monitor power consumption). Because we (Mozilla) commissioned the Flame phone ourselves, there were no partner-related issues with obtaining pre-production devices – we could easily get as many as we needed. After doing some math to understand our capacity needs, we got 40 phones to seed our prototype lab to support per-push automation.

As I mentioned, we were solving the dashboard problem in parallel, and that has now been deployed in the form of Treeherder, which will be the replacement for TBPL. That solves point 3. All that now remains is point 4. We have been hard at work on crafting a unified harness to run the Gaia JavaScript tests on device, which will also allow us to run the older, existing Python tests until they can be converted. This gives us the most flexibility and allows us to take advantage of all the automation goodies in the existing Python harness – like crash detection, JSON structured logging, etc. Once it is complete, we will be able to run a smaller set of the same tests the developers run locally on each push to b2g-inbound on these Flame devices in our lab. This means that when something breaks, it will break tests that are well known, in a well understood environment, and we can work alongside the developers to understand what broke and why. By enabling the developers and QA to work alongside one another, we eliminate the scaling problem in point 4.

It’s been a very long road to get from zero to where we are today. You can see the early pictures of the “phones on a desk” rack and pictures of the first 20 Flames from Stephen’s presentation he gave earlier this month.

A number of teams helped get us to this point, and it could not have been done without the cooperation among them: the A*Team, the Firefox OS Performance team, the QA team, and the Gaia team all helped get us to where we are today. You can see the per-push tests showing up on the Treeherder Staging site as we ensure we can meet the stability and load requirements necessary for running in production.

Last week, James Lal and his new team inherited this project. They are working hard to push the last pieces to completion as well as expanding it even further. And so, even though Firefox OS has had real phone automation for years, that system is now coming into its own. The real-phone automation will finally be extremely visible and easily actionable for all developers, which is a huge win for everyone involved.

http://clinttalbert.com/2014/08/22/the-odyssey-of-per-push-on-phone-firefox-os-automation/


Eric Shepherd: The Sheppy Report: August 22, 2014

Saturday, August 23, 2014, 02:16

This week looks slower than usual when you look at this list, but the week involved a lot of research.

What I did this week

  • Reviewed and made (very) minor tweaks to Chris Mills’s doc plan for the Gaia web components and QA documentation.
  • Created an initial stub of a page for the canvas documentation plan.
  • Spent the weekend and a bit of Monday getting my broken server, including this blog, back up and running after a not-entirely-successful (at first) upgrade of the server from OS X 10.6.8 Server to 10.9.4. But most things are working now. I’ll get the rest fixed up over the next few days.
  • Pursued the MDN inbox project, trying to wrap it up.
    • Asked for feedback on the current state of things.
    • Added a subtle background color to the background of pages in the Inbox.
  • Started discussions on dev-mdc and staff mailing list about the documentation process; we’re going to get this thing straightened up and organized.
  • Filed bug 1056026 proposing that the Firefox_for_developers macro be updated to list both newer and older versions of Firefox.
  • Redirected some obsolete pages to their newer, replacement, content in the MDN meta-documentation.
  • Created a Hacker News account and upvoted a post about Hacks on canuckistani’s request.
  • Updated the MDN Administration Guide.
  • Installed various packages and add-ons on my Mac and server in preparation for testing WebRTC code.
  • Forked several WebRTC projects from GitHub to experiment with.
  • Found (after a surprisingly lengthy search) a micro-USB cable so I could charge and update my Geeksphone Peak to Firefox OS 2.0's latest nightly build.
  • Re-established contact with Piotr at CKSource about continuing work to get our editor updated and improved.
  • Removed a mess of junk from a page in pt-BR; looks like someone used an editor that added a bunch of extra tags.
  • Successfully tested a WebRTC connection between my Firefox OS phone and my iMac, using my Mac mini as server. Now I should be ready to start writing code of my own, now that I know it all works!
  • Filed bug 1057546: we should IMHO strip HTML tags that aren’t part of a string from within a macro call; this would prevent unfortunate errors.
  • Filed bug 1057547 proposing that the editor be updated to detect uses of the style attribute and of undefined classes, and present warnings to the user when they do so.
  • Fixed a page that was incorrectly translated in place, and emailed the contributor a reminder to be careful in the future.

Meetings attended this week

Monday

  • MDN dev team meeting on security and improved processes to prevent problems like the email address disclosure we just had happen.
  • MDN developer triage meeting.

Tuesday

  • Developer Engagement weekly meeting.
  • 1:1 with Jean-Yves Perrier.

Wednesday

  • 1:1 with Ali.

 Thursday

  • Writers’ staff meeting.

Friday

  • #mdndev weekly review meeting.
  • MDN bug swat meeting.
  • Web API documentation meeting.

So… it was a wildly varied day today. But I got a lot of interesting things done.

http://www.bitstampede.com/2014/08/22/the-sheppy-report-august-22-2014/


Gervase Markham: HSBC Weakens Their Internet Banking Security

Friday, August 22, 2014, 21:30

From a recent email about “changes to your terms and conditions”. (“Secure Key” is their dedicated keyfob 2-factor solution; it’s currently required both to log in and to pay a new payee. It’s rather well done.)

These changes will also enable us to introduce some enhancements to our service over the coming months. You’ll still have access to the full Internet Banking service by logging on with your Secure Key, but in addition, you’ll also be able to log in to a limited service when you don’t use your Secure Key – you’ll simply need to verify your identity by providing other security information we request. We’ll contact you again to let you know when this new feature becomes available to you.

Full details of all the changes can be found below which you should read carefully. If you choose not to accept the changes, you have the right to ask us to stop providing you with the [Personal Internet Banking] service, before they come into effect. If we don’t hear from you, we’ll assume that you accept the changes.

Translation: we are lowering the security we use to protect your account information from unauthorised viewing and, as long as you still want to be able to access your account online at all, there’s absolutely nothing you can do about it.

http://feedproxy.google.com/~r/HackingForChrist/~3/Fu8-Gb2J3UQ/


Amy Tsay: What Healthy Relationships Teach Us About Healthy Communities

Friday, August 22, 2014, 20:42

In organizations where communities form (whether around a product, mission, or otherwise), there is often a sense of perplexity or trepidation around how to engage with them. What is the proper way to talk to community members? How do I work with them, and what can I do to keep the community healthy and growing? The good news is, if you know what it takes to have a healthy personal relationship, you already know how to build a healthy community.

Prioritize them

In a good relationship, we prioritize the other person. At Mozilla, the QA team makes it a point to respond to volunteer contributors within a day or two. A lack of response is one of the top reasons why people leave online communities, so it’s important not to keep them hanging. It doesn’t feel good to volunteer your time on a project only to be left waiting when you ask questions or request feedback, just as it would if your partner doesn’t return your phone calls.

Be authentic

Authenticity and honesty in a relationship are the building blocks of trust. If you make a mistake, admit it and set it right. Your tone and word choice will reflect your state of mind, so be aware of it when composing a message. When you come from a place of caring and desire to do what’s right for the community, instead of a place of fear or insecurity, your words and actions will foster trust.

Be appreciative

Strong relationships are formed when both parties value and appreciate each other. It’s a great feeling when you take out the trash or do the dishes, and it’s noticed and praised. Make it a ritual to say thanks to community members who make an impact, preferably on the spot, and publicly if possible and appropriate.

Be their champion

Be prepared to go to bat for the community. I was once in a relationship with a partner who would not defend me in situations where I was being mistreated; it didn’t end well. It feels nice to be advocated for, to be championed, and it creates a strong foundation. When you discover a roadblock or grievance, take the time to investigate and talk to the people who can make it right. The community will feel heard and valued.

Empathize

The processes and programs that support community participation require an understanding of motivation. To understand motivation, you have to be able to empathize. Everyone views the world from their own unique perspectives, so it’s important to try and understand them, even if they’re different from your own. 

Set expectations

Understand your organization’s limitations, as well as your own, and communicate them. If your partner expects you to be home at a certain time and you don’t show up, the anger you encounter likely has more to do with your not saying you’d be late than with the lateness itself.

Guidelines and rules for participation are important components as well. I once featured content from a community member and was met by an angry online mob, because although the content was great, the member hadn’t reached a certain level of status. The guidelines didn’t cover eligibility for featuring, and up until then only longer-term participants had been featured, so the community’s expectations were not met.

Not apples to apples

I would never want to get anyone in trouble by suggesting they treat their community members exactly the same as their partners. Answering emails from anyone while having dinner with your loved one is not advised. The take-away is there isn’t any mystery to interacting with a community. Many of the ingredients for a healthy community are ones found in healthy relationships, and most reassuring of all, we already know what they are.


https://mozamy.wordpress.com/2014/08/22/what-healthy-relationships-teach-us-about-healthy-communities/


Robert Kaiser: Mirror, Mirror: Trek Convention and FLOSS Conferences

Friday, August 22, 2014, 19:09
It's been a while since I did any blogging, but that doesn't mean I haven't been doing anything - on the contrary, I have been too busy to blog, basically. We had a few Firefox releases where I scrambled until the last day of the beta phase to make sure we keep our crash rates as low as our users probably expect by now, I did some prototyping work on QA dashboards (with already-helpful results and more to come) and helped in other process improvements on the Firefox Quality team, worked with different teams to improve stability of our blocklist ping "ADI" data, and finally even was at a QA work week and a vacation in the US. So plenty of stuff done, and I hope to get to blog about at least some pieces of that in the next weeks and months.

That said, one major part of my recent vacation was the Star Trek Las Vegas Convention, which I attended the second time after last year. Since back then, I wanted to blog about some interesting parallels I found between that event (I can't compare to other conventions, as I've never been to any of those) and some Free, Libre and Open Source Software (FLOSS) conferences I've been to, most notably FOSDEM, but also the larger Mozilla events.
Of course, there's the big events in the big rooms and the official schedule - at the conferences it's the keynotes and presentations of developers about what's new in their software, what they learned or where we should go; at the convention it's actors and other guests talking about their experiences, what's new in their lives, and entertaining the crowd - both with questions from the audience. Of course, the topics are wildly different. And there's booths at both, also quite a bit different, as it's autograph and sales booths on one side, and mainly info booths on the other, though there are geeky T-shirts sold at both types of events. ;-)

The largest parallels I found, though, are about the mass of people that are there:
For one thing, the "hallway track" of talking to and meeting other attendees is definitely a main attraction and big piece of the life of the events on both "sides" there. Old friendships are being revived, new found, and the somewhat geeky commonalities are being celebrated and lead to tons of fun and involved conversations - not just the old fun bickering between vi and emacs or Kirk and Picard fans (or different desktop environments / different series and movies). :)
For the other, I learned that both types of events are in the end more about the "regular" attendees than the speakers, even if the latter end up being featured at both. Especially the recurring attendees go there because they want to meet and interact with all the other people going there, with the official schedule being the icing on the cake, really. Not that it would be unimportant or unneeded, but it's not as much the main attraction as people on the outside, and possibly even the organizers, might think. Also, going there means you do for a few days not have to hide your "geekiness" from your surroundings and can actively show and celebrate it. There's also some amount of a "do good" atmosphere in both those communities.
And both events, esp. the Trek and Mozilla ones, tend to have a very inclusive atmosphere of embracing everyone else, no matter what their physical appearance, gender or other social components. And actually, given how deeply that inclusive spirit has been anchored into the Star Trek productions by Gene Roddenberry himself, this might even run deeper in the fans there than it does in the FLOSS world. Notably, I saw a much larger share of women and people of color at the Star Trek Conventions than I see at FLOSS conferences - my guess is that at least a third of the Trek fans in Las Vegas were female, for example. I guess we need some more role models in the style of Nichelle Nichols and others in the FLOSS scene.

All in all, there's a lot of similarities and still quite some differences - but quite a twist on an alternate universe like the one depicted in Mirror, Mirror and other episodes: here it's a different crowd with a similar spirit, not the same people with different mindsets and behaviors.
As a very social person, I love attending and immersing myself in both types of events, and I somewhat wonder if and how we should have some more cross-pollination between those communities.
I for sure will be seen on more FLOSS and Mozilla events as well as more Star Trek conventions! :)

http://home.kairo.at/blog/2014-08/mirror_mirror_trek_convention_and_floss


Peter Bengtsson: premailer now with 100% test coverage

Friday, August 22, 2014, 08:10

One of my most popular GitHub Open Source projects is premailer. It's a Python library for combining HTML and CSS into HTML with all of the CSS inlined as style attributes on the tags. This is a useful and necessary technique when sending HTML emails, because you can't send those with an external CSS file (or even a CSS style tag in many cases).
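
A minimal usage sketch (output shown approximately):

from premailer import transform

html = """<html>
<head><style>p { color: red; }</style></head>
<body><p>Hello</p></body>
</html>"""

# transform() moves the CSS rules onto the matching tags as style attributes.
print(transform(html))
# roughly: <html><head></head><body><p style="color:red">Hello</p></body></html>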

The project has had 23 contributors so far, and as always, people come in, scratch some itch they have, and then leave. I really try to keep good test coverage, and when people come with code I almost always require that it come with tests too.

But sometimes you miss things. Also, this project was born as a weekend hack that slowly morphed into an actual package and its own repository and I bet there was code from that day that was never fully test covered.

So today I combed through the code and plugged all the holes where there wasn't test coverage.
Also, I set up Coveralls (project page) which is an awesome service that hooks itself up with Travis CI so that on every build and every Pull Request, the tests are run with --with-cover on nosetests and that output is reported to Coveralls.

The relevant changes you need to make are:

1) You need to go to coveralls.io (sign in with your GitHub account) and add the repo.
2) Edit your .travis.yml file to contain the following:

before_install:
    - pip install coverage
...
after_success:
    - pip install coveralls
    - coveralls

And you need to execute your tests so that coverage is calculated (the coverage module stores everything in a .coverage file, which coveralls analyzes and sends). So in my case I changed to this:

script:
    - nosetests premailer --with-cover --cover-erase --cover-package=premailer

3) You must also give the coverage module some clues so that it reports on only the relevant files. This goes in a .coveragerc file; here's what mine looked like:

[run]
source = premailer

[report]
omit = premailer/test*

Now, I get to have a cute “coverage: 100%” badge in the README, and when people post pull requests, Coveralls will post a comment reflecting how the pull request changes the test coverage.

I am so grateful for all these wonderful tools. And it's all free too!

http://www.peterbe.com/plog/premailer-100percent-coverage


Mozilla WebDev Community: Beer and Tell – August 2014

Thursday, August 21, 2014, 22:26

Once a month, web developers from across the Mozilla Project get together to upvote stories on Hacker News from each of our blogs. While we’re together, we usually end up sharing a bit about our side projects over beers, which is why we call this meetup “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Frederik Braun: Room Availability in the Berlin Office

freddyb shared (via a ghost presentation by yours truly) a small webapp he made that shows the current availability of meeting rooms in the Mozilla Berlin office. The app reads room availability from Zimbra, which Mozilla uses for calendaring and booking meeting rooms. It also uses moment.js for rendering relative dates to let you know when a room will be free.

The discussion following the presentation brought up a few similar apps that other Mozilla offices had made to show off their availability, such as the Vancouver office’s yvr-conf-free and the Toronto office’s yyz-conf-free.

Nigel Babu: hgstats

nigelb shared (via another ghost presentation, this time split between myself and laura) hgstats, which shows publicly-available graphs of the general health of Mozilla’s mercurial servers. This includes CPU usage, load, swap, and more. The main magic of the app is to load images from graphite, which are publicly visible, while graphite itself isn’t.

nigelb has offered a bounty of beer for anyone who reviews the app code for him.

Pomax: Inkcyclopedia

Pomax shared an early preview of Inkcyclopedia, an online encyclopedia of ink colors. Essentially, Pomax bought roughly 170 different kinds of ink, wrote down samples with all of them, photographed them, and then collected those images along with the kind of ink used for each. Once finished, the site will be able to accept user-submitted samples and analyze them to attempt to identify the color and associate it with the ink used. Unsurprisingly, the site is able to do this using the RGBAnalyse library that Pomax shared during the last Beer and Tell, in tandem with RgbQuant.js.

Sathya Gunasekaran: screen-share

gsathya shared a screencast showing off a project that has one browser window running a WebGL game and sharing its screen with another browser window via WebRTC. The demo currently uses Chrome’s desktopCapture API for recording the screen before sending it to the listener over WebRTC.


Alas, we were unable to beat Hacker News’s voting ring detection. But at least we had fun!

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

https://blog.mozilla.org/webdev/2014/08/21/beer-and-tell-august-2014/


