
Planet Mozilla - https://planet.mozilla.org/

Source: http://planet.mozilla.org/. This diary is generated from the public RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that source is updated; it may not match the content of the original page. The feed was created automatically at the request of its readers.


Mozilla B-Team: happy bmo push day!

Wednesday, October 31, 2018, 00:28

happy bmo push day!

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1487171] Allow setting bug flags when creating/updating attachment with API
  • [1497077] Convert links, image/iframe sources, form actions to absolute path
  • [1469733] Fix scrolling glitch on Safari
  • [1496057] Security bugs report october update
  • [1499905] Update BMO enter bug workflow to include Data Science
  • [1370855] Add a…

View On WordPress

https://mozilla-bteam.tumblr.com/post/179599079038


Chris H-C: New Laptop Setup: Stickers

Tuesday, October 30, 2018, 20:46

Photograph of a laptop partially covered in mozilla stickers

As I mentioned before, I underwent a hardware refresh and set up a laptop. However, I failed to mention the most important consideration: laptops come with blank canvases waiting for stickers. So let’s dive into what it means to have an empty laptop lid and a drawer full of stickers.

On :bwinton’s recommendation I acquired a blank Gelaskin. This will in the future allow me to remove and retain all of the stickers when I retire the laptop, or when I just decide to start afresh.

I was surprised twice by the ‘skin. Firstly, I was expecting it to be clear. Luckily, a white top on the black laptop makes a strong statement that I like so I’m a little glad. Also a surprise: the curved edges. There is a very clear type of laptop this is for (macbooks) and mine is not of that type. These were just minor things. A little trimming of the long edge later, and I was in business.

With the canvas thus prepared the question became how to fill it. I imagine there are as many schools of thought in this as there are people with laptops, but this is my approach when I have a blank laptop and quite a few stickers stockpiled:

I’m loath to add anything with dates on it from events I didn’t bring this laptop to. This means no All Hands stickers (until December), and no conferences.

I’m also not planning on layering them over each other too much. Corners can overlap, but aside from censoring the top-right sticker’s profanity (about which I am unduly proud) I want to let them speak for themselves in their entirety.

I left some significant space. Not because I know how to use negative space (I mean, look at it) but because I expect to greatly increase my supply of stickers that need applying in the near term and I’ll need the room to grow.

It makes for an imbalanced, unjustified, off-centre melange of stickers that I just have to make my peace with. I will never not see the fractional radians the Berlin sticker is off. No one will fail to notice the millimetres from true the Mozilla sticker in the “middle” is. I didn’t use a ruler, and now that they are applied there is no way to change them. So sticker application at some point becomes an intersection between accepting one’s fallibility and learning to accept the results of permanent actions.

And it is also an exercise in impermanence. The top-left is the last of my “Telemetry From Outer Space” rectangles. I could (and probably will) print more, but I will change the wording, and the colouration will be slightly different. This is the last of that cohort, never again to exist unstuck.

But enough philosophizing. Stickers! They’re great.

If you want to make some to bring to an event in the near future, I have a guide you may find useful.

:chutten

https://chuttenblog.wordpress.com/2018/10/30/new-laptop-setup-stickers/


Cameron Kaiser: The Space Grey Mac mini is too little, too late and too much

Tuesday, October 30, 2018, 19:42
I've got a stack of G4 minis here, one of which will probably be repurposed to act as a network bridge with NetBSD, and a white C2D mini which is my 10.6 test machine. They were good boxes when I needed them and they still work great, but Apple to its great shame has really let the littlest Mac rot. Now we have the Space Grey mini, ignoring the disappointing and now almost mold-encrusted 2014 refresh which was one step forward and two steps back, starting at $800 with a quadcore i3, 8GB of memory and 128GB of SSD. The pictures on Ars Technica also show Apple's secret sauce T2 chip on-board.

If you really, really, really want an updated mini, well, here's your chance. But with all the delays in production and Apple's bizarrely variable loadouts over the years the mini almost doesn't matter anymore and the price isn't cheap Mac territory anymore either (remember that the first G4 Mac mini started at $500 in 2005 and people even complained that was too much). If you want low-end, then you're going to buy a NUC-type device or an ARM board and jam that into a tiny case, and you can do it for less unless you need a crapload of external storage (the four Thunderbolt 3 ports on the Space Grey mini are admittedly quite compelling). You can even go Power ISA if you want to: the "Tiny Talos" a/k/a Raptor Blackbird is just around the corner, with the same POWER9 goodness of the bigger T2 systems in a single socket and the (in fairness: unofficial) aim is to get it under $700. That's what I'm gonna buy. Heck, if I didn't have the objections I do to x86, I could probably buy a lot more off-the-shelf for $800 and get more out of it since I'm already transitioning to Linux at home anyway. Why would I bother with chaining myself to the sinking ship that is macOS when it's clear Apple's bottom line is all about iOS?

Don't get me wrong: I'm glad to see Apple at least make a token move to take their computer lines seriously, and the upgrade, though horribly delayed and notable more for its tardiness than what's actually in it, is truly welcome. It would also build optimism and some much-needed good faith if whatever the next Mac Pro is turns out to be more than vapourware. But I've moved on, and while I like my old minis, this one wouldn't lure me back.

http://tenfourfox.blogspot.com/2018/10/the-space-grey-mac-mini-is-too-little.html


Firefox UX: The User Journey for Firefox Extensions Discovery

Tuesday, October 30, 2018, 07:44

The ability to customize and extend Firefox is an important part of Firefox’s value to many users. Extensions are small tools that allow developers, and the users who install them, to modify, customize, and extend the functionality of Firefox. For example, during our workflows research in 2016, we interviewed a participant who was a graduate student in Milwaukee, Wisconsin. While she used Safari as her primary browser for everyday browsing, she used Firefox specifically for her academic work because the extension Zotero was the best choice for keeping track of her academic work and citations within the browser. The features offered by Zotero aren’t built into Firefox, but are added to Firefox by an independent developer.

Popular categories of extensions include ad blockers, password managers, and video downloaders. Given the variety of extensions and the benefits to customization they offer, why is it that only 40% of Firefox users have installed at least one extension? Certainly, some portion of Firefox users may be aware of extensions but have no need or desire to install one. However, some users could find value in some extensions but simply may not be aware of the existence of extensions in the first place.

Why not? How can Mozilla facilitate the extension discovery process?

A fundamental assumption about the extension discovery process is that users will learn about extensions through the browser, through word of mouth, or through searching to solve a specific problem. We were interested in setting aside this assumption and observing the steps participants take and the decisions they make in their journey toward possibly discovering extensions. To this end, the Firefox user research team ran two small qualitative studies to understand better how participants solved a particular problem in the browser that could be solved by installing an extension. Our study helped us understand how participants do — or do not — discover a specific category of extension.

Our Study

Because ad blockers are easily the most popular type of extension by installation volume on Firefox (more generally, some kind of ad blocking was in use on 615 million devices worldwide in 2017), we chose them as an obvious candidate for this study. Their popularity, and many users’ perception of some advertising as invasive and distracting, made them a good fit for posing a common, solvable problem for participants to engage with. (Please do not take our choice of this category of extensions for this study as an official or unofficial endorsement or statement from Mozilla about ad blocking as a user tool.)

Some ad blocker extensions currently available on AMO.

In order to understand better how users might discover extensions (and why they chose a particular path), we conducted two small qualitative studies. The first study was conducted in person in our Portland, Oregon offices with five participants. To gather more data, we then conducted a remote, unmoderated study using usertesting.com with nine participants in the US, UK, and Canada. In total, we worked with fourteen participants. These participants used Firefox as their primary web browser and were screened to make certain that they had no previous experience with extensions in Firefox.

In both iterations of the study, we asked participants to complete the following task in Firefox:

Let’s imagine for a moment that you are tired of seeing advertising in Firefox while you are reading a news story online. You feel like ads have become too distracting and you are tired of ads following you around while you visit different sites. You want to figure out a way to make advertisements go away while you browse in Firefox. How would you go about doing that? Show us. Take as much time as you need to reach a solution that you are satisfied with.

Participants fell into roughly two categories: those who installed an ad blocking extension — such as Ad Block Plus or uBlock Origin — and those who did not. The participants who did not install an extension came up with a more diverse set of solutions.

First, among the fourteen participants across the two studies, only six completed the task by discovering an ad blocking extension (two of these did not install the extension for other reasons). The participants who completed the task in this manner all followed a similar path: they searched via a search engine to seek an answer. Most of these participants were not satisfied with accepting only the extensions that were surfaced from search results. Instead, these participants exhibited a high degree of information literacy and used independent reviews to assess and triangulate the quality, reputation, and reliability of the ad blocking extension they selected. More generally, they used the task as an opportunity to build knowledge about online advertising and ad blocking; they looked outside of their familiar tools and information sources.

User journey for a participant who discovered an ad blocker to complete the task.

One participant (who we’ll call “Andrew”) in our first study is a technically savvy user but does not frequently customize the software he uses. When prompted by our task, he said, “I’m going to do what I always do in situations where I don’t know the answer…I’m going to search around for information.” In this case, Andrew recalled that he had heard someone once mention something about blocking ads online and searched for “firefox ad block” in DuckDuckGo. Looking through the search results, he found a review on CNET.com that described various ad blockers, including uBlock Origin. While Andrew says that he “trusts CNET,” he wanted to find a second review source that would provide him with more information. After reading an additional review, he follows a link to the uBlock Firefox extension page, reads the reviews and ratings there (specifically, he is looking for a large number of positive reviews as a proxy for reliability and trustworthiness), and installs the extension. For his final step, Andrew visits two sites (wired.com and huffingtonpost.com) that he believes use a large number of display ads and confirms that the ad blocker is suppressing their display.

The remaining participants did not install an ad blocking extension to complete the task. Unlike the participants who ultimately installed an ad blocking extension, these participants did not use a search engine or an outside information source in order to seek a solution to the task. Instead, they fell back on and looked inside tools and resources with which they were already familiar. Also, unlike the participants who were successful installing an ad blocking extension, these participants typically said they were dissatisfied with their solutions.

These participants generally followed two main routes to complete the task: they either looked within the browser in Preferences or other menus or they looked into the ads themselves. For the participants who looked in Preferences, many found privacy and security related options, but none related to blocking advertising. These participants did not seek out outside information sources via a search engine and eventually they gave up on the task.

A participant (“Marion”) in our first study recalled that she had once seen a link in an advertisement offering her “Ad Choices.” Ad Choices is a self-regulatory program in North America and the EU that is “aimed to give consumers enhanced transparency and control.” Marion navigated to a site she remembered had display advertising that offered a link to information on the program. She followed the link, which took her to a long list of options for opting out of specific advertising themes and networks. Marion selected and deselected the choices she believed were relevant to her. Unfortunately, the confirmations from the various ad networks either did not work or did not provide much certainty about what her actions had accomplished. She navigated back to the site she had visited previously and could not discern any real difference in display advertising before and after enrolling in the Ad Choices program.

Marion attempting to opt out of advertising online using Ad Choices.

There were some limitations to this research. The task provided to participants is only a single situation that would cue them to seek out an extension to solve a problem. In reality, we imagine a cumulative sense of frustration might prompt users to seek out a solution, and we would anticipate that they may hear about ad blocking from word of mouth or a news story (as participants in a similar study, run at the same time with participants who were familiar with extensions, demonstrated). Also, ad blocking is a specific domain of extension that has its own players and ecosystem. If we had asked participants, for example, to find a solution to managing their passwords, we would expect a different set of solutions.

At a high level, those participants who displayed elements of traditionally defined digital information literacy were successful in discovering extensions to complete the task. This observation emerged through the analysis process in our second study. In future research, it would be useful to include some measurement of participants’ digital literacy or include related tasks to determine the strength of this relationship.

Nevertheless, this study provided us with insight into how users seek out information to discover solutions to problems in their browser. We gathered useful data on users’ mental model of the software discovery process, the information sources they consulted to discover new functionality, and finally what kinds of information were important, reliable, and trustworthy. Those participants who did not seek out independent answers fell back on the tools and information with which they were already familiar; these participants were also less satisfied with their solutions. These results demonstrate a need to provide cues about extensions within tools users are already familiar with, such as Firefox’s Preferences/Options menu. Further, we need to surface cues in other channels (such as search engine results pages) that could lead Firefox users to relevant extensions.

Thanks to Philip Walmsley, Markus Jaritz, and Emanuela Damiani for their participation in this research project. Thanks to Jennifer Davidson and Heather McGaw for proofreading and suggestions.


The User Journey for Firefox Extensions Discovery was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

https://medium.com/firefox-ux/the-user-journey-for-firefox-extensions-discovery-83f03b5f8ee9?source=rss----da1138ddf2cd---4


Firefox UX: Why Do We Conduct Qualitative User Research?

Tuesday, October 30, 2018, 07:35

The following post is based on a talk I presented at AmuseConf in Budapest about interviewing users.

I recently had a conversation with a former colleague who now works for a major social network. In the course of our conversation this former colleague said to me, “You know, we have all the data in the world. We know what our users are doing and have analytics to track and measure it, but we don’t know why they do it. We don’t have any frameworks for understanding behavior outside of what we speculate about inside the building.”

In many technology organizations, the default assumption of user research is that it will be primarily quantitative research such as telemetry analyses, surveys, and A/B testing. Technology and business organizations often default to a positivist worldview and subsequently believe that quantitative results that provide numeric measures have the most value. The hype surrounding big data methods (and the billions spent on marketing by vendors making certain you know about their enterprise big data tools) goes hand-in-hand with the perceived correctness of this set of assumptions. Given this ecosystem of belief, it’s not surprising that user research employing quantitative methods is perceived by many in our industry as the only user research an organization would need to conduct.

I work as a Lead User Researcher on Firefox. While I do conduct some quantitative user research, the focus of most of my work is qualitative research. In the technology environment described above, the qualitative research we conduct is sometimes met with skepticism. Some audiences believe our work is too “subjective” or “not reproducible.” Others may believe we simply run antiquated, market research-style focus groups (for the record, the Mozilla UR team doesn’t employ focus groups as a methodology).

I want to explain why qualitative research methods are essential for technology user research because of one well-documented and consistently observed facet of human social life: the concept of homophily.

This is a map of New York City based on the ethnicity of residents. Red is White, Blue is Black, Green is Asian, Orange is Hispanic, Yellow is Other, and each dot is 25 residents. Of course, there are historical and cultural reasons for the clustering, but these factors are part of the overall social dynamic.
Source:
https://www.flickr.com/photos/walkingsf/

Homophily is the tendency of individuals to associate and bond with similar others (the now classic study of homophily in social networks). In other words, individuals are more likely to associate with others based on similarities rather than differences. Social scientists have studied social actors associating along multiple types of ascribed characteristics (status homophily) including gender, ethnicity, economic and social status, education, and occupation. Further, homophily exists among groups of individuals based on internal characteristics (value homophily) including values, beliefs, and attitudes. Studies have demonstrated both status and value homophilic clustering in smaller ethnographic studies and larger scale analyses of social network associations such as political beliefs on Twitter.

Photos on Flickr taken in NY by tourists and locals. Blue pictures are by locals. Red pictures are by tourists. Yellow pictures might be by either. Source: https://www.flickr.com/photos/walkingsf

I bring up this concept to emphasize how those of us who work in technology form our own homophilic bubble. We share similar experiences, information, beliefs, and processes about not just how to design and build products and services, but also how many of us use those products and services. These beliefs and behaviors become reinforced through the conversations we have with colleagues, the news we read daily, and the conferences we attend to learn from others within our industry. The most insidious part of this homophilic bubble is how natural and self-evident the beliefs, knowledge, and behaviors generated within it appear to be.

Here’s another fact: other attitudes, beliefs, and motivations exist outside of our technology industry bubble. Many members of these groups use our products and services. Some groups share values and statuses similar to the technology world’s, but there are other, different values and different statuses. Further, there are values and statuses so radically different from ours that they are not even assumed in the common vocabulary of our own technology industry homophilic bubble. To borrow from former US Secretary of Defense Donald Rumsfeld, “there are also unknown unknowns, things we don’t know we don’t know.”

This is all to say that insights, answers, and explanations are limited by the breadth of a researcher’s understanding of users’ behaviors. The only way to increase the breadth of that understanding is by actually interacting with and investigating behaviors, beliefs, and assumptions outside of our own behaviors, beliefs, and assumptions. Qualitative research provides multiple methodologies for getting outside of our homophilic bubble. We conduct in situ interviews, diary studies, and user tests (among other qualitative methods) in order to uncover these insights and unknown unknowns. The most exciting part of my own work is feeling surprised with a new insight or observation of what our users do, say, and believe. In research on various topics, we’ve seen and heard so many surprising answers.

There is no one research method that can answer all of our questions. If the questions we are asking about user behavior, attitudes, and beliefs are based solely on assumptions formed in our homophilic bubble, we will not generate accurate insights about our users no matter how large the dataset. In other words, we only know what we know and can only ask questions framed around what we know. If we are measuring, we can only measure what we know to ask. Quantitative user research needs qualitative user research methods in order to know what we should be measuring and to provide examples, theories, and explanations. Likewise, qualitative research needs quantitative research to measure and validate our work as well as to uncover larger patterns we cannot see.

An example of quantitative and qualitative research working iteratively.

It is a disservice to users and ourselves to ask only how much or how often and to avoid understanding why or how. User research methods work best as an accumulation of triangulation points of data in a mutually supportive, on-going inquiry. More data points from multiple methods mean deeper insights and a deeper understanding. A deeper connection with our users means more human-centered and usable technology products and services. We can only get at that deeper connection by leaving the technology bubble and engaging with the complex, messy world outside of it. Have the courage to feel surprised and your assumptions challenged.

(Thanks to my colleague Gemma Petrie for her thoughts and suggestions.)

Originally published at blog.mozilla.org on October 30, 2014.


Why Do We Conduct Qualitative User Research? was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

https://medium.com/firefox-ux/why-do-we-conduct-qualitative-user-research-ed28d8cfe2e2?source=rss----da1138ddf2cd---4


Francois Marier: Installing Vidyo on Ubuntu 18.04

Tuesday, October 30, 2018, 01:46

Following these instructions as well as the comments in there, I was able to get Vidyo, the proprietary videoconferencing system that Mozilla uses internally, to work on Ubuntu 18.04 (Bionic Beaver). The same instructions should work on recent versions of Debian too.

Installing dependencies

First of all, install all of the package dependencies:

sudo apt install libqt4-designer libqt4-opengl libqt4-svg libqtgui4 libqtwebkit4 sni-qt overlay-scrollbar-gtk2 libcanberra-gtk-module

Then, ensure you have a system tray application running. This should be the case for most desktop environments.

Building a custom Vidyo package

Download version 3.6.3 from the CERN Vidyo Portal but don't expect to be able to install it right away.

You need to first hack the package in order to remove obsolete dependencies.
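A minimal sketch of that step using dpkg-deb (the installer filename and the exact dependency to drop are illustrative; adjust them to match the package you downloaded):

dpkg-deb -x VidyoDesktopInstaller-*.deb vidyo            # unpack the filesystem tree
dpkg-deb -e VidyoDesktopInstaller-*.deb vidyo/DEBIAN     # unpack the control files
sed -i 's/libqt4-gui, //' vidyo/DEBIAN/control           # drop an obsolete dependency
dpkg-deb -b vidyo vidyodesktop-custom.deb                # rebuild the package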

Once that's done, install the resulting package:

sudo dpkg -i vidyodesktop-custom.deb

Packaging fixes and configuration

There are a few more things to fix before it's ready to be used.

First, fix the ownership on the main executable:

sudo chown root:root /usr/bin/VidyoDesktop

Then disable autostart, since you probably don't want to keep the client running all of the time (and listening on the network), given that it hasn't received any updates in a long time and has apparently been abandoned by Vidyo:

sudo rm /etc/xdg/autostart/VidyoDesktop.desktop

Remove any old configs in your home directory that could interfere with this version:

rm -rf ~/.vidyo ~/.config/Vidyo

Finally, launch VidyoDesktop and go into the settings to check "Always use VidyoProxy".

http://feeding.cloud.geek.nz/posts/installing-vidyo-on-ubuntu-1804/


Daniel Pocock: FOSDEM 2019 Real-Time Communications Call for Participation

Monday, October 29, 2018, 22:57

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2019 takes place 2-3 February 2019 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time Communications dev-room and Real-Time lounge are about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room, and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 3rd of February 2019. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 2nd of December. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software, about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming; volunteers are needed to assist with this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 2 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (this text) to other mailing lists

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 1st of February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email. If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org), admin contact: planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), admin contact: ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), admin contact: planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), admin contact: planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

https://danielpocock.com/fosdem-2019-rtc-cfp


K Lars Lohn: Things Gateway - the Refrigerator and the Samsung Buttons

Monday, October 29, 2018, 21:47

Perhaps I'm being overly effusive, but right now, the Samsung SmartThings Button is my Holy Grail of the Internet of Things.  Coupled with the Things Gateway from Mozilla and my own Web Thing API rule system, the Samsung Button grants me invincibility at solving some vexing Smart Home automation tasks.

Consider this problem: my kitchen is in an old decrepit farm house built in 1920.  The kitchen has a challenging layout with no good space for any modern appliance.  The only wall for the refrigerator is annoyingly narrower than an average refrigerator.  Unfortunately, the only switches for the kitchen and pantry lights are on that wall, too.  The refrigerator blocks the switches to the point they can only be felt, not seen.

For twenty years, I've been fine slipping my hand into the dusty cobwebs behind the refrigerator to turn on the lights.  I can foresee the end of this era.  I'm imagining two Samsung Buttons magnetically tacked to a convenient and accessible side of the refrigerator: one for the pantry and one for the kitchen.

The pantry light is the most common light to be inadvertently left on for hours at a time.  Nobody wants to reach behind the refrigerator to turn off the light.  This gives me an idea.  I'm going to make the new pantry button turn on the light for only 10 minutes at a time.  Rarely is anyone in there for longer than that.  Sometimes, however, it is handy to have it on for longer.  So I'll make each button press add ten minutes to the timer.  Need the light on for 30 minutes?  Press the button three times.  To appease the diligent one in the household that always remembers to turn lights off, a long press to the button turns the light off and cancels the timer:
class PantryLightTimerRule(Rule):

    def register_triggers(self):
        self.delay_timer = DelayTimer(self.config, "adjustable_delay", "10m")
        self.PantryButton.subscribe_to_event('pressed')
        self.PantryButton.subscribe_to_event('longPressed')
        return (self.PantryButton, self.delay_timer, self.PantryLight)

    def action(self, the_triggering_thing, the_trigger_event, new_value):
        if the_triggering_thing is self.PantryButton and the_trigger_event == 'pressed':
            if self.PantryLight.on:
                self.delay_timer.add_time()  # add ten minutes
            else:
                self.PantryLight.on = True

        elif the_triggering_thing is self.PantryButton and the_trigger_event == 'longPressed':
            self.PantryLight.on = False

        elif the_triggering_thing is self.delay_timer:
            self.PantryLight.on = False

        elif the_triggering_thing is self.PantryLight and new_value is False:
            self.delay_timer.cancel()

        elif the_triggering_thing is self.PantryLight and new_value is True:
            self.delay_timer.add_time()  # add ten minutes

(see this code in situ in the timer_light_rule.py file in the pywot rule system demo directory)
Like all rules, there are two parts: registering the things that trigger the rule and the action that the rule takes when triggered.

This rule uses three triggers: the PantryButton (as played by the Samsung Button), a timer called delay_timer, and the PantryLight (as played by an IKEA bulb).  The register_triggers method creates the timer with a default time increment of 10 minutes.  It subscribes to the "pressed" and "longPressed" events that the PantryButton can emit.  It returns a tuple containing these two things together with a reference to the PantryLight itself.

How is the PantryLight itself considered to be a trigger?  Since the bulb in the pantry is to be a smart bulb, any other controller in the house could theoretically turn it on.  No matter what turns it on, I want my timer rule to eventually turn it off.  Anytime something turns that light on, my rule will fire.

In the action method, in the last two lines you can see how I exploit the idea that anything turning the light on triggers the timer.  If the_triggering_thing is the PantryLight and it was turned from off to on, this rule will add time to the delay_timer.  If the delay_timer wasn't running, adding time to it will start it.

Further, going back up two more lines, you can see how turning off the light by any means cancels the delay_timer.

Going back up another line, you can see how I handle the timer naturally timing out.  It just turns off the light.  Yeah, that will result in the action method getting a message about the light turning off.  We've already seen that action will try to cancel the timer, but in this case, the timer isn't running anymore, so the cancel is ignored.

Finally, let's consider the top two cases of the action method.  For both, the_triggering_thing is the PantryButton. If the button is longPressed, it turns off the light which in turn will cancel the timer.  If there is a regular short press, the timer gets an additional ten minutes if the light was already on or the light gets turned on if it was off.

This seems to work really well, but it hasn't been in place for more than a few hours...
Now there's the case of the main kitchen light itself.  The room is wired for a single bulb in the middle of the ceiling.  That's worthless for properly lighting the space.  I adapted the bulb socket to be an outlet, and an LED shoplight has just enough cord to reach over the stove.  There's another over the kitchen counter and a third over the sink.  Each light has a different way to turn it on, all of which are awkward in one way or another.

This will change with the second Samsung button.  It seems we only use the kitchen lights in certain combinations.  Multiple presses to the Samsung button will cycle through these combinations in this order:
  1. all off
  2. stove light only
  3. stove light & counter light
  4. stove light, counter light & sink light
  5. counter light & sink light
  6. sink light only
  7. counter light only
class CombinationLightRule(Rule):

    def initial_state(self):
        self.index = 0
        self.combinations = [
            (False, False, False),
            (True, False, False),
            (True, True, False),
            (True, True, True),
            (False, True, True),
            (False, False, True),
            (False, True, False),
        ]

    def register_triggers(self):
        self.KitchenButton.subscribe_to_event('pressed')
        self.KitchenButton.subscribe_to_event('longPressed')
        return (self.KitchenButton, )

    def set_bulb_state(self):
        self.StoveLight.on = self.combinations[self.index][0]
        self.CounterLight.on = self.combinations[self.index][1]
        self.SinkLight.on = self.combinations[self.index][2]

    def action(self, the_triggering_thing, the_trigger_event, new_value):
        if the_trigger_event == "pressed":
            self.index = (self.index + 1) % len(self.combinations)
            self.set_bulb_state()

        elif the_trigger_event == "longPressed":
            self.index = 0
            self.set_bulb_state()

(see this code in situ in the combination_light_rule.py file in the pywot rule system demo directory)
This is just a simple finite state machine.  The three shop lights are controlled by the list, combinations, referenced by an index.  The only trigger is the KitchenButton (again, played by a Samsung Button).  If the button is short pressed the index is incremented modulo the number of combinations.  If the button is longPressed, the finite state machine is reset to state 0 and all the shop lights turned off.

These two Rules can be found in my pywot github repo.  To learn how to experiment with the Things Gateway with Python, see the Mozilla Project Things and my own post about setting up pywot.

I can think of so many applications for these Samsung Buttons, I may need to acquire a pallet of them...

http://www.twobraids.com/2018/10/things-gateway-refrigerator-and-samsung.html


Hacks.Mozilla.Org: Testing Privacy-Preserving Telemetry with Prio

Monday, October 29, 2018, 21:26

Building a browser is hard; building a good browser inevitably requires gathering a lot of data to make sure that things that work in the lab work in the field. But as soon as you gather data, you have to make sure you protect user privacy. We’re always looking at ways to improve the security of our data collection, and lately we’ve been experimenting with a really cool technique called Prio.

Currently, all the major browsers do more or less the same thing for data reporting: the browser collects a bunch of statistics and sends them back to the browser maker for analysis; in Firefox, we call this system Telemetry. The challenge with building a Telemetry system is that data is sensitive. In order to ensure that we are safeguarding our users’ privacy, Mozilla has built a set of transparent data practices which determine what we can collect and under what conditions. For particularly sensitive categories of data, we ask users to opt in to the collection and ensure that the data is handled securely.

We understand that this requires users to trust Mozilla — that we won’t misuse their data, that the data won’t be exposed in a breach, and that Mozilla won’t be compelled to provide access to the data by another party. In the future, we would prefer users to not have to just trust Mozilla, especially when we’re collecting data that is sufficiently sensitive to require an opt-in. This is why we’re exploring new ways to preserve your data privacy and security without compromising access to the information we need to build the best products and services.

Obviously, not collecting any data at all is best for privacy, but it also blinds us to real issues in the field, which makes it hard for us to build features — including privacy features — which we know our users want. This is a common problem and there has been quite a bit of work on what’s called “privacy-preserving data collection”, including systems developed by Google (RAPPOR, PROCHLO) and Apple. Each of these systems has advantages and disadvantages that are beyond the scope of this post, but suffice to say that this is an area of very active work.

In recent months, we’ve been experimenting with one such system: Prio, developed by Professor Dan Boneh and PhD student Henry Corrigan-Gibbs of Stanford University’s Computer Science department. The basic insight behind Prio is that for most purposes we don’t need to collect individual data, but rather only aggregates. Prio, which is in the public domain, lets Mozilla collect aggregate data without collecting anyone’s individual data. It does this by having the browser break the data up into two “shares”, each of which is sent to a different server. Individually the shares don’t tell you anything about the data being reported, but together they do. Each server collects the shares from all the clients and adds them up. If the servers then take their sum values and put them together, the result is the sum of all the users’ values. As long as one server is honest, then there’s no way to recover the individual values.
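To make the arithmetic behind that concrete, here is a toy Python sketch of the additive-sharing core (illustrative only: real Prio additionally attaches zero-knowledge proofs so the servers can reject malformed submissions):

import secrets

P = 2**61 - 1  # a public prime modulus; this particular choice is illustrative

def split_into_shares(value):
    # one share is uniformly random; the other is the difference mod P,
    # so each share on its own looks like random noise
    share_a = secrets.randbelow(P)
    share_b = (value - share_a) % P
    return share_a, share_b

# each client splits its private value and sends one share to each server
client_values = [3, 1, 4, 1, 5]
shares = [split_into_shares(v) for v in client_values]

# each server sums only the shares it received
sum_a = sum(a for a, _ in shares) % P
sum_b = sum(b for _, b in shares) % P

# combining the two sums reveals only the aggregate, never an individual value
assert (sum_a + sum_b) % P == sum(client_values) % P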

We’ve been working with the Stanford team to test Prio in Firefox. In the first stage of the experiment we want to make sure that it works efficiently at scale and produces the expected results. This is something that should just work, but as we mentioned before, building systems is a lot harder in practice than theory. In order to test our integration, we’re doing a simple deployment where we take nonsensitive data that we already collect using Telemetry and collect it via Prio as well. This lets us prove out the technology without interfering with our existing, careful handling of sensitive data. This part is in Nightly now and reporting back already. In order to process the data, we’ve integrated support for Prio into our Spark-based telemetry analysis system, so it automatically talks to the Prio servers to compute the aggregates.

Our initial results are promising: we’ve been running Prio in Nightly for 6 weeks, gathered over 3 million data values, and after fixing a small glitch where we were getting bogus results, our Prio results match our Telemetry results perfectly. Processing time and bandwidth also look good. Over the next few months we’ll be doing further testing to verify that Prio continues to produce the right answers and works well with our existing data pipeline.

Most importantly, in a production deployment we need to make sure that user privacy doesn’t depend on trusting a single party. This means distributing trust by selecting a third party (or parties) that users can have confidence in. This third party would never see any individual user data, but they would be responsible for keeping us honest by ensuring that we never see any individual user data either. To that end, it’s important to select a third party that users can trust; we’ll have more to say about this as we firm up our plans.

We don’t yet have concrete plans for what data we’ll protect with Prio and when. Once we’ve validated that it’s working as expected and provides the privacy guarantees we require, we can move forward in applying it where it is needed most. Expect to hear more from us in future, but for now it’s exciting to be able to take the first step towards privacy preserving data collection.

The post Testing Privacy-Preserving Telemetry with Prio appeared first on Mozilla Hacks - the Web developer blog.

https://hacks.mozilla.org/2018/10/testing-privacy-preserving-telemetry-with-prio/


The Firefox Frontier: Your browser is hijacked, now what?

Monday, October 29, 2018, 17:20

You’re looking for a cornbread recipe, but your search is redirected to a site crammed with ads. Your homepage is now a billboard for the most dubious products and services. … Read more

The post Your browser is hijacked, now what? appeared first on The Firefox Frontier.

https://blog.mozilla.org/firefox/your-browser-is-hijacked-now-what/


Rabimba: ARCore and Arkit: What is under the hood : Anchors and World Mapping (Part 1)

Monday, October 29, 2018, 11:08
Reading time: 7 min
Some of you know I have recently been experimenting a bit more with WebXR than with WebVR, and when we talk about mobile Mixed Reality, ARKit and ARCore play a pivotal role in mapping and understanding the environment inside our applications.

I am planning to write a series of blog posts on how you can start developing WebXR applications now and play with them, starting with the basics and then going on to use different features. But before that, I planned to pen down this series on how "world mapping" actually works in ARCore and ARKit, so that we have a better understanding of the Mixed Reality capabilities of the devices we will be working with.

Mapping: feature detection and anchors

Creating apps that work seamlessly with ARCore/ARKit requires a little bit of knowledge about the algorithms that work behind the scenes, and that involves knowing about Anchors.

What are anchors:

Anchors are your virtual markers in the real world. As a developer, you anchor any virtual object to the real world, and that 3D model will stay glued to that physical location in the real world. Anchors also get updated over time depending on the new information that the system learns. For example, if you anchor a pikachu 2 ft away from you and then actually walk towards your pikachu, and the system realises the actual distance is 2.1 ft, it will compensate for that. In real life we have a static coordinate system where every object has its own x,y,z coordinates. Anchors in devices like HoloLens override the rotation and position of the transform component.

How do Anchors stick?

If we follow the Google ARCore documentation, we see that an anchor is attached to something called "trackables": feature points and planes in an image. Planes are essentially clustered feature points. You can have a more in-depth look at what Google says an ARCore anchor does by reading their really nice Fundamentals. But for our purposes, we first need to understand what exactly these Feature Points are.

Feature Points: through the eyes of computer vision

Feature points are distinctive markers on an image that an algorithm can use to track specific things in that image. Normally any distinctive pattern, such as a T-junction or corner, is a good candidate. Alone they are not distinctive enough to be told apart reliably or to serve as dependable place markers on an image, so the neighbouring pixels of the image are also analyzed and saved as a descriptor.
Now a good anchor should have reliable feature points attached to it. The algorithm must be able to find the same physical space under different viewpoints. It should be able to accommodate the change in 
  • Camera Perspective
  • Rotation
  • Scale
  • Lighting
  • Motion blur and noise
Reliable Feature Points
This is an open research problem with multiple solutions on the table, each with its own set of problems. One of the most popular algorithms, from this paper by David G. Lowe in IJCV, is called Scale Invariant Feature Transform (SIFT). A follow-up work which claims even better speed, Speeded Up Robust Features (SURF) by Bay et al., was published at ECCV in 2006. Both, however, are patented at this point.

Microsoft HoloLens doesn't really need to do all this heavy lifting, since it can rely on an extensive amount of sensor data, especially the depth data from its infrared sensors. However, ARCore and ARKit don't enjoy those privileges and have to work with 2D images. Though we cannot say for sure which algorithm ARKit or ARCore actually uses, we can try to replicate the process with a patent-free algorithm to understand how it works.

D.I.Y Feature Detection and keypoint descriptors

To understand the process we will use an algorithm by Leutenegger et al. called BRISK. To detect features we must follow a multi-step process. A typical algorithm would adhere to the following two steps:
  1. Keypoint Detection: Detecting keypoints can be as simple as detecting corners, which essentially evaluates the contrast between neighbouring pixels. A common way to do that in a large-scale image is to blur the image to smooth out pixel contrast variation and then do edge detection on it. The rationale is that you would normally want to detect a tree or a house as a whole for reliable tracking, not every single twig or window. SIFT and SURF adhere to this approach. However, for real-time scenarios blurring adds a compute penalty which we don't want. In their paper "Machine Learning for High-Speed Corner Detection", Rosten and Drummond proposed a method called FAST, which analyzes the circular surrounding of each pixel p. If a certain number of connected pixels on that circle are all brighter or darker than the central pixel by some threshold, the algorithm has found a corner.
    Image credits: Rosten E., Drummond T. (2006) Machine Learning for High-Speed Corner Detection. Computer Vision – ECCV 2006.
    Now back to BRISK: for it, out of the 16-pixel circle, 9 consecutive pixels must be brighter or darker than the central one. BRISK also uses down-sized images, allowing it to achieve better invariance to scale.
  2. Keypoint Descriptor: The primary property of the detected keypoints should be that they are unique. The algorithm should be able to find the same feature in a different image with a different viewpoint or lighting. BRISK concatenates the brightness-comparison results between different pixels surrounding the centre keypoint into a 512-bit string.
    Image credits: S. Leutenegger, M. Chli and R. Y. Siegwart, “BRISK: Binary Robust invariant scalable keypoints”, 2011 International Conference on Computer Vision
    As we can see from the sample figure, the blue dots create the concentric circles and the red circles indicate the individually sampled areas. Based on these, the results of the brightness comparison are determined. BRISK also ensures rotational invariance. This is calculated from the largest gradients between two samples that are a long distance from each other.

Test out the code

To test out the algorithms we will use the reference implementations available in OpenCV. We can use a pip install to get it with Python bindings.
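Assuming the standard PyPI package name, that would be:

pip install opencv-python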

As test images, I used two images I had previously captured in my talks at All Things Open and the TechSpeaker meetup. One is an evening shot of the Louvre, which should have enough corners as well as overlapping edges, and the other a picture of my friend in a beer garden in portrait mode, to see how the algorithm fares against the blur already on the image.

Original Image Link

Original Image Link
Visualizing the Feature Points
We use the below small code snippet to use BRISK on the two images above to visualize the feature points
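A minimal version of that snippet might look like this (the filenames are illustrative):

import cv2

# 1. load the jpeg and convert it to grayscale
img = cv2.imread("louvre.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 2. initialize BRISK with the paper-suggested 4 octaves and an increased
#    threshold of 70 for a low number of highly reliable keypoints
brisk = cv2.BRISK_create(thresh=70, octaves=4)

# 3. get the keypoints and their descriptors in two arrays
keypoints, descriptors = brisk.detectAndCompute(gray, None)

# 4. draw each keypoint, showing its size and orientation through
#    circle diameter and angle
out = cv2.drawKeypoints(
    img, keypoints, None,
    flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("louvre_brisk.jpg", out)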


What the code does:
  1. Loads the jpeg and converts it into grayscale
  2. Initializes BRISK and runs it. We use the paper-suggested 4 octaves and an increased threshold of 70. This ensures we get a low number of highly reliable keypoints. As we will see below, we still got a lot of keypoints
  3. Uses detectAndCompute() to get two arrays from the algorithm holding the keypoints and their descriptors
  4. Draws the keypoints at their detected positions, indicating keypoint size and orientation through circle diameter and angle; the "DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS" flag does that.


As you can see, most of the keypoints for the Louvre are at the castle edges and other visible edges, with almost none on the floor. With the portrait-mode picture of my friend the result is more diverse, but it also shows some false positives, like the reflection on the glass. Considering how BRISK works, this is normal.

Conclusion

In this first part of understanding what powers ARCore/ARKit, we saw how basic feature detection works behind the scenes and tried our hand at replicating it. These spatial anchors are vital to virtual objects "glueing" to the real world. This demo and hands-on should help you understand why it is a good idea to design your apps so that users place objects in areas where the device has a chance to create anchors.
Placing a lot of objects on a single non-textured, smooth plane may produce an inconsistent experience, since it's just too hard to detect keypoints on a plain surface (now you know why). As a result, the objects may sometimes drift away if tracking is not good enough.
If your app design encourages placing objects near the corners of a floor or table, the app has a much better chance of working reliably. Also, Google's guide on anchor placement is an excellent read.

Their short recommendation is:
  • Release spatial anchors if you don’t need them. Each anchor costs CPU cycles which you can save
  • Keep the object close to the anchor. 
In our next post, we will see how we can use this knowledge to do basic SLAM.

https://blog.rabimba.com/2018/10/arcore-and-arkit-what-is-under-hood.html


Rabimba: ARCore and Arkit: What is under the hood : Anchors and World Mapping (Part 1)

Понедельник, 29 Октября 2018 г. 11:08 + в цитатник
Reading Time: 7 MIn
Some of you know I have been recently experimenting a bit more with WebXR than a WebVR and when we talk about mobile Mixed Reality, ARkit and ARCore is something which plays a pivotal role to map and understand the environment inside our applications.

I am planning to write a series of blog posts on how you can start developing WebXR applications now and play with them starting with the basics and then going on to using different features of it. But before that, I planned to pen down this series of how actually the "world mapping" works in arcore and arkit. So that we have a better understanding of the Mixed Reality capabilities of the devices we will be working with.

Mapping: feature detection and anchors

Creating apps that work seamlessly with ARCore/ARKit requires a little bit of knowledge about the algorithms that work behind the scenes, and that involves knowing about anchors.

What are anchors?

Anchors are your virtual markers in the real world. As a developer, you anchor any virtual object to the real world, and that 3D model stays glued to its physical location. Anchors also get updated over time as the system learns new information. For example, if you anchor a Pikachu 2 ft away from you and then, as you walk towards your Pikachu, the system realises the actual distance is 2.1 ft, it will compensate for that. In real life we have a static coordinate system where every object has its own x, y, z coordinates. Anchors in devices like HoloLens override the rotation and position of the transform component.

How do anchors stick?

If we follow the documentation of Google ARCore, we see that anchors are attached to something called "trackables", which are feature points and planes in an image. Planes are essentially clustered feature points. You can have a more in-depth look at what Google says an ARCore anchor does by reading their really nice Fundamentals. But for our purposes, we first need to understand what exactly these feature points are.

Feature Points: through the eyes of computer vision

Feature points are distinctive markers in an image that an algorithm can use to track specific things in that image. Normally any distinctive pattern, such as a T-junction or a corner, is a good candidate. Alone, they are not useful enough to distinguish one from another or to reliably mark a place in an image, so the neighbouring pixels are also analyzed and saved as a descriptor.
Now a good anchor should have reliable feature points attached to it. The algorithm must be able to find the same physical spot from different viewpoints. It should be able to accommodate changes in
  • Camera Perspective
  • Rotation
  • Scale
  • Lighting
  • Motion blur and noise
Reliable Feature Points
This is an open research problem with multiple solutions on offer, each with its own set of problems. One of the most popular algorithms stems from this paper by David G. Lowe in IJCV and is called Scale Invariant Feature Transform (SIFT). Another follow-up work, which claims even better speed, was published at ECCV in 2006: Speeded Up Robust Features (SURF) by Bay et al. Both of them are patented at this point, though.

Microsoft HoloLens doesn't really need to do all this heavy lifting, since it can rely on an extensive amount of sensor data, in particular the depth data from its infrared sensors. ARCore and ARKit, however, don't enjoy those privileges and have to work with 2D images. Though we cannot say for sure which algorithm ARKit or ARCore actually uses, we can try to replicate the process with a patent-free algorithm to understand how it works.

D.I.Y Feature Detection and keypoint descriptors

To understand the process we will use an algorithm by Leutenegger et al. called BRISK. Detecting features is a multi-step process; a typical algorithm adheres to the following two steps:
  1. Keypoint Detection: Detecting keypoints can be as simple as just detecting corners, which essentially means evaluating the contrast between neighbouring pixels. A common way to do that in a large-scale image is to blur the image to smooth out pixel contrast variation and then do edge detection on it. The rationale for this is that you would normally want to detect a tree or a house as a whole to achieve reliable tracking, instead of every single twig or window. SIFT and SURF adhere to this approach. However, for real-time scenarios blurring adds a compute penalty which we don't want. In their paper "Machine Learning for High-Speed Corner Detection", Rosten and Drummond proposed a method called FAST, which analyzes the circular surroundings of each pixel p. If the neighbouring pixels' brightness is lower/higher than that of p by some threshold, and a certain number of connected pixels fall into this category, then the algorithm has found a corner (a toy sketch of this segment test follows the list below).
    Image credits: Rosten E., Drummond T. (2006) Machine Learning for High-Speed Corner Detection. Computer Vision – ECCV 2006.
    Now back to BRISK: out of the 16-pixel circle, 9 consecutive pixels must be brighter or darker than the central one. BRISK also uses down-sized images, allowing it to achieve better invariance to scale.
  2. Keypoint Descriptor: The primary property of the detected keypoints should be that they are unique. The algorithm should be able to find the same feature in a different image, under a different viewpoint and lighting. BRISK concatenates the results of brightness comparisons between different pixels surrounding the centre keypoint into a 512-bit string.
    Image credits: S. Leutenegger, M. Chli and R. Y. Siegwart, “BRISK: Binary Robust invariant scalable keypoints”, 2011 International Conference on Computer Vision
    As we can see from the sample figure, the blue dots form the concentric sampling circles and the red circles indicate the individually sampled areas. Based on these, the results of the brightness comparisons are determined. BRISK also ensures rotational invariance, which is calculated from the largest gradients between pairs of samples that lie far apart from each other.
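
To make that segment test concrete, here is a toy sketch of the 9-of-16 check (illustrative only: the radius-3 circle and thresholding follow the description above, while real implementations add scale pyramids and many optimizations):

def is_fast_corner(gray, y, x, t=30, n=9):
    """Toy FAST/BRISK-style segment test; gray is a 2-D array of pixel intensities."""
    # the 16 pixels on a circle of radius 3 around (y, x)
    circle = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
              (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]
    p = int(gray[y, x])
    # classify each circle pixel: +1 brighter, -1 darker, 0 similar to the centre
    states = [1 if int(gray[y + dy, x + dx]) >= p + t
              else -1 if int(gray[y + dy, x + dx]) <= p - t
              else 0
              for dy, dx in circle]
    # corner if n consecutive circle pixels (wrapping around) are all brighter or all darker
    for sign in (1, -1):
        run = 0
        for s in states + states:  # doubled list handles the wraparound
            run = run + 1 if s == sign else 0
            if run >= n:
                return True
    return False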

Test out the code

To test out the algorithms we will use the reference implementations available in OpenCV. We can use a pip install to get it with Python bindings.
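
The exact install command didn't survive in this syndicated copy; assuming the prebuilt wheels on PyPI, something like this should do it:

pip install opencv-python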

As test images, I used two pictures I had previously captured for my talks at All Things Open and the TechSpeaker meetup. One is an evening shot of the Louvre, which should have enough corners as well as overlapping edges, and the other is a picture of my friend in a beer garden, taken in portrait mode, to see how the algorithm fares against the blur already present in the image.

Original Image Link

Original Image Link
Visualizing the Feature Points
We use the small code snippet below to run BRISK on the two images above and visualize the feature points.
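
(The snippet itself was lost in the feed conversion; the following is a minimal reconstruction that matches the four steps described below. The file name is a placeholder.)

import cv2

# 1. load the jpeg and convert it to grayscale
img = cv2.imread("louvre.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 2. initialize BRISK with the paper-suggested 4 octaves and a raised threshold of 70
brisk = cv2.BRISK_create(thresh=70, octaves=4)

# 3. detect keypoints and compute their descriptors in a single pass
keypoints, descriptors = brisk.detectAndCompute(gray, None)

# 4. draw rich keypoints: circle diameter encodes size, the line encodes orientation
output = cv2.drawKeypoints(img, keypoints, None, (0, 255, 0),
                           cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("louvre_keypoints.jpg", output)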


What the code does:
  1. Loads the jpeg into a variable and converts it to grayscale
  2. Initializes BRISK and runs it. We use the paper-suggested 4 octaves and an increased threshold of 70. This ensures we get a small number of highly reliable keypoints; as we will see below, we still got a lot of them
  3. Uses detectAndCompute() to get two arrays from the algorithm, holding both the keypoints and their descriptors
  4. Draws the keypoints at their detected positions, indicating keypoint size and orientation through circle diameter and angle. The "DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS" flag does that.


As you can see, most of the keypoints are at the castle's edges and all other visible edges of the Louvre, and almost none on the floor. With the portrait-mode picture of my friend the spread is more diverse, but it also shows some false positives, like the reflection on the glass. Considering how BRISK works, this is normal.

Conclusion

In this first part of understanding what powers ARCore/ARKit, we saw how basic feature detection works behind the scenes and tried our hand at replicating it. These spatial anchors are vital for "gluing" virtual objects to the real world. This demo and hands-on should help you understand why it is a good idea to design your apps so that users place objects in areas where the device has a chance to create anchors.
Placing a lot of objects on a single non-textured, smooth plane may produce an inconsistent experience, since it's just too hard to detect keypoints on a plain surface (now you know why). As a result, the objects may sometimes drift away if tracking is not good enough.
If your app design encourages placing objects near the corners of a floor or table, the app has a much better chance of working reliably. Also, Google's guide on anchor placement is an excellent read.

Their short recommendation is:
  • Release spatial anchors if you don’t need them. Each anchor costs CPU cycles which you can save
  • Keep the object close to the anchor. 
In our next post, we will see how we can use this knowledge to do basic SLAM.

Update: The next post lives here: https://blog.rabimba.com/2018/10/arcore-and-arkit-SLAM.html

https://blog.rabimba.com/2018/10/arcore-and-arkit-what-is-under-hood.html


Cameron Kaiser: And now for something completely different: Make your radio station Power Mac your recording radio station Power Mac

Saturday, 27 October 2018, 09:31
So let's say you've turned your Power Mac into your household radio station. If you're like me and you use the Mac as a repeater to transmit a distant station into your house, you might want to record that audio. Now that Clear Channel iHeartMedia is locking down a lot of their podcasts to their app, for example, you can make an end run around their forcible registration crap by just recording content off the air yourself.

My machine uses a RadioSHARK 2 as its repeater source as previously mentioned in our last article on this topic, and if it were simply a matter of recording from that, you could just (duh) use the RadioSHARK's own software for that purpose as designed. But on my machine, I'm using the Shark with my own custom software to tune and play through USB audio; the RadioSHARK software isn't even involved.

The easiest way to skin this cat, especially on a stand-alone machine which isn't doing anything else, is just to AppleScript QuickTime and use that to do the recording. Unfortunately most of the how-tos you'll find to do this don't work for QuickTime 7 because the dictionary became wildly different (better, too, but different). Here's a quick AppleScript to make QuickTime 7 record 60 seconds from the currently set audio input:


tell application "QuickTime Player"
activate
close every window saving no

new audio recording
start (first document)
delay 60
stop (first document)

close (first document) saving no
quit
end tell
What this will do is close all open documents (just in case, to have a predictable state), then create a new audio recording, start it, record 60 seconds from the default audio input, stop it, and then save it to the Desktop as an audio-only QuickTime movie named something like Audio.mov or Audio 2.mov, etc. Despite the saving no at the end, the file actually is saved; in fact, it is saved at the point the recording is stopped, no matter what you do when you close it.

If you don't like restricting this to the "first document," you can also do something like set new_movie to id of front movie to get the ID of what's recording, and then use start movie id new_movie and so forth to reference it specifically. Modifying this for the general case without having to close windows and so on is left as an exercise for the reader.
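
For illustration, those id-based references could be stitched together like this (an untested sketch assembled only from the forms mentioned above):

tell application "QuickTime Player"
activate
new audio recording
set new_movie to id of front movie
start movie id new_movie
delay 60
stop movie id new_movie
close movie id new_movie saving no
end tell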

On my radio station Mac, I have a cron job that pipes this to osascript (the commandline AppleScript runtime) to record certain radio shows at certain times, and then copies the resulting file off somewhere for me to play later. There doesn't seem to be a way in this version of QuickTime to change the default filename, but since I don't use the system to record any other audio, I always know the file will be stored as ~/Desktop/Audio.mov and can just move that. Best of all, by using QuickTime to do this job while the USB audio streaming daemon is running, I can still listen simultaneously while it records if I like.
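
As an illustration, such a crontab entry might look like this (the schedule, script name and destination are hypothetical, and note that % must be escaped in crontab lines):

0 9 * * 1-5 osascript $HOME/record_show.scpt && mv $HOME/Desktop/Audio.mov $HOME/shows/show-$(date +\%F).mov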

Now, if you'll excuse me, I've got some queued up Handel on the Law to listen to, simultaneously the best and worst legal show on radio.

http://tenfourfox.blogspot.com/2018/10/and-now-for-something-completely.html


Firefox Nightly: These Weeks in Firefox: Issue 48

Saturday, 27 October 2018, 03:32

Mozilla Addons Blog: Firefox, Chrome and the Future of Trustworthy Extensions

Saturday, 27 October 2018, 00:00

Browser extensions are wonderful. Nearly every day I come across a new Firefox extension that customizes my browser in some creative way I’d never even considered. Some provide amusement for a short time, while others have become indispensable to my work and life. Extensions are a real-world manifestation of one of Mozilla’s core principles — that individuals must have the ability to shape the internet and their experiences on it.

Another of Mozilla’s core principles is that an individual’s security and privacy on the internet are fundamental and must not be treated as optional. We’ve made the decision to support extensions, but it is definitely a balancing act. Our users’ freedom to customize their browser – their “user agent” – and to personalize their experience on the web can also be exploited by malicious actors to compromise users’ security and privacy.

At Mozilla, we continually strive to honor both principles. It’s why Firefox extensions written to the WebExtensions API are limited in their abilities and have good oversight, including automatic and manual review. It’s also why we make sure users can understand exactly what permissions they’ve granted to those extensions and what parts of their browser they can access.

In short, Mozilla makes every effort to ensure that the extensions we offer are trustworthy.

So it was with great interest that I read Google’s recent Chromium Blog post entitled “Trustworthy Chrome Extensions, by default.” It outlines upcoming changes to Chrome’s extension architecture designed to make “extensions trustworthy by default.” I thought it would be interesting to explore each of the announced changes and compare them to what Mozilla has built into Firefox.

User Controls for Host Permissions

“Beginning in Chrome 70, users will have the choice to restrict extension host access to a custom list of sites, or to configure extensions to require a click to gain access to the current page.”

Being able to review and modify the sites that an extension has access to, especially for extensions that ask to “access your data for all websites,” is a worthy goal. Mozilla has discussed similar ideas, but the problem always comes down to presenting this in a clear, uncomplicated way to a majority of users.

Having played a bit with this feature in Chrome, the implementation definitely seems targeted at power users. Extensions that request access to all websites still get installed with that access, so the default behavior has not changed.

The click-to-script option is intriguing, although the UX is a bit awkward. It’s workable if you have a single extension, but becomes unwieldy to click and reload every site visited for every installed extension.

Admittedly, getting this interface right in an intuitive and easy-to-use manner is not straightforward and I applaud Google for taking a shot at it. Meanwhile Mozilla will continue to look for ways Firefox can provide more permission control to a majority of extension users.

Extension Review Process

“Going forward, extensions that request powerful permissions will be subject to additional compliance review.”

The post is vague about exactly what this means, but it likely means these extensions will be flagged for manual review. This brings Chrome up to the standard that Firefox set last year, which is great news for the web. More manual review means fewer malicious extensions.

“We’re also looking very closely at extensions that use remotely hosted code, with ongoing monitoring.”

Firefox expressly forbids remotely hosted code. Our feeling is that no amount of review can eliminate the risks introduced when developers can easily and undetectably change what code is loaded by extensions. Mozilla’s policy ensures that no unreviewed code is ever loaded into the browser, and enforced signatures prevents reviewed code from being altered after release.

Code Readability Requirements

“Starting today, Chrome Web Store will no longer allow extensions with obfuscated code…minification will still be allowed.”

In reality, minified and obfuscated code are not very useful in extensions. In both Chrome and Firefox, extensions load locally (not over the network) so there is almost no performance advantage to minification, and obfuscation can be overcome by a dedicated person with readily available tools and sufficient effort.

Nevertheless, Mozilla permits both obfuscated and minified extensions in our store. Critically, though, Mozilla requires all developers to submit original, non-obfuscated, non-minified code for review, along with instructions on how to reproduce (including any obfuscation or minification) the store version. This ensures that reviewers are able to review and understand every extension, and that the store version is unaltered from the reviewed version.

As you might expect, this takes a significant investment of time and energy for both Mozilla and developers. We believe it is worth it, though, to allow developers to secure their code, if desired, while simultaneously providing thoroughly reviewed extensions that maintain user security and privacy.

Required 2-Step Verification

As a whole, the web is moving in this direction and requiring it for developer accounts is a strong step towards protecting users. Mozilla recently added two-step authentication for Firefox Sync accounts, and two-step authentication for Firefox extension developers is on the roadmap for the fourth quarter of 2018. Like Google, we expect to have this feature enabled by 2019.

Manifest v3

“In 2019 we will introduce the next extensions manifest version…We intend to make the transition to manifest v3 as smooth as possible and we’re thinking carefully about the rollout plan.”

In 2015, Mozilla announced we were deprecating our extremely popular extension system in favor of WebExtensions, an API compatible with Chrome, as well as Edge and Opera. There were several reasons for this, but a large part of the motivation was standards — a fundamental belief that adopting the API of the market leader, in effect creating a de facto standard, was in the best interests of all users.

It was a controversial decision, but it was right for the web and it represents who Mozilla is and our core mission. Three years later, while there still isn’t an official standard for browser extensions, the web is a place where developers can quickly and easily create cross-browser extensions that run nearly unchanged on every major platform.

So I would like to publicly invite Google to collaborate with Mozilla and other browser vendors on manifest v3. It is an incredible opportunity to show that Chrome embodies Google’s philosophy to “focus on the user,” would reaffirm the Chrome team’s commitment to open standards and an interoperable web, and be a powerful statement that working together on the future of browser extensions is in the best interests of a healthy internet.

Conclusion

While all of the changes Google outlined are interesting, some of them could go a step further in protecting users online. Nevertheless, I’d like to say: bravo! The motivation behind these changes is definitely in the spirit of Mozilla’s mission and a gain for the open web. With Chrome’s market share, these initiatives will have a positive impact in protecting the security and privacy of millions of users around the world, and the web will be a better place for it.

A lot of work remains, though. Expect Mozilla to keep fighting for users on the web, launching new initiatives, like Firefox Monitor, to keep people safe, and advancing Firefox to be the best user agent you can have in your online journeys.

The post Firefox, Chrome and the Future of Trustworthy Extensions appeared first on Mozilla Add-ons Blog.

https://blog.mozilla.org/addons/2018/10/26/firefox-chrome-and-the-future-of-trustworthy-extensions/


The Mozilla Blog: Women Who Tech and Mozilla Announce Winners of Women Startup Challenge Europe

Friday, 26 October 2018, 15:53

Europe was at the center of a milestone for women in tech today as nonprofit Women Who Tech and tech giant Mozilla announced the winners of the Women Startup Challenge Europe. Women-led startup finalists from across Europe pitched their ventures before a prestigious panel of tech industry executives and investors on 25 October at Paris’s City Hall, co-hosted by the office of Mayor Anne Hidalgo.

“While it’s alarming to see the amount of funding for women-led startups compared to European companies as a whole go down from 14% to 11% between 2016 and 2018, the Women Startup Challenge is on a mission to close the funding gap once and for all. If the tech world wants to innovate and solve the world’s toughest problems and generate record returns, they will invest in diverse startups,” said Allyson Kapin, founder of Women Who Tech. “If investors don’t know where to look, our Women Startup Challenge program has a pipeline of over 2,300 women-led ventures who are ready to scale.”

Sampson Solutions from the UK won the grand prize, receiving $35,000 in funding via Women Who Tech to help scale their startup. The Audience Choice Award went to Inorevia from Paris, France. Mozilla awarded an additional $25,000 cash grant to Vitrue from the UK, selected by jury member Mitchell Baker, Chairwoman of Mozilla.

“Paris is determined to provide girls and women with the resources to occupy their rightful place in the society and in the tech industry. We were thrilled to co-host the Women Startup Challenge Europe and showcase 10 talented women-led startups who are making an impact in this world,” said Deputy Mayor Jean-Louis Missika.

  • Grand prize winner Sampson Solutions capitalizes on business opportunities precipitated by the Paris climate agreement. Founded by Colleen Becker, Sampson Solutions is creating bio-based construction materials from sustainable sources using a closed-loop, carbon neutral manufacturing process.
  • The Audience Choice Award was awarded to Inorevia. Inorevia’s work in developing  instruments used for bioassays has resulted in minimizing the cost, time and manipulation needed for next-generation bioassay and precision medicine.
  • Mozilla Prize Winner Vitrue Health is developing computer vision based tools that sit in the background of clinical assessments, autonomously measuring motor function metrics, freeing clinicians to focus on more complex patient interactions, and allowing them to detect and treat degradations in functional health. This improves quality of life and saves millions in healthcare costs.

“I’m honored to award the Mozilla prize for privacy, transparency and accountability to Vitrue Health,” said Mitchell. “Vitrue creates data about mobility capabilities, makes that data accessible and useful, and provides it to patients.  By providing patients with access to their data in a useful way, Vitrue offers us an example of how creating new data — even personal data — can be quite positive when it is handled well.”

The in-person jury included Mitchell Baker, Jean-Louis Missika, Deputy Mayor of Paris, Fatou Diagne, Partner and Cofounder at Bootstrap Europe, Julien Quintard, Managing Director of Techstars Paris, and Stéphanie Hospital, Cofounder and CEO of OneRagtime.

 

The post Women Who Tech and Mozilla Announce Winners of Women Startup Challenge Europe appeared first on The Mozilla Blog.

https://blog.mozilla.org/blog/2018/10/26/women-who-tech-and-mozilla-announce-winners-of-women-startup-challenge-europe/


Mozilla GFX: WebRender newsletter #27

Friday, 26 October 2018, 14:55

K Lars Lohn: Things Gateway - Running Web Thing API Applications in Python

Thursday, 25 October 2018, 22:11
The Web Thing API is a remarkable framework for creating applications that can control smart home devices.  Any language that can speak to a RESTful API or use Web Sockets can participate.

In the last few months I've been exploring the use of the Web Thing API using the Python language. After lots of trial and error, I've made some abstractions to simplify interacting with my smart home devices.  While I've been blogging about my explorations for quite a while, I've not made a concerted effort to make it easy for anyone else to follow in my footsteps.  I'm correcting that today, though I fear I may fail on the "easy" part.

This is a guide to help you accept my invitation to explore with me.  However, I need to be clear, this is a journey for programmers familiar with the Python programming language and Linux development practices.

Everyone has different needs and a different programming environment.  It is easiest to work with my Python module, pywot, if you have a second Linux machine on which to run the pywot scripts.  However, if you only have the Things Gateway Raspberry Pi itself, you can still participate; it'll just be a longer task to set up.

The instructions below will walk you through setting up the Things Gateway Raspberry Pi with the requirements to run pywot scripts.  If you already have a machine available with Python 3.6, you can run pywot scripts there instead of on the Raspberry Pi and save a lot of hassle by skipping all the way down to Step 5.  If you have the second machine and still want to run the scripts on the Raspberry Pi, I suggest enabling SSH on the Things Gateway and doing command line work with ssh.


One of the unfortunate things about the Raspbian Stretch Linux distribution, on which the Things Gateway is distributed, is its rather outdated version of Python 3.  Version 3.6 of Python was released in 2016, yet Raspbian Stretch still includes only 3.5.  There were a number of important changes to the language between those two versions, especially in the realm of asynchronous programming.  Interacting with the Web of Things is all about asynchronous programming.

When I started experimenting with the Web Thing API, I did so using my Ubuntu based Linux workstation that already had Python 3.6 installed as native.  I blithely used the newer asynchronous constructs in creating my External Rules Framework.  It was a nasty surprise when I moved my code over to the Raspberry Pi running the Things Gateway and found the code wouldn't run.


To get Python 3.6 running on Raspbian Stretch, it must be configured and compiled from source and then installed as an alternate Python.  Some folks blanched at that last sentence; I certainly did when I realized what I would have to do.  As it turns out, it isn't as onerous as I thought.  Searching the Web for a HOW-TO, I found this great page on github.  My use of these instructions went flawlessly: there were neither mysterious failures nor unexpected complications requiring research.

Here's exactly what I did to get a version of Python 3.6 running on Raspbian Stretch:

1) Connect a keyboard and monitor to your Raspberry Pi.  You'll get a login prompt.  The default user is "pi" and the password is "raspberry".  You really ought to change the default password to something more secure.  See Change Your Default Password for details. (Alternatively, you can enable ssh and login from another machine.  From your browser, go to http://gateway.local/settings, select "Developer" and then click the checkbox for "Enable SSH")

2) Once logged in, you want to make sure the RPi is fully updated and install some additional packages.  For me, this took about 10 minutes.
        
pi@gateway:~ $ sudo apt-get update
pi@gateway:~ $ sudo apt-get install build-essential tk-dev libncurses5-dev libncursesw5-dev libreadline6-dev libdb5.3-dev libgdbm-dev libsqlite3-dev libssl-dev libbz2-dev libexpat1-dev liblzma-dev zlib1g-dev


3) Now you've got to download Python 3.6 and build it. 
The last command "./configure" took nearly 4 minutes on my RPi:
        
pi@gateway:~ $ wget https://www.python.org/ftp/python/3.6.6/Python-3.6.6.tar.xz
pi@gateway:~ $ tar xf Python-3.6.6.tar.xz
pi@gateway:~ $ cd Python-3.6.6
pi@gateway:~/Python-3.6.6 $ ./configure

The next step is to compile it. This takes a lot of time. Mine ran for just under 30 minutes:
        
pi@gateway:~/Python-3.6.6 $ make

Now you've got to tag this version of Python as an alternative to the default Python. This command took just over 4 minutes on my RPi:
        
pi@gateway:~/Python-3.6.6 $ sudo make altinstall

Finally, a bit of clean up:
        
pi@gateway:~/Python-3.6.6 $ cd ..
pi@gateway:~ $ rm Python-3.6.6.tar.xz
pi@gateway:~ $ sudo rm -r Python-3.6.6
pi@gateway:~ $


4) Python 3.6 is now installed alongside the native 3.5 version.  Invoking the command python3 will give you the native 3.5 version, whereas python3.6 will give you our newly installed 3.6 version.  It would be nice to have version 3.6 be the default for our work.  You can do that with a virtual environment:
        
pi@gateway:~ $ python3.6 -m venv py36

This has given you a private version of Python3.6 that you can customize at will without interfering with any other Python applications that may be tied to specific versions.  Each time you want to run programs with this Python3.6 virtual environment, you need to activate it:
        
pi@gateway:~ $ . ~/py36/bin/activate
(py36) pi@gateway:~ $


It would be wise to edit your .bashrc or other initialization file to make an alias for that command. If you do not, you'll have to remember that somewhat cryptic invocation.
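
For example, a one-line sketch for ~/.bashrc (the alias name is arbitrary):

alias py36='. ~/py36/bin/activate'

After that, typing py36 at the prompt activates the environment.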

5) It's time to install my pywot system.  This is the Python source code that implements my experiments with the Things Gateway using the Web Thing API.

I've not uploaded pywot to PyPI.  I've chosen not to productize this code because it's what I call stream of consciousness programming.  The code is the result of me hacking and experimenting.  I'm exploring the problem space looking for interesting and pleasing implementations.  Maybe someday it'll be the basis for a product, but until then, no warranty is expressed or implied.

Even though pywot isn't on PyPI, you still get to use the pip command to install it.  You're going to get a full git clone of my public pywot repo.  Since it's pip, all the dependencies will automatically download and install into the virtual Python3.6 environment. On my Raspberry Pi, this command took more than five minutes to execute.

        
(py36) pi@gateway:~ $ mkdir dev
(py36) pi@gateway:~ $ cd dev
(py36) pi@gateway:~/dev $ git clone https://github.com/twobraids/pywot.git
(py36) pi@gateway:~/dev $ pip install -e pywot
(py36) pi@gateway:~/dev $ 

Did you get a message saying, "You should consider upgrading via the 'pip install --upgrade pip' command."?  I suggest that you do not do that.  It made a mess when I tried it and I'm not too inclined to figure out why.  Things will work fine if you skip that not-so-helpful suggestion.

6) Before you can run any of the pywot demos or write your own apps, you need to get the Things Gateway to grant you permission to talk to it.  Normally, one gets the authorization token by accessing the Gateway using a browser.  That could be difficult if the Raspberry Pi that runs your Gateway is your only computer other than a mobile device.  Manually typing a 235-character code from your phone screen would be vexing, to say the least.

Instead, run this script, supplying the URL of your instance of the Things Gateway along with your Things Gateway login and password.  Take note: if you're running this on the Raspberry Pi that runs the Gateway, the URL should have ":8080" appended to it.  If you are using some other machine, you do not need that.
        
(py36) pi@gateway:~/dev $ . ./pywot/demo/auth_key.sh
Enter URL: http://gateway.local:8080
Enter email: your.email@somewhere.com
Enter password: your_password
(py36) pi@gateway:~/dev $ ls sample_auth.ini
sample_auth.ini
(py36) pi@gateway:~/dev $


This command created a file called ~/dev/sample_auth.ini.  You will use the authorization key within that file in the configuration files used by the demo apps and the apps you create.

7) Finally, it's time to start playing with the demos.  Since everyone has a different set of devices, all of the demos in the ~/dev/pywot/demo and ~/dev/pywot/demo/rule_system will require some modification.  To me, the most interesting demos are those in the latter directory.

All of the demo files use configman to control configuration.  This gives each script command line switches and the ability to use environment variables and configuration files.  All configuration parameters can acquire their values using any of those three methods.  Conflicts are resolved with this hierarchy:
  1. command line switches, 
  2. configuration file, 
  3. environment variables, 
  4. program defaults.  
If you want more information about configman, see my 2014 PyOhio presentation.

--help  will always show you what configuration options are available
--admin.config= will specify a configuration file from which to load values
--admin.dump_conf= will create a configuration file for you that you can customize with an editor.

Start with the simplest rule example: ~/dev/pywot/demo/rule_system/example_if_rule.py.  Run it to produce a blank configuration file.
        
(py36) pi@gateway:~/dev $ cd ./pywot/demo/rule_system
(py36) pi@gateway:~/dev/pywot/demo/rule_system $ ./example_if_rule.py --admin.dump_conf=example.ini
(py36) pi@gateway:~/dev/pywot/demo/rule_system $ cat example.ini
# a URL for fetching all things data
#http_things_gateway_host=http://gateway.local

# the name of the timezone where the Things are ('US/Pacific, UTC, ...')
local_timezone=US/Pacific

# format string for logging
#logging_format=%(asctime)s %(filename)s:%(lineno)s %(levelname)s %(message)s

# log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
#logging_level=DEBUG

# the fully qualified name of the RuleSystem class
#rule_system_class=pywot.rules.RuleSystem

# the number of seconds to allow for fetching data
#seconds_for_timeout=10

# the name of the default timezone running on the system ('US/Pacific, UTC, ...')
system_timezone=UTC

# the api key to access the Things Gateway
#things_gateway_auth_key=THINGS GATEWAY AUTH KEY

(py36) pi@gateway:~/dev/pywot/demo/rule_system $

Open example.ini in a text editor of your choice.  Using the authorization key you generated in the file ~/dev/sample_auth.ini, uncomment and set the value of things_gateway_auth_key.  Set your local timezone on the local_timezone line.  Finally, set the value of system_timezone.  If you're using the Gateway's Raspberry Pi, you can leave it as UTC.  Otherwise, set it to whatever timezone your system is using.

Edit the source file ~/dev/pywot/demo/rule_system/example_if_rule.py and change the names of the devices to reflect the names of the devices in your smart home setup.

If you've not yet expired of old age after all these things you've had to do, it is finally time to actually run the example:
        
(py36) pi@gateway:~/dev/pywot/demo/rule_system $ ./example_if_rule.py --admin.config=example.ini

The script will echo its configuration to the log and then start listening to the Things Gateway.  As soon as your target light bulb is turned on, the other bulb(s) in the action will also turn on.  You can explore the rest of the demos using the same method of creating configuration files.


While not a polished product, my pywot Python module is useful for demonstrating the power of the Web Thing API.  Fortunately, the Web Thing API is an open standard that could be implemented by anyone.  The Python based Home Assistant (HASS) has plans to integrate it.  With a faithful implementation of the standard, pywot could be used as a scripting or rule engine for HASS, or any compliant platform.  Cross compatibility and letting everyone join in the fun is our goal.

http://www.twobraids.com/2018/10/things-gateway-running-web-things-api.html


Wladimir Palant: Should your next web-based login form avoid sending passwords in clear text?

Thursday, 25 October 2018, 13:38

TL;DR: The answer to the question in the title is most likely “no.” While the OPAQUE protocol is a fascinating approach to authentication, for web applications it doesn’t provide any security advantages.

I read an interesting post by Matthew Green where he presents ways to authenticate users by password without actually transmitting the password to the server, in particular a protocol called OPAQUE. It works roughly like this:

The server has the user’s salt and public key, the client knows the password. Through application of some highly advanced magic, a private key materializes in the client, matching the public key known to the server. This only works if the password known to the client is correct, yet the client doesn’t learn the salt and the server doesn’t learn the password in the process. From that point on, the client can sign any requests sent to the server, and the server can verify them as belonging to this user.

The fact that you can do it like this is amazing. Yet the blog post seems to suggest that websites should adopt this approach. I wrote a comment mentioning this being pointless. The resulting discussion with another commenter made obvious that the fundamental issues of browser-based cryptography that I first saw mentioned in Javascript Cryptography Considered Harmful (2011) still aren’t widely known.

What are we protecting against?

Before we can have a meaningful discussion on the advantages of an approach we need to decide: what are the scenarios we are protecting against? In 2018, there is no excuse for avoiding HTTPS, so we can assume that any communication between the client and the server is encrypted. Even if the server receives the password in clear text, a responsible implementation will always hash the password before storing it in the database. So the potential attacks seem to be:

  • The server is compromised, either because of being hacked or as an inside job. So the attackers already have all the data, but they want to have your password as well. The password is valuable to them either because of password reuse (they could take over accounts on other services) or because parts of the data are encrypted on the server and the password is required for decryption. So they intercept the password as it comes in, before it is hashed.
  • Somebody succeeded with a Man-in-the-Middle attack on your connection, despite HTTPS. So they can inspect the data being sent over the connection and recover your password in particular. With that password they can log into your account themselves.
  • A rather unlikely scenario: a state-level actor recorded the (encrypted) contents of your HTTPS connection and successfully decrypted them after a lengthy period of time. They can now use your password to log into your account.

Does OPAQUE help in these scenarios?

With OPAQUE, the password is never sent to the server, so it cannot be intercepted in transit. However, with web applications the server controls both the server and the client side. So all it has to do is give you a slightly modified version of its JavaScript code on the login page. That code can then intercept the password as you enter it into the login form. The user cannot notice this manipulation; with JavaScript code often going into megabytes these days, inspecting it every time just isn’t possible. Monitoring network traffic won’t help either if the data being sent is obfuscated.

This is no different with the Man-in-the-Middle attack, somebody who managed to break up your HTTPS connection will also be able to modify JavaScript code in transit. So OPAQUE only helps with the scenario where the attacker has to be completely passive, typically because they only manage to decrypt the data after the fact. With this scenario being extremely uncommon compared to compromised servers, it doesn’t justify the significant added complexity of the OPAQUE protocol.

What about leaked databases?

Very often however, the attackers will not compromise a server completely but “merely” extract its database, e.g. via an SQL injection vulnerability. The passwords in this database will hopefully be hashed, so the attackers will run an offline brute-force attack to extract the original passwords: hash various guesses and test whether the resulting hash matches the one in the database. Whether they succeed depends largely on the hashing function used. While storing passwords hashed with a fast hashing function like SHA-1 is only marginally better than storing passwords as clear text, a hashing function that is hard to speed up such as scrypt or argon2 with well-chosen parameters will be far more resilient.
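
As a toy sketch of that offline attack (purely illustrative: a fast hash, a known salt and a candidate list stand in for real cracking tools):

import hashlib

def crack(leaked_hash, salt, candidates):
    # hash each guess exactly the way the server does and compare;
    # a slow function like scrypt or argon2 would make every iteration
    # of this loop vastly more expensive for the attacker
    for guess in candidates:
        if hashlib.sha1(salt + guess.encode()).hexdigest() == leaked_hash:
            return guess
    return None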

It is a bit surprising at first, but using OPAQUE doesn’t really change anything here. Even though the database no longer stores the password (not even as a hash), it still contains all the data that attackers would need to test their password guesses. If you think about it, there is nothing special about the client. It doesn’t know any salts or other secrets, it only knows the password. So an attacker could just do everything that the client does to test a password guess. And the only factor slowing them down is again the hashing function, only that with OPAQUE this hashing function is applied on the client side.

In fact, it seems that OPAQUE might make things worse in this scenario. The server’s capability for hashing is well-known. It is relatively easy to tell what parameters will be doable, and it is also possible to throw more hardware at the problem if necessary. But what if hashing needs to be done on the client? We don’t know what hardware the client is running, so we have to assume the worst. And the worst is currently a low-end smartphone with a browser that doesn’t optimize JavaScript well. So chances are that a website deploying OPAQUE will choose comparatively weak parameters for the hashing function rather than risk upsetting some users with extensive delays.

Can’t OPAQUE be built into the browser?

Adding OPAQUE support to browsers would address part of the performance concerns. Then again, browsers that would add this feature should have highly-optimized JavaScript engines and the Web Crypto API already. But the fundamental issue is passwords being entered into an untrusted user interface, so the browser would also have to take over querying for the password, probably the way it is done for HTTP authentication (everybody loves those prompts, right?). A compromised web server could still show a regular login form instead, but maybe the users will suspect something then? Yeah, not likely.

But wait, there is another issue. The attacker in the Man-in-the-Middle scenario doesn’t really need your password, they merely need a way to access your account even after they got out of your connection. The OPAQUE protocol results in a private key on the client side, and having that private key is almost as good as having the password — it means permanent access to the account. So the browser’s OPAQUE implementation doesn’t merely have to handle the password entry, it also needs to keep the private key for itself and sign requests made by the web application to the server. Doable? Yes, should be. Likely to get implemented and adopted by websites? Rather questionable.

https://palant.de/2018/10/25/should-your-next-web-based-login-form-avoid-sending-passwords-in-clear-text


