Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Source: http://planet.mozilla.org/.
This feed is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact information page.


Christian Heilmann: People on the Edge: Jonathan Sampson

Friday, December 4, 2015, 13:25

In a new series of posts, I want to introduce the world to people I work with. People who work on the Microsoft Edge browser and related technologies. The faces and voices behind the product.

Jonathan Sampson

Today I’m starting with an interview with Jonathan Sampson (@jonathansampson). Jonathan spends a lot of time on the web, on channels like Stack Overflow and Reddit, helping people with their questions about Microsoft Edge and browser interoperability. He is also always on the hunt for interesting technology implementations that can be used to test the limits of the browser.

The video is on YouTube.

There’s also an audio version on Archive.org.

Amongst other things, you’ll hear about:

  • How to keep an ear to the ground trying to predict what developers need and want
  • How the work of Ana Tudor is not only pretty but also functions as a great CSS engine benchmark
  • How topics like accessibility are sadly enough not hot topics and yet need to be high on our radar
  • How IE burned people in the past and how that makes it hard for the Edge team to get listened to
  • How Edge is much more like Webkit than it is like IE
  • How to collect and publish information that answers the how and why rather than just giving a quick answer
  • How to deal with trolls and not to take product criticism as a personal attack
  • How moving out of the office makes you learn a lot about what real people use to surf the web

Thanks must go to Jonathan for answering my questions and Seth Juarez and Golnaz Alibeigi of Channel 9 for filming and producing the series.

Transcript (of sorts, created from automated and cleaned captions)

[Chris] Hello, I’m here with Jonathan Sampson. You’re one of those people
that goes on the web a lot and has conversations with people on very active channels like Reddit and Stack Overflow. What do you do there?

[Jonathan] I don’t typically go on the web a lot – I just really don’t get off the web. I stay on it pretty much like everybody else all the time. I’m super active on Stackoverflow and Twitter. And Reddit is more of an addiction – I guess – than a job obligation.

But, no, that’s it basically – trying to get a feel for what people are expecting of a browser today. What are developers trying to do that they’re finding very difficult in a browser?

You know, we’ve been making a lot of efforts over the past few years; we kind of want to get a sense for how they’re registering in the community. Some developers may have had difficulty achieving a high degree of interop in the past and of course we want to understand if that is getting better. Is that trend line coming downward? The amount of effort to make something work cross browser. Getting a sense for just what people are playing with. You know we’ve got people out there like Ana Tudor, who is finding the most brilliant ways to break Edge and other browsers by doing the most complicated CSS stuff in the world.

Absolutely astounding! And we want to see it first and start to try and pave the way to make stuff like that work when it will start to become more mainstream. And so yeah, it is a lot of just moving about the community and seeing what people are doing.

[Chris]: So how do we break that concept of… I find the trend is in the community to impress each other with cool stuff more than actually building things on the web. So, the message of “look I just fixed it to work across browsers” is massively downvoted, whilst, “ok here is a 3d graph with 15 waterfalls and pixelated things in the background and a video playing at the same time” is celebrated.

You just mentioned with Ana, that she’s doing really out-there experimentation. How do we get developers more excited about building things for end users again? Cause it seems to me we are a very small crowd that talks to each other rather than solving problems by now.

[Jonathan]: Yeah we see that in accessibility. I mean that’s not a very sexy topic but it’s just the type of thing where just a little bit of effort goes a long way.

You know, in the office yesterday we had a discussion about hearing impaired users and how they perceive and interact with media. Visually impaired users. It’s like no one has really given these individuals a voice in this whole discussion. Just the other day someone – I was listening to a podcast – was asking “can blind people even go online?”. Of course they can! We have some really amazing screen readers. How do we make ground on areas that just aren’t super attractive because they’re not super flashy? You know we’re not talking about really radical canvas displays and stuff. In some of these things it’s difficult to win somebody over because it’s just not appealing and so we just have to do the work regardless.

Right now, in fact, I just came here from another meeting where we’re looking at API support in various browsers. As in, what is the web using versus what is uniquely supported in individual browsers as proprietary stuff? Maybe experimental implementations.

And, if you look at the graph, the very front of it – all the most popular stuff – is supported everywhere. All the major stuff is supported. There’s a couple of things that are only supported in other browsers and some people make really fun demos with those, but the fact is that those aren’t ready for the web today. No one is ready to build a business or stake their profit margins or anything on something like that. They don’t have a real need. It’s fun, it’s really cool, maybe it will be ready tomorrow. It is important to have constant communication with people, you know, trying to share data with them.

I think just having that dialogue goes a long way. Those are difficult discussions to have. Those are exciting things, that just qualitatively don’t add up today for us to spend a whole lot of effort on.

[Chris]: What is the most annoying truism that you see getting repeated and that you have to tell people continuously: “no, it’s actually not that way”?

[Jonathan]: I would say, that “IE is terrible”. But it is kind of becoming a secondary thing, although Edge inherited that from IE and a lot of people just carry it over. They just assume that “whoa, Edge is just a rose by a different name” or something. “It is just IE - just with a different name”.

And it’s like “No! You know, we’ve been doing amazing work here. Like we’ve ripped out tens of thousands – hundreds of thousands – of lines of code and put in even more code! Edge has become more WebKit than IE ever was. We ripped out a lot of proprietary ms-prefixed stuff. We added a whole bunch of WebKit prefixes for interop purposes. We have today the best ECMAScript 6 support of any browser and, I think, still of any transpiler out there, which is really neat. Transpilers have a lot of liberty and a lot of great flexibility.”

When you actually show people this stuff and tell them “hey, if you want to really brush up on ES6, the ideal environment right now is Chakra. It has the most vibrant support right now”.

The moment you show people that, it blows their minds and they think this is the most amazing thing in the world. “I had no idea” – and all their presuppositions about IE and Edge and Microsoft just kind of get thrown out the window. IE and Edge and Microsoft have been making inroads to really advance ourselves and our craft and everything, and so we’ve actually really been pushing the boundaries.

We’ve all been burned over the years; we’ve all spent hours and hours and hours on IE6, 7 and 8 and 9 a little bit, too, trying to get things to work right. We realized the gravity of that pain and agony that people feel and it’s burned people to the point where they just don’t really want to take another look. That’s why it is important to find those opportunities to say “hey, have a look today – Edge provides a lot of opportunities – it is a fundamentally different browser”. Which it is, and people realise that “Yeah, all the broken things I remember are no longer true. They’re still true about the old, remnant versions of IE that sometimes pop up on the web here and there, but they don’t really apply to Edge in this day and age”.

[Chris]: I think it’s a general consensus that every question you see on Stack Overflow and elsewhere gets answered with the “how” immediately and not the “why”. Those are the answers that get upvoted. As a developer I don’t think it helps you to know the how without understanding what it does. That’s why we have all these outdated libraries on the web. That’s why people keep copying and pasting really terrible code. How can we break that spell, how can we make it easier for people to start listening more and also dealing with an answer that is not a straightforward answer but one that makes you think more, cause that’s what you learn from?

[Jonathan]: One of the cool things about web development in general – like the thing that really allowed me to get into this industry – is that it’s saturated by amateurs. This is a unique thing about our industry. You don’t have to go to college for eight years to be a successful web developer. One of the downsides of that is when you go out to find information, sometimes the quality of that information is not at an academic level. On Stack Overflow you get a lot of questions answered very quickly, but again, as you point out, there is that qualitative difference, that substrate of information that you just never get – like, why is it this way? So there are individuals in the community, like yourself and many others, who have a rich amount of experience and background and can offer that insight. One of the things we’re trying to do is build more resources to get that information out to people. We had a couple of phrases a while back – flavoring the well and welling the flavor and stuff. When you go online it’s very easy to find mediocre information, oftentimes very outdated information; it’s kind of shallow. It’s like, how do we go out and fix that information, how do we bump it up to be more accurate and modernized? That’s really difficult. So the other option is: how do we create new watering holes for people to come to that have really great information in them, a rich depth of insight? So we started over the years building things like status.modern.ie and UserVoice – places in which we can communicate with people. MSDN has got a lot of bumps over recent time, getting higher quality articles. The Edge blog and stuff allows people to get some deeper insight into why we do things a certain way, why certain standards develop quicker than others, and stuff like that. So I think just having more of a conversation with the community is going to help as far as progress is concerned in that area.

[Chris]: Seeing how often you go online and how active you are, you must have grown quite a thick skin. How do you think we can work with the problem of trolls? There’s a lot of new developers that get either shot down or get some really stupid advice from somebody “for the lulz”. It’s great to see somebody do something stupid. How can we break that? Do you think it really is that rampant? Do you see it going downhill? Or do you see it as not that much of a problem?

[Jonathan]: I kind of knew when I came to work at Microsoft that that would be a thing. Like, who in their right mind would want to go work on IE, right? But, honestly, I rarely ever get upset with the trolls. I don’t think they’re intentionally opting in to troll status. Sometimes it’s just lacking education on their end and not an intentional insult or anything. Web browsers are really big, complicated pieces of software. As web developers, we’re used to authoring projects that are hundreds, thousands of lines of code and we understand what struggle comes in managing projects of that size. We assume that the same thing is true for browser developers. We think “well, you know, I can build a website, I’ve contributed to jQuery core, you know, I’ve also helped out with Modernizr, I know how hard software development is. Why can’t you guys move as quickly as me and my friends do?”. When you start to look at the browsers out there you see just the immense amount of code in there. The size of the teams, the amount of scope, the knowledge people need to have to make this stuff work. And then you start to get an appreciation for that. You realize why things work the way they work in this industry. You have your various standards organizations representing different companies – they all have different ideas how to move the web forward. It’s a complicated thing and, so you know, often when I encounter someone who would maybe be classified in the taxonomy of a troll, I just try to share a little bit of knowledge with them, you know. This is a really, really hard thing we’re doing. If you look at the history of the web and browsers in general, these are really difficult problems. We could gloss over the hard stuff like accessibility. We could say well, you know, that’s such a small percentage of the population and we’re just not gonna worry about that. We’re going to deal with the vast majority of people.

[Chris]: This is also the fastest growing part of the community.

[Jonathan]: That’s just not how you do quality work. That’s not how you serve humankind. It’s tough – things move a little bit slowly sometimes, and we’re trying to identify ways in which they can move faster. Things like web components. How, within this shell, within this process that we’ve kind of adapted and adopted over the years, can we identify ways in which we can move things a little bit quicker? So that we can satisfy the people who are wanting the cadence that they experience in their personal development to be reflected in the cadence of our development? We’re finding ways: JavaScript allowed us that liberty, there is CSS Houdini – a task force that is identifying ways in which maybe we can cause CSS development to move quicker in the future. We’re trying to identify solutions to these, which I don’t think many of the trolls realise. They think we’re just sitting here on our hands. I think having a sincere, sympathetic conversation with them is important. Saying we understand what it feels like from your side, we genuinely care. We’re trying to make this work a lot better and we’re hiring, so if you know a way to help make it go…

[Chris]: OK, let’s stop with the advertising for a second. I know from personal experience that we have a very creative Brazilian Portuguese community. Do you also live in other environments than just the English-speaking, American developer market? Do you see differences in worldwide markets and is there something we can learn here in America from them?

[Jonathan]: This is actually a really interesting thing: we lived in Brazil for a little while. You know, there is the whole idea that on the web today we all want to develop the new and shiny stuff and we expect every browser to just support it. That assumes that everybody with a machine is going to have the latest browser. For instance in Brazil, the apartment building I was working in had a business centre downstairs. You go down there and on these machines they have Ubuntu, which was – I think – maybe a four-year-old version of Ubuntu. They had a Firefox 4 or something on there and I’m like oh my goodness, I feel so bad for everyone who has to come down and use this. I’m going to update Ubuntu and update Firefox for them! I couldn’t do it because I didn’t have the credentials and I asked the manager if they had the credentials. I would be happy to get all these machines up to date. Someone set them up a long time ago and they don’t have any information. But they work, so you can check your email. We moved back to the States and shortly after that my second child was born a few months premature, and so we spent a few weeks in the hospital. At the hospital: again old machinery, old versions of Firefox. It’s hard to look at the world and learn that it is not as we perceive it from within our fancy offices and new laptops. The real world out there is vastly more diverse and there’s a broad spectrum of browsers and operating systems and hardware and software capabilities. Oftentimes we just ignore that – we assume, you know, that the solution I have needs everyone running Chrome 46 or Edge. We just assume that’s the way things are and we don’t like the idea of it being any other way. We have a cognitive dissonance, I guess. When you look at places like Brazil and other countries, quite often they’re not running the most recent hardware or software, and as a result they’re not always getting this type of stuff we are building, and we’re blocking off whole demographics of people who otherwise would love to get our content, that would love to use our services. Just by assuming that in our small sphere here what we experience is appropriate to extrapolate across the whole world, we really do ourselves and everybody else a disservice, unfortunately.

[Chris]: There’s a lot of good information in this. Developers have to work in an environment like that some time, to learn that they have to optimise for the unknown and optimise for terrible environments.

So, it’s all about communication, all about a conversation, and it’s good that we have Jonathan out there as our firewall on Stack Overflow and other environments. So if you meet him, be nice to him – he has some good information for you.

https://www.christianheilmann.com/2015/12/04/people-on-the-edge-jonathan-sampson/


Christian Heilmann: A quick reminder on how and why to use labels in forms to make them more accessible

Friday, December 4, 2015, 03:20

Yesterday the excellent Alice Boxhall of the Google Chrome team pointed out an annoying bug to me:

Seen in the wild on https://wpdev.uservoice.com/forums/257854-microsoft-edge-developer … I agree @codepo8

It seems the UserVoice page of Microsoft Edge has a checkbox that is inaccessible to screen reader users. The reason is a wrong implementation of a label. So, here is a quick reminder of how to use labels in plain HTML (without any ARIA extras) and why that’s a good idea.

Labels are there to connect an explanatory text with a form element. For a sighted user this can seem redundant – after all, the text is right next to the element. But once you use a screen reader you see that by not using labels, you make it impossible for people to use your forms.

The markup used in this demo is the following:

<div>
  <input type="checkbox" name="wombat">
  Yes, I want to buy a wombat!
</div>
<div>
  <label>
    <input type="checkbox" name="quokka">
    Yes, I want to buy a quokka!
  </label>
</div>
<div>
  <input type="checkbox" id="yayforwallaby"
         name="wallaby">
  <label for="yayforwallaby">
    Yes, I want to buy a wallaby!
  </label>
</div>

On my Mac, using the built-in VoiceOver and Firefox, it is easy to test. Simply turn on VoiceOver with Command+Fn+F5 and navigate using your keyboard by tabbing into the document. Here’s what that looks and sounds like:

In addition to the benefits for screen readers, labels also make it easier for mouse or touch users to interact with your content. The following GIF shows that the options that have labels let the user click either the tiny checkbox or the much larger text to check and uncheck it.

showing how checkboxes with labels make the text clickable

Using labels isn’t hard. The simplest way is to nest the form element and the text inside the label:

  <label>
    <input type="checkbox" name="quokka">
    Yes, I want to buy a quokka!
  </label>

If you can’t do that, you need to connect the label and the form element using a for/id pairing. Remember, a form element’s id has no meaning to the form. The data sent to the server is what’s defined in the name attribute. The id is only good for scripting, CSS and as a fragment identifier/anchor. The following example shows how to connect the form element and the label:

<input type="checkbox" id="yayforwallaby"
       name="wallaby">
<label for="yayforwallaby">
  Yes, I want to buy a wallaby!
</label>

This is what’s broken in the UserVoice example: there is a for attribute on the label, but the form element has no id. Hence there is no connection between the two and the label becomes redundant.

Quick aside: if you wanted to write a test for that, remember that the for attribute in HTML elements can not be accessed as element.for as for is a reserved word. It needs to be htmlFor.

As a way to catch the mistake in the sign-up form, you could do the following:

// find every label whose for attribute does not point at an existing id
var labels = document.querySelectorAll('label');
for (var i = 0; i < labels.length; i++) {
  if (labels[i].htmlFor) {
    if (!document.getElementById(labels[i].htmlFor)) {
      // highlight the orphaned label
      labels[i].style.background = 'firebrick';
    }
  }
}
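
A complementary check – a sketch along the same lines, not part of the original snippet – would go the other way and flag form fields that have no label at all, whether nested or referenced:

var fields = document.querySelectorAll('input, select, textarea');
for (var j = 0; j < fields.length; j++) {
  var field = fields[j];
  // either the field is nested directly inside a label…
  var wrapped = field.parentNode.nodeName.toLowerCase() === 'label';
  // …or a label points at it via its for attribute
  var referenced = field.id &&
      document.querySelector('label[for="' + field.id + '"]');
  if (!wrapped && !referenced) {
    field.style.background = 'firebrick';
  }
}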

Now, go forth and label the web!


https://www.christianheilmann.com/2015/12/04/a-quick-reminder-on-how-and-why-to-use-labels-in-forms-to-make-them-more-accessible/


John O'Duinn: Increase growth and revenue by becoming distributed

Thursday, December 3, 2015, 20:10

In my recent blog post about the one-time-setup and recurring-costs of an office, I mostly focused on financial costs, human distraction costs, and the cost of increased barriers to hiring. This post talks about another important scenario: when your physical office limits potential company revenue.

Pivigo.com is a company in London, England, which connects companies that need help with data science problems with Ph.D. data science graduates who are leaving academia looking for real-world problems to solve. This 2.5-year-old company was founded by Dr Kim Nilsson (ex-astronomer and MBA!) and as of today employs 4 people.

For Pivigo to be viable, Kim needed:

  • a pipeline of companies looking for help with their real-world Data Science problems. No shortage there.
  • a pipeline of Ph.D graduates looking for their “first non-academic” project. No shortage there.
  • a carefully curated staff of people who understand both the academic and commercial worlds – essential to help keep things on track and make sure the event is a success for everyone. Kim has been quietly, diligently working on growing a world-class team at Pivigo for years. Tricky, but Pivigo’s hiring has been going great – although they are always interested to meet outstanding people!
  • a physical place where everyone could meet and work together.

Physical space turned out to be the biggest barrier to Pivigo’s growth and was also the root cause of some organizational problems:

1) Venue: The venue Pivigo had guaranteed access to could only be used once a year, so they could only do one “event” each year. Alternate venues they could find were unworkable because of financial costs or the commute logistics in London. Given they could only have one course per year, it was in Pivigo’s interest to have these classes be as large as possible. However, because of the importance of creating a strong network bond between the participants, the physical size of the venue, and limits on skilled human staffing, the biggest they could do was ~80 people in this once-a-year event. These limits on the once-a-year event put a financial cap on the company’s potential revenue.

2) Staffing: These big once-a-year events were super-disruptive to all the staff at Pivigo. Between the courses, there was administrative work to do – planning materials, interviewing candidates and companies, arranging venue and hotel logistics, etc. However, the “peak load time” during the course clearly outscaled the “low load time” in between courses. Hiring for the “peak load times” of the courses meant that there would be a lot of expensive “low load / idle time” between each peak. The situation is very similar to building capacity in fixed-cost physical data centers compared to AWS variable-by-demand costs. To add to the complexity, finding and hiring people with these very specialised skills took a long time, so it was simply not practical to “hire by the hour/day” a la gig-economy. Smoothing out the peaks-and-troughs of human workload was essential for Pivigo’s growth and sustainability. If they could hold smaller, more frequent courses, they could reduce the “peak load” spike. Also, changing to a faster cadence of smaller spikes would make Pivigo operationally more sustainable and scalable.

3) Revenue: Relying on one big event each year gives a big spike of revenue, which the company then slowly spends out over the year – until the next big event. Each and every event has to be successful in order for the company to survive the next year. This makes each event a high-risk event for the company. This financial unpredictability limits the company’s long-term planning and hiring. Changing to smaller, more frequent courses makes Pivigo’s revenue stream healthier, safer and more predictable.

4) Pipeline of applicants: Interested candidates and companies had a once-a-year chance to apply. If they missed the deadline or were turned away because the class was already full, they had to wait an entire year for the next course. Obviously, many did not wait – waiting a year is simply too long. Holding these courses more frequently makes it more likely that candidates – and companies – will wait until the next course. Finding a way to increase the cadence of these courses would improve the pipeline for Pivigo.

If Pivigo could find a way to hold these courses more frequently, instead of just once-a-year, then they could accelerate growth of their company. To do this, they had to fix the bottleneck caused by the physical location.

Three weeks ago, Pivigo completed their first ever fully-distributed “virtual” course. It used no physical venue. And it was a resounding success. Just like the typical “in-person” events, teams formed and bonded, good work was done, and complex problems were solved. Pivigo staff, course participants and project sponsors were all happy. Just like usual.

This map shows everyone’s physical location.
Map of locations

To make this first-ever fully-distributed “virtual” S2DS event successful, we focused on some ideas outlined in my previous presentations here, here and also in my book. Some things I specifically thought were worth highlighting:

1) Keep tools simple. Helping people focus on the job at hand required removing unnecessary and complex tools. The simpler the tools, the better. We used Zoom, Slack and email. After all, people were here to work together on a real-world data science problem, not to learn how to use complex tools.

2) Very crisply organized human processes. None of these people were seasoned “remoties”, so this was all new to all of them. They first met as part of this course. They had to learn how to work together as a team, professionally and as social humans, at the same time as they worked on their project which had to be completed by a fixed deadline.

3) As this was Pivigo’s first time doing this, Kim made a smart decision to explicitly limit size, so there were only 15 people. This gave Kim, Jason and the rest of the staff extra time and space to carefully check in with each of the remote participants and gave everyone the best chance of success. Future events will experiment with cohort sizes.

4) Each participant said that they only applied because they could attend “remotely” – even though *none* of them had prior experience working remotely like this. Pivigo were able to interview and recruit participants who would normally not even apply for the London-based event. The most common reason I heard for not being able to travel to London was disruption to parents with new children – successful applicants worked from their homes on real-world problems, while still being able to take care of their family. The cost of travel to/from England, and the cost of living in London, were also mentioned. The need and demand was clearly there. As was their willingness to try something they’d never done before.

5) I note the diversity impact of this new approach. This cohort had a ratio of 26% female / 74% male, while prior in-person S2DS classes typically had a ratio of 35% female / 65% male. This is only one data point, so we’ll watch this with the next S2DS event, and see if there is a trend.

The Virtual S2DS programme was a success. The project outcomes were of similar quality to the campus based events, the participants felt they got a great experience that will help their careers going forward, and, most importantly, the group bonded more strongly than was expected. In a post-event survey, the participants said they would reach out to each other in the future if they had a question or a problem that the network could help with. Interestingly, several of them also expressed an interest in continuing remote working, something they had not considered before.

For Kim and the Pivigo team, this newly-learned ability to hold fully distributed events is game-changing stuff. Physical space is no longer a limiting factor. Now, they can hold more frequent, smaller courses – smoothing down the peaks and troughs of “load”, while also improving the pipelines by making their schedule more timely for applicants. Pivigo are investigating if they could even arrange to run some of these courses concurrently, which would be even more exciting – stay tuned.

Congratulations to Kim and the rest of Pivigo staff. And a big thank you to Adrienne, Aldo, Christine, Prakash, Nina, Lauren, Gordon, Lee, Christien, Rogelio, Sergio, Tiziana, Fabio and Mark for quietly helping prove that this approach worked just fine.

John & Kim.
=====
ps: Pivigo are now accepting applications for their next “virtual” event and their next in-person event. If you are an M.Sc./Ph.D. graduate, with a good internet connection, and looking for your first real-world project, apply here: http://www.s2ds.org/. Companies looking for help with data science problems can get in touch with Kim and the rest of the Pivigo team at info@s2ds.org.

http://oduinn.com/blog/2015/12/03/increase-growth-and-revenue-by-becoming-distributed/


Air Mozilla: Optimizing for Uncertainty

Thursday, December 3, 2015, 20:00

Optimizing for Uncertainty The web is increasingly complex and dynamic. In the natural realm, 'complex adaptive systems' allow for flux and change in tumultuous environments. Our December speaker...

https://air.mozilla.org/optimizing-for-uncertainty/


Air Mozilla: Web QA Weekly Meeting, 03 Dec 2015

Thursday, December 3, 2015, 20:00

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

https://air.mozilla.org/web-qa-weekly-meeting-20151203/


Air Mozilla: Reps weekly, 03 Dec 2015

Thursday, December 3, 2015, 19:00

Reps weekly This is a weekly call with some of the Reps council members to discuss all matters Reps, share best practices and invite Reps to share...

https://air.mozilla.org/reps-weekly-20151203/


Mozilla Reps Community: Rep of the Month – November 2015

Thursday, December 3, 2015, 15:00

Please join us in congratulating Dorothee Danedjo Fouba as Rep of the Month for November!

Dorothee has shown amazing leadership in Cameroon – growing that community from zero to over fifty in just one year. By organizing a series of events and empowering emerging leaders, Dorothee has shown great talent for bringing people together to learn and understand the potential of Mozilla to improve their world. As a Tech Women alumna, Dorothee also speaks to and inspires other women technical leaders in their goals of building Mozilla communities across the world.

Don’t forget to congratulate her on Discourse!

https://blog.mozilla.org/mozillareps/2015/12/03/rep-of-the-month-november-2015/


Henrik Skupin: Results of the Firefox Automation Survey

Thursday, December 3, 2015, 14:59

On November 23rd I blogged about the active survey covering the information flow inside our Firefox Automation team. This survey was open until November 30th and I thank every one of the participants who took the time to fill it out. In the following you can find the results:

Most of the contributors who are following our activities have been with Mozilla for the last 3 years, whereby half of them joined less than a year ago. There is also a 1:1 split between volunteers and paid staff members. This is most likely because of the low number of responses, but anyway, increasing the number of volunteers is certainly something we want to follow up on in the next months.

The question about which communication channel is preferred to get the latest news got answered with 78% for the automation mailing list. I feel that this is a strange result given that we haven’t really used that list for active discussions or similar in the past months. But that means we should put more focus on the list. Beside that, 55% follow our activities on Bugzilla via component watchers. I would assume that those people are mostly our paid staff who kinda have to follow each other’s work regarding reviews, needinfo requests, and process updates. 44% read our blog posts on the Mozilla A-Team Planet. So we will put more focus in the future on both blog posts and discussions on the mailing list.

More than half of our followers check for updates at least once a day. So when we get started with interesting discussions I would expect good activity throughout the day.

44% feel less informed about our current activities. Another 33% answered this question with ‘Mostly’. So it’s a clear indication of what I already suspected, and it clearly needs action on our side to be more communicative. Doing this might also bring more people into our active projects, so mentoring would be much more valuable and time-effective than handling any drive-by projects which we cannot fully support.

The type of news people want more of is definitely the latest changes and code landings from contributors. This will ensure people feel recognized, and contributors will also know each other’s work and see the effectiveness in regard to our project goals. But also discussions about various automation-related topics (as mentioned already above) are highly wanted. Other topics like quarterly goals and current status updates are also wanted and we will see how we can do that. We might be able to fold those general updates into the Engineering Productivity updates which are pushed out twice a month via the A-Team Planet.

Also there is a bit of confusion about the Firefox Automation team and how it relates to the Engineering Productivity team (formerly A-Team). Effectively we are all part of the latter, and the “virtual” Automation team was only created when we got shifted back and forth between the A-Team and QA-Team. This will not happen anymore, so we agreed to get rid of this name.

All in all there are some topics which will need further discussions. I will follow-up with another blog post soon which will show off our plans for improvements and how we want to work to make it happen.

http://www.hskupin.info/2015/12/03/results-of-the-firefox-automation-survey/


Hal Wine: Tuning Legacy vcs-sync for 2x profit!

Thursday, December 3, 2015, 11:00

Tuning Legacy vcs-sync for 2x profit!

One of the challenges of maintaining a legacy system is deciding how much effort should be invested in improvements. Since modern vcs-sync is “right around the corner”, I have been avoiding looking at improvements to legacy (which is still the production version for all build farm use cases).

While adding another gaia branch, I noticed that the conversion path for active branches was both highly variable and frustratingly long. It usually took 40 minutes for a commit to an active branch to trigger a build farm build. And worse, that time could easily be 60 minutes if the stars didn’t align properly. (Actually, that’s the conversion time for git -> hg. There’s an additional 5-7 minutes, worst case, for b2g_bumper to generate the trigger.)

The full details are in bug 1226805, but a simple rearrangement of the jobs removed the 50% variability in the times and cut the average time by 50% as well. That’s a savings of 20-40 minutes per gaia push!

Moral: don’t take your eye off the legacy systems – there still can be some gold waiting to be found!

http://dtor.com/halfire/2015/12/03/tuning_vcs_sync_for_2x_profit_.html


Mitchell Baker: Thunderbird Update

Thursday, December 3, 2015, 03:52
This message is a summary and an update to a message about Thunderbird that I sent to Mozilla developers on Monday. Here are the key points. First, Thunderbird and Firefox are interconnected in a few different ways. They are connected through our technical infrastructure. Both use Mozilla build and release systems. This seems arcane but […]

http://blog.lizardwrangler.com/2015/12/03/thunderbird-update/


Air Mozilla: Firefox OS London Meetup - Firefox OS Add-Ons

Wednesday, December 2, 2015, 23:51

Firefox OS London Meetup - Firefox OS Add-Ons This is a session of the Firefox OS London meetup, dedicated to Firefox OS add-ons. You can find a quick recap of what's new in...

https://air.mozilla.org/firefox-os-london-meetup-firefox-os-add-ons/


Mozilla WebDev Community: Beer and Tell – November 2015

Wednesday, December 2, 2015, 21:21

Once a month, web developers from across the Mozilla Project get together to design programming languages that are intentionally difficult to reason about. While we advanced the state-of-the-art in side effects, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

Peterbe: Headsupper.io

Peterbe started us off with headsupper.io, a service that sends notification emails when you commit to a GitHub project with a specific keyword in your commit message. The service is registered as a GitHub webhook, and you can configure the service to only send emails on new tags if you so desire.
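
The general shape of such a service is straightforward. Here is a minimal sketch of the idea – not Peterbe’s actual code – assuming an Express app registered as a GitHub push webhook, with “headsup” as a hypothetical trigger keyword:

var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.use(bodyParser.json());

// GitHub POSTs a push payload with a commits array to this endpoint
app.post('/github-webhook', function (req, res) {
  var commits = (req.body && req.body.commits) || [];
  commits.forEach(function (commit) {
    if (/headsup/i.test(commit.message)) {
      // this is where the notification email would go out, e.g. via nodemailer
      console.log('Heads up from ' + commit.author.name + ': ' + commit.message);
    }
  });
  res.sendStatus(200);
});

app.listen(3000);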

Osmose: Advanced Open File (Round 2)

Next up was Osmose (that’s me!), with an Atom package for opening files called Advanced Open File. Advanced Open File adds a convenient modal dialog for finding files to open or create that aims to replace use of the system file dialog. Previously featured on Beer and Tell, today’s update included news of a rewrite in ES2015 using Babel, test coverage, Windows path fixes, and more!

Kumar: React + Redux Live Reload

Kumar shared a demo of an impressive React and Redux development setup that includes live-reloading of the app as the code changes, as well as a detailed view of the state changes happening in the app and the ability to walk through the history of state changes to debug your app. The tools even replay state changes after live-reloading for an impressively short feedback loop during development.
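
For the live-reloading half of a setup like that, a common approach is webpack’s hot module replacement combined with Redux’s replaceReducer, so the recorded state history survives a reload. A minimal sketch – a guess at how such a setup could be wired, not Kumar’s actual configuration, with './reducers' as a hypothetical module path:

import { createStore } from 'redux';
import rootReducer from './reducers';

const store = createStore(rootReducer);

if (module.hot) {
  // when a reducer module changes, swap it in without throwing away the
  // current state, so the state changes can still be replayed afterwards
  module.hot.accept('./reducers', () => {
    const nextRootReducer = require('./reducers').default;
    store.replaceReducer(nextRootReducer);
  });
}

export default store;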

Bwalker: ebird-mybird

Bwalker was next with a site called ebird-mybird. eBird is a bird observation checklist that bird watchers can use to track their observations. ebird-mybird reads in a CSV file exported from eBird and displays the data in various useful forms on a static site, including aggregate sightings by year/month and sightings categorized by species, location, and date.

The site itself is a frontend app that uses C3 for generating charts, PapaParse for parsing the CSV files, and Handlebars for templating.
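
As a rough idea of how those pieces fit together (assumed for illustration, not the project’s actual code – the CSV column name and element id are made up), PapaParse turns the eBird export into row objects and C3 charts an aggregate of them:

Papa.parse(file, {
  header: true,
  complete: function (results) {
    // count sightings per year; "Date" is an assumed column name in the export
    var byYear = {};
    results.data.forEach(function (row) {
      var match = /\d{4}/.exec(row.Date || '');
      if (match) {
        byYear[match[0]] = (byYear[match[0]] || 0) + 1;
      }
    });
    var years = Object.keys(byYear).sort();
    c3.generate({
      bindto: '#sightings-by-year',
      data: {
        columns: [['sightings'].concat(years.map(function (y) { return byYear[y]; }))],
        type: 'bar'
      },
      axis: { x: { type: 'category', categories: years } }
    });
  }
});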

Potch: Pseudorandom Number Generator

Last up was Potch with a small experiment in generating pseudorandom numbers in JavaScript. Inspired by a blog post about issues with Math.random in V8, Potch created a very simple Codepen that draws on a canvas based on custom-generated random numbers.

If you need sound random number generation, the blog post recommends crypto.randomBytes, also included in the Node standard library.
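
In the same spirit, here is a small sketch of the idea (assumptions: a <canvas id="c"> element exists, and xorshift32 stands in for whatever generator the Codepen actually used):

function xorshift32(seed) {
  var state = seed >>> 0 || 1;
  return function () {
    // classic xorshift32 step
    state ^= state << 13;
    state ^= state >>> 17;
    state ^= state << 5;
    state >>>= 0;
    return state / 4294967296; // map to [0, 1)
  };
}

var rand = xorshift32(42);
var canvas = document.getElementById('c');
var ctx = canvas.getContext('2d');
for (var x = 0; x < canvas.width; x++) {
  for (var y = 0; y < canvas.height; y++) {
    // one random grey value per pixel makes bias or patterns easy to spot
    var v = Math.floor(rand() * 256);
    ctx.fillStyle = 'rgb(' + v + ',' + v + ',' + v + ')';
    ctx.fillRect(x, y, 1, 1);
  }
}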


This week’s result was a programming language composed entirely of pop culture references, including a time-sensitive compiler that assigns optimization levels based on how current your references are.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

https://blog.mozilla.org/webdev/2015/12/02/beer-and-tell-november-2015/


Christian Legnitto: Star rating is the worst metric I have ever seen

Wednesday, December 2, 2015, 21:14

Note: The below is a slightly modified version of a rant I posted internally at Facebook when I was shipping their mobile apps. Even though the post is years old, I think the issues with star rating still apply in general. These days I mainly rant on Twitter.

Not only is star rating the worst metric I have ever seen at an engineering company, I think it is actively encouraging us to make wrong and irrational decisions.

My criticisms, in no particular order:

1. We can game it easily.

On iOS we prompt [1] people to rate our app and get at least a 1/2-star bump. Is that a valid thing to do or are we juicing the stats? We don’t really know. On Android we don’t prompt…should we artificially add in a 1/2 star there to make up for the lack of prompt and approximate the “real” rating? [2]

We’re adding in-app rating dialogs to both platforms, which can juice the stats even more [3]. If we are able to add a simple feature–which I think we should add for what it’s worth–and wildly swing a core metric without actually changing the app itself, I would argue the core metric is not reflective of the state of the app.

2. We don’t understand it.

The star rating is up on Android…we don’t really know why. The star rating is down on iOS and we think we might know why, but we still have big countdown buckets like “performance”. For a concrete example, in the Facebook for Android release before Home we shipped the crashiest release ever…and the star rating was up! We think it was because we added a much-requested feature and people didn’t care about the crashes but we have no way to be sure.

When users give star ratings they are not required to enter text reviews, leaving us blind and with no actionable information for those ratings. So even when we cluster on text reviews (using awesome systems and legit legwork by the data folks) we are working with even fewer data points to try to understand what is happening. [4]

Finally, we have fixed countdown bugs on both platforms in the last quarter…we haven’t seen a step function up or down on either star rating….the trends are pretty constant. This implies that we don’t really know what levers to pull and what they get us.

3. Vocal minorities skewing risk vs reward reasoning.

The absolute number of star ratings is pretty low, so vocal minorities can swing it wildly–representative sample this is not. For example, on the latest iOS app we think 37% of 1-star reviews can be attributed to a crash on start. Based on what we know, the upper bound of affected users is likely ~1MM, which at 130MM MAU [5] is about 0.7%. The fix touches a critical component of the app and mucks around with threading (via blocks) and the master code is completely different. So 0.7% of users make up 37% of our 1-star reviews because of one bug (we think) and we are pushing out a hotfix touching the startup path because of the “37%” when we should really be focusing on the “0.7%”. I think that is the right decision if we put a lot of weight on star rating but it isn’t the right decision generally. Note that we did not push out a hotfix for the profile picture uploading failure issue in the same release because the 0.5% of DAU affected wasn’t seen as worth the risk and churn.

4. It’s fluid-ish.

A user can give us a star rating and then go back and change it. Often they do, but frequently they don’t (we think). This means our overall star ratings likely have an inertia coefficient and may not reflect the current state of the app. We have no visibility into how much this affects ratings and in what ways. If we fix the iOS crash mentioned above, what percent of users will go back and change their star rating from 1 to something else? As far as I know this inertia coefficient isn’t included in any analysis and isn’t really accounted for in our reasoning and goals. [6]

5. One star != bad experience.

Note: I added #5 today, it wasn’t in the original post.

Digging into our star rating, some curious behavior emerged:

  • The app stores show reviews on the app listing page. The algorithm that chooses which reviews to show must have some balance component as it usually shows at least one negative and positive review. We found that users in certain countries noticed this and would rate us as 1 star just to see their name on the listing page!
  • There were a number of 1 star ratings with very positive reviews attached. It turns out that in some cultures 1 star is the best (“we’re number one”) so those users were trying to give us the best rating and instead gave us the worst!

Of course, there is both the standard OMG CHANGE reaction (“Why am I being forced to install Messenger?”) and user support issues (“I am blocked from sending friend requests, please help me!”) that show up frequently in 1 star reviews too. While both of those are important to capture and measure, they don’t really reflect on the quality of the app or a particular release.

The emperor has no clothes.

Everyone working on mobile knows about these issues and has been going along with star rating due to the idea that a flawed metric is better than no metric. I don’t think even using star rating as a knowingly flawed metric is useful from what I’ve seen over the last quarter. I think we should keep an eye on it as a vanity metric. I think we should work to capture that feedback in-app so we can be in control and get actionable data. I think we should be aware of it as an input to our reasoning about hotfixes but make it clear the star rating itself has no value and shouldn’t be optimized for in a specific release cycle.


  1. Via Appirater at the time.

http://christian.legnitto.com/blog/2015/12/02/star-rating-is-the-worst-metric-i-have-ever-seen/


Air Mozilla: The Joy of Coding - Episode 37

Wednesday, December 2, 2015, 21:00

The Joy of Coding - Episode 37 mconley livehacks on real Firefox bugs while thinking aloud.

https://air.mozilla.org/the-joy-of-coding-episode-37/


Daniel Pocock: Is giving money to Conservancy the best course of action?

Wednesday, December 2, 2015, 18:40

There has been a lot of discussion lately about Software Freedom Conservancy's fundraiser.

Various questions come to my mind:

Is this the only way to achieve goals such as defending copyright? (There are other options, like corporate legal insurance policies)

When all the options are compared, is Conservancy the best one? Maybe it is, but it would be great to confirm why we reached that conclusion.

Could it be necessary to choose two or more options that complement each other? Conservancy may just be one part of the solution and we may get a far better outcome if money is divided between Conservancy and insurance and something else.

What about all the other expenses that developers incur while producing free software? Many other professionals, like doctors, do work that is just as valuable for society but they are not made to feel guilty about asking for payment and reimbursement. (In fact, for doctors, there is no shortage of it from the drug companies).

There seems to be an awkwardness about dealing with money in the free software world and it means many projects continue to go from one crisis to the next. Just yesterday on another mailing list there was discussion about speakers regularly asking for reimbursement to attend conferences and at least one strongly worded email appeared questioning whether people asking about money are sufficiently enthusiastic about free software or if they are only offering to speak in the hope their trip will be paid.

The DebConf team experienced one of the more disappointing examples of a budget communication issue when developers who had already volunteered long hours to prepare for the event then had to give up valuable time during the conference to wash the dishes for 300 people. Had the team simply acknowledged the high cost of local labor, which was known when the country was selected, the task could have been easily outsourced to local staff. This came about because some members of the community felt nervous about asking for budget and other people couldn't commit to spend.

Rather than stomping on developers who ask about money or anticipate the need for it in advance, I believe we need to ask people: if money was not taboo, what effort could they contribute to the free software world and how much would they need to spend in a year for all the expenses involved? After all, isn't that similar to the appeal from Conservancy's directors? If all developers and contributors were suitably funded, then many people would budget for contributions to Conservancy, other insurances, attending more events and a range of other expenses that would make the free software world operate more smoothly.

In contrast, the situation we have now (for event-related expenses) is that developers funding themselves, or working with tightly constrained budgets or grants, often have to spend hours picking through AirBNB and airline web sites trying to get the best deal, while those few developers who do have more flexible corporate charge cards just pick a convenient hotel and don't lose any time reading through the fine print to see if there are charges for wifi, breakfast, parking, hidden taxes and all the other gotchas, because all of that will be covered for them.

With developer budgets/wishlists documented, where will the money come from? Maybe it won't appear, maybe it will. But if we don't ask for it at all, we are much less likely to get anything. Mozilla has recently suggested that developers need more cash and offered to put $1 million on the table to fix the problem. Is it possible other companies may see the benefit of this and put up some cash too?

The time it takes to promote one large budget and gather donations is probably far more efficient than the energy lost firefighting lots of little crisis situations.

Being more confident about money can also do a lot more to help engage people and make their participation sustainable in the long term. For example, if a younger developer is trying to save the equivalent of two years of their salary to pay a deposit on a house purchase, how will they feel about giving money to Conservancy or paying their own travel expenses to a free software event? Are their families and other people they respect telling them to spend or to save, and if our message is not compatible with that, is it harder for us to connect with these people?

One other thing to keep in mind is that budgeting needs to include the costs of those who may help the fund-raising and administration of money. If existing members of our projects are not excited about doing such work we have to be willing to break from the "wait for a volunteer or do-it-yourself" attitude. There are so many chores that we are far more capable of doing as developers that we still don't have time for, we are only fooling ourselves if we anticipate that effective fund-raising will take place without some incentives going back to those who do the work.

http://danielpocock.com/is-giving-money-to-conservancy-the-best-course-of-action


QMO: Firefox 43 Beta 7 Testday Results

Wednesday, December 2, 2015, 17:12

Hi mozillians! \o/

Last Friday, November 27th, we held the Firefox 43.0 Beta 7 Testday and it was another successful event!

First, many thanks go out to Moin Shaikh, Amlan Biswas, Iryna Thompson and the Bangladesh Community: Hossain Al Ikram, Nazir Ahmed Sabbir, T.M. Sazzad Hossain, Khalid Syfullah Zaman, Raihan Ali, Rezaul Huque Nayeem, Kazi Nuzhat Tasnem, Nazmus Shakib Robin, Sajedul Islam, Amlan Biswas, Tahsan Chowdhury Akash, Forhad Hossain, Sayed Mohammad Amir, Tanjil Haque, Saheda Reza Antora, Towkir Ahmed, Mohammed Jawad Ibne Ishaque, Fazle Rabbi, Jahir Islam, Umar Nasib, Mohammad Maruf Islam, Md. Faysal Alam Riyad, Ashickur Rahman, Md. Ehsanul Hassan, Md. Rahimul Islam and Rakibul Islam Ratul for getting involved – your help is always greatly appreciated!

Secondly, a big thank you to all our active moderators

https://quality.mozilla.org/2015/12/firefox-43-beta-7-testday-results/


Tarek Ziad'e: Managing small teams

Wednesday, December 2, 2015, 13:00

In the past three years, I went from being a developer in a team, to a team lead, to an engineering manager. I find my new position very challenging because of the size of my team, and the remote aspects (we're all remotes.)

When you manage 4 or 5 people, you're in that weird spot where you're not going to spend 100% of your time doing manager stuff. So for the remaining time, the obvious thing to do is to help out your team by putting your developer hat back on.

But switching hats like this has a huge pitfall: you are the person giving people work to do depending on the organization's priorities and you are also helping out developing. That puts you in a position where it's easy to fall into micromanagement: you are asking someone or a group of people to be accountable for a task and you are placing yourself on both sides.

I don't have any magic bullet to fix this, besides managing a bigger team where I'd spend 100% of my time on management. And I don't know if/when this will happen, because team sizes depend on the organization's priorities and on my growth as a manager.

So for now, I am trying to set a few rules for myself:

  1. when there's a development task, always delegate it to someone in the team and propose your help as a reviewer. Do not lead any development task, but try to have an impact on how things move forward, so they go in the direction you'd like them to go as a manager.
  2. Any technical help you give your team should be done by working under the supervision of a member of your team. You are not a developer among other developers in your own team.
  3. If you lead a task, it should be an isolated piece of work that does not directly impact developers in the team. Like building a prototype etc.
  4. Never ever participate in team meetings with a developer hat on. You can give some feedback of course, but as a manager. If there are some technical points where you can help, you should tackle them through 1:1s. See #1

There. That's what I am trying to stick with going forward. If you have more tips I'll take them :)

I see this challenge as an interesting puzzle to solve, and a key for me to maximize my team's impact.

Coding was easier, damned...

http://blog.ziade.org/2015/12/02/managing-small-teams/


Daniel Stenberg: What’s new in curl

Wednesday, December 2, 2015, 10:40

We just shipped our 150th public release of curl. On December 2, 2015.

curl 7.46.0

One hundred and fifty public releases done during almost 18 years makes a little more than 8 releases per year on average. In mid November 2015 we also surpassed 20,000 commits in the git source code repository.

With the constant and never-ending release train concept of just another release every 8 weeks that we’re using, no release is ever the grand big next release with lots of bells and whistles. Instead we just add a bunch of things, fix a bunch of bugs, release and then loop. With no fanfare and without any press-stopping marketing events.

So, instead of just looking at what went into this last release (you can check that out yourself in our changelog), I wanted to look back at the last two years and show you what we have done in this period. curl and libcurl are the sort of tool and library that people use for a long time, and a large number of users have versions installed that are far older than two years. So let me tease you and tell you what can be yours if you take the step straight into the modern day curl or libcurl.

Thanks

Before we dive into the real contents, let's not fool ourselves and think that we managed these years and all these changes without the tireless efforts and contributions from hundreds of awesome hackers. Thank you everyone! I keep calling myself lead developer of curl, but it truly would not exist without all the help I get.

We keep getting a steady stream of new contributors and quality patches. Our problem is rather to review and take in the contributions in a timely manner. On a personal note, I would also like to add that during these last two years I've had support from my awesome employer Mozilla, which allows me to spend a part of my work hours on curl.

What happened the last 2 years in curl?

We released curl and libcurl 7.34.0 on December 17th 2013 (12 releases ago). What have we done since then that could be worth mentioning? Well, a lot, and I'm going to mostly skip the almost 900 bug fixes we did in this time.

Many security fixes

Almost half (18 out of 37) of the security vulnerabilities reported for our project were reported during the last two years. That may suggest greater focus and more attention put on those details by users and developers. Security reports are a good thing: they mean that we find and address problems. Yes, it unfortunately also shows that we introduce security issues at times, but I consider that secondary, even if we of course also work on ways to make sure we'll do this less in the future.

URL-specific options: --next

A pretty major feature was added to the command line tool without much bang or whistles: you can now use --next as a separator on the command line to group options for specific URLs. This allows you to run multiple different requests on URLs that can still re-use the same connection and so on. It opens up lots of fun and creative uses of curl and has in fact been requested on and off for the project's entire lifetime!
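As a rough libcurl analogue (my own sketch, not code from the curl project; the URLs and form data are placeholders), grouping per-URL options the way --next does corresponds to reusing one easy handle for several transfers and resetting the options in between, while live connections stay cached in the handle:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        /* first "group": POST some data to one URL */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/login");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "user=me&pass=secret");
        curl_easy_perform(curl);

        /* clear the options so the POST does not leak into the next transfer,
           then do a plain GET; the connection cache survives the reset */
        curl_easy_reset(curl);
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/page");
        curl_easy_perform(curl);

        curl_easy_cleanup(curl);
      }
      return 0;
    }

On the command line, the same idea reads roughly as: curl -d "user=me&pass=secret" https://example.com/login --next https://example.com/page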

HTTP/2

There’s a new protocol version in town and during the last two years it was finalized and its RFC went public. curl and libcurl supports HTTP/2, although you need to explicitly ask for it to be used still.

HTTP/2 is binary, multiplexed, uses compressed headers and offers server push. Since the command line tool is still serially sending and receiving data, the multiplexing and server push features can right now only get fully utilized by applications that use libcurl directly.

HTTP/2 in curl is powered by the nghttp2 library and it requires a fairly new TLS library that supports the ALPN extension to be fully usable for HTTPS. Since the browsers only support HTTP/2 over HTTPS, most HTTP/2 in the wild so far is done over HTTPS.

We’ve gradually implemented and provided more and more HTTP/2 features.
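For applications that use libcurl directly, asking for HTTP/2 is a single option; here is a minimal sketch (the URL is a placeholder, and the libcurl you link against needs to have been built with nghttp2 and a TLS library that supports ALPN):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* attempt HTTP/2, negotiated via ALPN over TLS; falls back to HTTP/1.1 */
        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }

The command line tool has a corresponding --http2 flag.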

Separate proxy headers

For a very long time, there was no way to tell curl which custom headers to use when talking to a proxy and which to use when talking to the server; you'd just add a custom header to the request. This was never good, and we eventually made it possible to specify them separately. Then, after a security alert on this very issue, we made the separation the default behavior.
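In libcurl terms this became two separate header lists; a minimal sketch, with a made-up proxy address and made-up header names for illustration:

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        struct curl_slist *server_hdrs = NULL, *proxy_hdrs = NULL;
        server_hdrs = curl_slist_append(server_hdrs, "X-Server-Only: hello");
        proxy_hdrs = curl_slist_append(proxy_hdrs, "X-Proxy-Only: hello");

        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");
        /* keep the two lists strictly apart */
        curl_easy_setopt(curl, CURLOPT_HEADEROPT, (long)CURLHEADER_SEPARATE);
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, server_hdrs);  /* goes to the server */
        curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxy_hdrs);  /* goes to the proxy */
        curl_easy_perform(curl);

        curl_slist_free_all(server_hdrs);
        curl_slist_free_all(proxy_hdrs);
        curl_easy_cleanup(curl);
      }
      return 0;
    }

The command line tool has a matching --proxy-header option.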

Option man pages

We've now run two user surveys, as we try to make them an annual spring tradition for the project: to learn what people use, what people think, what people miss, etc. Both surveys have told us users think our documentation needs improvement, and there has since been an extra push towards improving the documentation to make it more accessible and more readable.

One way to do that has been to introduce separate, stand-alone versions of the man pages for each and every libcurl option, covering the functions curl_easy_setopt, curl_multi_setopt and curl_easy_getinfo. Right now, that means 278 new man pages that are easier to link directly to, easier to search for with Google etc., and they are now written with more text and more details for each individual option. In total, we now host and maintain 351 individual man pages.

The boringssl / libressl saga

The Heartbleed incident of April 2014 was a direct reason for libressl being created as a new fork of OpenSSL and I believe it also helped BoringSSL to find even more motivation for its existence.

Subsequently, libcurl can be built to use any one of these three forks sharing the same origin. This is however not accomplished without some amount of agony.

SSLv3 is also disabled by default

The continued stream of problems detected in SSLv3 finally made it, too, get disabled by default in curl (joining SSLv2, which has been disabled by default for a while already). Now users need to explicitly ask for it in case they need it, and in some cases the TLS libraries do not even support these versions anymore; you may need to build your own binary to get the support back.

Everyone should move up to TLS 1.2 as soon as possible. HTTP/2 also requires TLS 1.2 or later when used over HTTPS.
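For a libcurl application, requiring a modern TLS version is a single option; a minimal sketch (the URL is a placeholder):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* refuse anything older than TLS 1.2 */
        curl_easy_setopt(curl, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
      }
      return 0;
    }

The command line tool offers the same thing through its --tlsv1.2 option.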

support for the SMB/CIFS protocol

For the first time in many years we’ve introduced support for a new protocol, using the SMB:// and SMBS:// schemes. Maybe not the most requested feature out there, but it is another network protocol for transfers…
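A minimal libcurl sketch of an SMB download; the server, share, file and credentials below are placeholders, and your curl build needs SMB support enabled:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl = curl_easy_init();
      if(curl) {
        CURLcode res;
        /* fetch a file from a Windows or Samba share; without a write
           callback the downloaded data simply goes to stdout */
        curl_easy_setopt(curl, CURLOPT_URL, "smb://fileserver/share/report.txt");
        curl_easy_setopt(curl, CURLOPT_USERPWD, "myuser:mypassword");
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
      }
      return 0;
    }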

code of conduct

Triggered by several bad examples in other projects, we merged a code of conduct document into our source tree without much of a discussion, because this is the way this project always worked. This just makes it clear to newbies and outsiders in case there would ever be any doubt. Plus it offers a clear text saying what’s acceptable or not in case we’d ever come to a point where that’s needed. We’ve never needed it so far in the project’s very long history.

--data-raw

Just a tiny change, but more a symbol of the many small changes and advances we keep making. The --data option that is used to specify what to POST to a server can take a leading '@' followed by a file name, but that makes it tricky to actually send a literal '@', and it forces scripts to make sure one doesn't slip in by accident.

--data-raw was introduced to only accept a string to send, without any ability to read from a file and without treating '@' specially. If you include a '@' in that string, it will be sent verbatim.

attempting VTLS as a lib

We support eleven different TLS libraries in the curl project – probably more than any other transfer library in existence supports. The way we do this is by providing an internal API for TLS backends, which we call 'vtls'.

In 2015 we started an effort to turn that into its own sub-project so that other open source projects and tools could use it. We managed to find a few hackers from the wget project who were also interested and willing to participate. Unfortunately I didn't feel I could put enough effort or time into it to drive it forward, and while there was some initial work done by others, it soon became obvious it wouldn't go anywhere and we pulled the plug.

The internal vtls glue remains fine though!

pull-requests on github

Not really a change in the code itself, but still a change within the project: in March 2015 we changed our policy regarding pull-requests done on github. The effect has been a huge increase in the number of pull-requests and a slight shift in activity away from the mailing list over to github. I think it has made it easier for casual contributors to send enhancements to the project, but I don't have any hard facts backing this up (and I wouldn't know how to measure this).

… as mentioned in the beginning, there have also been hundreds of smaller changes and bug fixes. What fun will you help us make reality in the next two years?

http://daniel.haxx.se/blog/2015/12/02/whats-new-in-curl/


The Mozilla Blog: Visualizing the Invisible

Wednesday, December 2, 2015, 09:58

Today, online privacy and threats like invisible tracking from third parties on the Web seem very abstract. Many of us are either not aware of what’s happening with our online data or we feel powerless because we don’t know what to do. More and more, the Internet is becoming a giant glass house where your personal information is exposed to third parties who collect and use it for their own purposes.

We recently released Private Browsing with Tracking Protection in Firefox – a feature focused on providing anyone using Firefox with meaningful choice over third parties on the Web that might be collecting data without their understanding or control. This is a feature which addresses the need for more control over privacy online but is also connected to an ongoing and important debate around the preservation of a healthy, open Web ecosystem and the problems and possible solutions to the content blocking question.

The Glass House

Earlier this month we dedicated a three-day event to the topic of online privacy in Hamburg, Germany. Today, we would like to share some impressions from the event and also an experiment we filmed on the city’s famous Reeperbahn.

Our experiment?

We set out to see if we could explain something that is not easily visible, online privacy, in a very tangible way. We built an apartment fully equipped with everything one needs to enjoy a short trip to Germany's northern pearl. We made the apartment available to various travelers arriving to stay the night. Once they logged onto the apartment's Wi-Fi, all the walls were removed, revealing the travelers to onlookers and to the commotion outside as their private information turned out to be public.

The travelers’ responses are genuine.

That said, we did bring in a few actors for dramatic effect to help highlight a not-so-subtle reference to what can happen to your data when you aren’t paying attention. Welcome to the glass house.

While the results of the experiment are intended to educate and generate awareness, we also captured the participants’ thoughts and feelings after the reveal. Here are some of the most poignant reactions:

Discussing the State of Data Control on the Web Today

Over the next two days, in that same glass house, German technology and privacy experts, Hamburg’s Digital Media Women group, the Mozilla community and people interested in the topic of online privacy came together to discuss the State of Data and Control on the Web.

We kicked off with a panel discussion, moderated by Svenja Teichmann, founder and Managing Director of crowdmedia. German data protection experts spoke about various aspects of online privacy protection and questions like “What is private nowadays?” while passersby could look over their shoulders through the glass walls.

Glass House: panel discussion on online privacy. From left to right: Lars Reppesgaard (Author, “The Google Empire”), Svenja Teichmann (crowdmedia), Frederick Richter (Chairman, German Data Protection Foundation) and Winston Bowden (Sr. Manager, Firefox Product Marketing)

Frederick Richter pointed to the user’s uncertainty: “On the Web we are not aware of who is watching us. And many people can’t protect their privacy online, because they don’t have easy features to use.” Lars Reppesgaard is not fundamentally against tracking but thinks users must have a choice: “If you want the technology to help you, it has to collect data sometimes. But for most users it’s not obvious when and by whom they are tracked.” When it came to the new Tracking Protection feature in Private Browsing on Firefox, Winston Bowden emphasized: “We are not an enemy of online advertising. It’s a legitimate source of income and guarantees highly exciting content on the Web. But tracking users without them knowing or tracking them even if they actively decided against it, won’t work. The open and free Web is a valuable asset, which we should protect. Users have to be in control of their data.”

Educating and Engaging

Finally, German Mozilla community members joined the event to inform and educate people about how Firefox can help users gain control over their online experience. They explained the background and genesis of Tracking Protection but also showed tools such as Lightbeam and talked about Smart On Privacy and Web Literacy programs to offer better insight into how the Web works.

Thanks to all who worked behind the scenes and/or came to Hamburg and made this event possible. We appreciate your help in educating people and advocating for their choice and control over online privacy.

https://blog.mozilla.org/blog/2015/12/01/visualizing-the-invisible/


Mozilla Addons Blog: December 2015 Featured Add-ons

Wednesday, December 2, 2015, 03:47

Pick of the Month: Fox Web Security

by Oleksandr
Fox Web Security is designed to automatically block known dangerous websites and unwanted content that is not suitable for children.

“This add-on is extremely fast and effective! You can say goodbye to porno sites, scams and viruses—now my web is absolutely safe.”

Featured: YouTube™ Flash-HTML5

by A Ulmer
YouTube™ Flash-HTML5 allows you to play YouTube videos in either the Flash or the HTML5 player.

Featured: AdBlock for YouTube™

by AdblockLite
AdBlock for YouTube™ removes all ads from YouTube.

Featured: 1-Click YouTube Video Download

by The 1-Click YouTube Video Download Team
The simplest YouTube Video Downloader for all YouTube Flash sites, period.

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

https://blog.mozilla.org/addons/2015/12/01/december-2015-featured-add-ons/


