
Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/



Source: http://planet.mozilla.org/.
This feed is generated from the public RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, use the contact information page.


Mozilla Performance Blog: Our Year in Review: How we’ve made Firefox Faster in 2020

Tuesday, December 15, 2020, 16:45

This year was unlike anything any of us has experienced before. The global pandemic affected how we learn, connect with others, work, play, shop, and more. It affected, directly or indirectly, every aspect of how we live. Consequently, a universally open and accessible Internet became more important to our daily lives. For some Firefox users, access to the Internet is not just important, it is essential.

As the Firefox team adjusted to focus on features and improvements to help our users live a more Internet-based life, the performance program redoubled its efforts to deliver on the vision of “performance for every experience.” We hoped that by speeding up key parts of Firefox and Gecko like page load, JavaScript responsiveness, and startup, and by ensuring new features performed their best, we could make the new normal a little less painful and a little more… well, normal.

With that in mind, we’d like to share the work that we’re most proud of this year and highlight how we make each version of Firefox faster than the last. Since we’ve accomplished a lot this year, we’ve broken the accomplishments into a few sections related to how we think about our work:

  • “Performance for every experience” details all the changes to Firefox itself.
  • “Culture of Performance” includes all the work to develop tests, metrics, processes, and the other pieces to ensure that performance is central to everything that Firefox builds.
  • Finally, “Browsing at the Speed of Right” covers the work to improve performance for the Web beyond the borders of the Mozilla ecosystem.

Performance for Every Experience

We believe that interactions with the browser and web pages should be quick and smooth. Pages load quickly. The web is responsive. And resources are used thoughtfully. This is the baseline performance we expect for all Mozilla products. In 2020, we improved all of these areas.

Page load

  • Firefox 75 included support for the lazy image loading standard. This feature can improve page load performance and reduce data usage by loading images only when needed rather than on the critical path for the initial page load (see the sketch after this list).
  • Firefox 80 added an optimization for Flexbox reflows. This improves page load performance by more than 20% in some cases and reduces layout shift during page load.
  • Firefox 82 features speculative JavaScript parsing that allows the browser to parse external JavaScript scripts as soon as they are fetched, even if never executed. This change generally improves page load by 3-4%, but for some sites can improve page load by more than 67%.
  • Firefox 83 introduces the new Warp feature in the SpiderMonkey JavaScript engine.
  • Firefox 84 adds support for link preloading, which gives developers more control over resource loading and can improve page load by more than 10 percent (also shown in the sketch after this list).
  • The new Firefox for Android that launched in August loads pages 20 percent faster than at the start of the year, making it the fastest Firefox browser ever on Android.
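
The two page-load hints mentioned above are web platform features that any site can opt into. Here is a minimal page-side sketch (not Firefox internals; the image URL and script path are made up) showing what lazy image loading and link preloading look like from a page's point of view:

// Lazy loading: the image is only fetched once it approaches the viewport.
const img = document.createElement("img");
img.loading = "lazy";                            // same effect as <img loading="lazy">
img.src = "https://example.com/large-photo.jpg"; // hypothetical URL
document.body.appendChild(img);

// Link preloading: hint the browser to fetch a critical resource early.
const link = document.createElement("link");
link.rel = "preload";
link.as = "script";           // must match how the resource will later be used
link.href = "/assets/app.js"; // hypothetical path
document.head.appendChild(link);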

Responsiveness

  • Firefox 72 included a change to how Firefox manages the add-on blocklist, which resulted in up to a 7 percent improvement to startup and session restore times.
  • Completed the first two of three planned phases for the Fast Shutdown project resulting in Firefox shutting down three times more quickly.
  • Firefox 85 Nightly for Windows includes a new skeleton UI for startup, which means the first sign that Firefox is starting appears in 10 percent of the time it took before. This improvement to the perceived performance of Firefox is especially helpful for users on slower machines. We hope to ship this new experience by default in early 2021.
  • Implemented a cache for the about:home page at startup that improved the time to when Top Sites render during startup by just over 20%.
  • Lazily loading parts of the Firefox UI when they’re needed, instead of automatically, results in browser windows opening more quickly. Between Firefox 77 and Firefox 80, the time to open a new window once Firefox is already running decreased by about 10% on Windows.
  • Firefox 81 includes better playback quality for 60fps VP9 videos on macOS when WebRender is enabled.
  • WebRender also includes optimizations for shaders compilation that improved startup and session restore time by more than 10 percent.
  • The new Firefox for Android that launched in August opens links from other apps 25 percent faster. On some devices this means it is 60 percent faster than the old Firefox for Android.
  • General startup didn’t improve much when compared to the MVP that launched in June 2019. While holding ground isn’t normally a reason to celebrate, being able to do so while features were added to develop the browser from an MVP to the flagship Firefox browser on Android is a considerable accomplishment.
  • Firefox for Android also shipped with a number of improvements to the responsiveness of key parts of the user interface, including touch-scrolling accuracy and control, reduced panning jitter, and how quickly the keyboard is shown, text appears when typing, and menus respond to user input.

Resource Usage

  • Firefox 83 included WebRender support on macOS and further improvements to power use, especially when scrolling or watching videos. This work builds on improvements that were released in late 2019 and reduced Firefox power use on macOS by up to a factor of three.

Culture of Performance

We believe everyone contributing to Mozilla products owns a share of performance. In order for everyone to share ownership, the Performance Program must build a foundation of validated tests, tools, metrics, and other resources (e.g., dashboards, documentation) that empower everyone to understand and impact the performance of the browser. In 2020, we improved our test infrastructure and enhanced our performance tools to measure improvements and diagnose performance regressions.

Maintain Table Stakes Performance

  • Improved the performance dashboard https://arewefastyet.com by adding categories for memory, network, both cold and warm page loads, and live page loads. Also added results for WebRender and Fission variants.
  • Introduced and refined sheriffing efficiency metrics for measuring the time taken for sheriffs to triage and raise regression bugs. These metrics were the focus of the performance sheriffing newsletter in September.
  • Introduced dynamic test documentation with PerfDocs.
  • Introduced mach perftest notebook for analysing and sharing performance test results.
  • Added support for running performance tests in a batch mode over several builds in a single CI job, resulting in a more efficient use of machine time without compromising coverage.
  • Several improvements to Perfherder to include support for ingesting results from tests running in batch-mode, the ability to sort results in the compare view, and improved bug templates.

Measure What Matters

  • Introduced visual metrics for page load tests in the continuous integration (CI) pipelines for Android, Linux, and macOS, with Windows expected by the end of the year. This effort replaces our proprietary page load engine with the popular open source browsertime tool.
  • Completed the first version of a new visual completeness-based test for Firefox startup to provide a more user representative view of how quickly Firefox starts.
  • Introduced mach perftest, a modern framework for performance testing with the goal of simplifying running and writing tests. All new tests will use this framework, and the existing tests will be migrated to it over time. Several teams have already taken advantage of this new test framework, including QUIC and Fenix.
  • Enhanced mach perftest to support network throttling, and used this for our first user journey tests to compare http/2 and http/3.
  • Combined cold and warm page loads into a single CI job, reducing the total execution time whilst increasing the test coverage.

Help Feature Teams

  • Replaced the Gecko Profiler add-on with Firefox’s new Profiler toolbar, unifying recording flows for both desktop and remote mobile profiling (and eventually for both Firefox engineers and web developers).
  • Setting up the Profiler got easier as well. As soon as the Profiler icon is in the toolbar, the start/capture shortcuts will just work. Enabling it via “Tools -> Web Developer -> Enable Profiler Menu Button” is no longer needed.
  • The Profiler’s sidebar is handy for summarizing the currently selected stack/leaf. We saw that it delighted our users, so we made it more prominent by enabling it by default in the Call Tree.
  • Right-clicking on markers in the Profiler timeline now provides the same context menu as the marker table and graph, letting you select to/from timings and copy data.
  • The media team added the first custom preset to the Profiler, giving contributors quick access to recording media performance with all the necessary threads and settings.
  • In case of failed symbolication, the Profiler’s metadata dropdown now provides a handy button to re-trigger the process.
  • Dragging profile files to load them, which previously only worked on the homepage, now also works on a loaded profile.
  • The profile storage backend migrated to an official Mozilla-owned storage space.
  • The Firefox Profiler’s support for showing FileIO markers for main thread I/O has been extended to also show markers for off main thread I/O, and even for threads that are not being sampled.
  • The Firefox Profiler, used successfully by Firefox engineers to improve Firefox performance, has been made available for web developers. It replaces the legacy “Performance” devtools panel. Using the Firefox Profiler from Firefox devtools will show profiles with a new view focused on the current tab.
  • Markers 2.0 – Self-service profiler markers: in Gecko you describe how you want the marker to look, and it will appear in the Firefox Profiler front-end where you want it, without needing to patch the front-end.
  • Added the ability to delete profiles to provide users with agency over stored profiles.

Learn From Our Users

  • Created a new dashboard to more easily identify performance-related user feedback from sources like Google Play Store reviews and support.mozilla.org forums.
  • Developed a proof of concept technique for observational analysis that correlates changes to performance metrics with user behavior metrics.
  • Ran experiments on desktop and mobile to inform ship decisions and guide performance optimizations related to startup, page load, link preloading, and Warp.
  • Started an audit and gap analysis of performance telemetry as part of a project to improve real user metrics.
  • Added new telemetry to better understand performance and user impact related to slow startups, long-running scripts and resulting browser hangs, mobile performance, page load performance when using link preloading, garbage collection and JavaScript execution time, and more fine-grained page load metrics.
  • Added annotations to the Background Hang Reporter to better understand hangs occurring in the Firefox UI and how they impact users.

Browsing at the Speed of Right

This means building the right performance for the web. There is no trade-off between performance (the “easy” choice) and purpose (the “right” choice). Mozilla is uniquely positioned to bring a balance between what is good for the users of the web and what is good for the builders of the web. This includes influencing web standards and finding ways to make the web faster for everyone (regardless of location, income, or even browser). Members of the performance program also actively participate in the W3C Web Performance Working Group.

This year, we contributed to the discussions and feedback for Google’s Web Vitals proposal. This included in-house analysis of the Largest Contentful Paint (LCP) metric and how it compares to existing performance metrics—both visual metrics and standardized web performance APIs—that Mozilla uses internally to test performance on desktop and mobile. Our analysis found that while LCP is a promising improvement over some previous page load metrics, when the render time is not available, the value of the metric is significantly reduced. This work lays the groundwork for more complete implementations of Web Vitals in 2021, and closes the gap between measurement and performance analysis capabilities on desktop and mobile.

In 2020 we also made progress on the Firefox implementation of the PerformanceEventTiming API.

What’s Next

The end of 2020 isn’t the end of performance work for Firefox, but rather a milestone marking our progress. We still have a lot more that we want to accomplish.

For the next few months our focus will remain on improving the responsiveness of key areas of the browser like long-running scripts that cause hangs, intra-site navigation, and startup. We’re also laying the foundation for work later in 2021 related to how Firefox uses resources like CPU, battery, and network bandwidth. Work on improving page load will continue with some focus on performance cliffs that result in egregiously poor performance. Likewise, our work to develop the culture of performance across the Firefox org and influence performance of the broader web will continue as well.

We hope that our work in 2020 made the new normal a little less painful for you. Our goals for 2021 are ambitious, so we invite you to join us as we work to make normal, whatever it might look like, less and less painful. At least when it comes to browsing the web.


Have a question about performance? Find us on Matrix in the #perf room

Found a performance issue on Firefox? We’d love a bug report

https://blog.mozilla.org/performance/2020/12/15/2020-year-in-review/


The Mozilla Blog: Mozilla’s Vision for Trustworthy AI

Tuesday, December 15, 2020, 14:25

Mozilla is publishing its white paper, “Creating Trustworthy AI.”


A little over two years ago, Mozilla started an ambitious project: deciding where we should focus our efforts to grow the movement of people committed to building a healthier digital world. We landed on the idea of trustworthy AI.

When Mozilla started in 1998, the growth of the web was defining where computing was going. So Mozilla focused on web standards and building a browser. Today, computing — and the digital society that we all live in — is defined by vast troves of data, sophisticated algorithms and omnipresent sensors and devices. This is the era of AI. Asking questions today such as ‘Does the way this technology works promote human agency?’ or ‘Am I in control of what happens with my data?’ is like asking ‘How do we keep the web open and free?’ 20 years ago.

This current era of computing — and the way it shapes the consumer internet technology that more than 4 billion of us use every day — has high stakes. AI increasingly powers smartphones, social networks, online stores, cars, home assistants and almost every other type of electronic device. Given the power and pervasiveness of these technologies, the question of whether AI helps and empowers or exploits and excludes will have a huge impact on the direction that our societies head over the coming decades.

It would be very easy for us to head in the wrong direction. As we have rushed to build data collection and automation into nearly everything, we have already seen the potential of AI to reinforce long-standing biases or to point us toward dangerous content. And there’s little transparency or accountability when an AI system spreads misinformation or misidentifies a face. Also, as people, we rarely have agency over what happens with our data or the automated decisions that it drives. If these trends continue, we’re likely to end up in a dystopian AI-driven world that deepens the gap between those with vast power and those without.

On the other hand, a significant number of people are calling attention to these dangerous trends — and saying ‘there is another way to do this!’ Much like the early days of open source, a growing movement of technologists, researchers, policy makers, lawyers and activists are working on ways to bend the future of computing towards agency and empowerment. They are developing software to detect AI bias. They are writing new data protection laws. They are inventing legal tools to put people in control of their own data. They are starting orgs that advocate for ethical and just AI. If these people — and Mozilla counts itself amongst them — are successful, we have the potential to create a world where AI broadly helps rather than harms humanity.

It was inspiring conversations with people like these that led Mozilla to focus the $20M+ that it spends each year on movement building on the topic of trustworthy AI. Over the course of 2020, we’ve been writing a paper titled “Creating Trustworthy AI” to document the challenges and ideas for action that have come up in these conversations. Today, we release the final version of this paper.

This ‘paper’ isn’t a traditional piece of research. It’s more like an action plan, laying out steps that Mozilla and other like-minded people could take to make trustworthy AI a reality. It is possible to make this kind of shift, just as we have been able to make the shift to clean water and safer automobiles in response to risks to people and society. The paper suggests the code we need to write, the projects we need to fund, the issues we need to champion, and the laws we need to pass. It’s a toolkit for technologists, for philanthropists, for activists, for lawmakers.

At the heart of the paper are eight big challenges the world is facing when it comes to the use of AI in the consumer internet technologies we all use every day. These are things like bias, privacy, transparency, security, and the centralization of AI power in the hands of a few big tech companies. The paper also outlines four opportunities to meet these challenges. These opportunities centre around the idea that there are developers, investors, policy makers and a broad public that want to make sure AI works differently — and to our benefit. Together, we have a chance to write code, process data, create laws and choose technologies that send us in a good direction.

Like any major Mozilla project, this paper was built using an open source approach. The draft we published in May came from 18 months of conversations, research and experimentation. We invited people to comment on that draft, and they did. People and organizations from around the world weighed in: from digital rights groups in Poland to civil rights activists in the U.S., from machine learning experts in North America to policy makers at the highest levels in Europe, from activists, writers and creators to Ivy League professors. We have revised the paper based on this input to make it that much stronger. The feedback helped us hone our definitions of “AI” and “consumer technology.” It pushed us to make racial justice a more prominent lens throughout this work. And it led us to incorporate more geographic, racial, and gender diversity viewpoints in the paper.

In the months and years ahead, this document will serve as a blueprint for Mozilla Foundation’s movement building work, with a focus on research, advocacy and grantmaking. We’re already starting to manifest this work: Mozilla’s advocacy around YouTube recommendations has illuminated how problematic AI curation can be. The Data Futures Lab and European AI Fund that we are developing with partner foundations support projects and initiatives that reimagine how trustworthy AI is designed and built across multiple continents. And Mozilla Fellows and Awardees like Sylvie Delacroix, Deborah Raj, and Neema Iyer are studying how AI intersects with data governance, equality, and systemic bias. Past and present work like this also fed back into the white paper, helping us learn by doing.

We also hope that this work will open up new opportunities for the people who build the technology we use every day. For so long, building technology that valued people was synonymous with collecting little or no data about them. While privacy remains a core focus of Mozilla and others, we need to find ways to protect and empower users that also include the collection and use of data to give people experiences they want. As the paper outlines, there are more and more developers — including many of our colleagues in the Mozilla Corporation — who are carving new paths that head in this direction.

Thank you for reading — and I look forward to putting this into action together.

The post Mozilla’s Vision for Trustworthy AI appeared first on The Mozilla Blog.

https://blog.mozilla.org/blog/2020/12/15/mozillas-vision-for-trustworthy-ai/


Daniel Stenberg: How my Twitter hijacks happened

Tuesday, December 15, 2020, 10:14

You might recall that my Twitter account was hijacked and then again just two weeks later.

The first: brute-force

The first take-over was most likely a case of brute-forcing my weak password while not having 2FA enabled. I have no excuse for either of those lapses. I had convinced myself I had 2fa enabled which made me take a (too) lax attitude to my short 8-character password that was possible to remember. Clearly, 2fa was not enabled and then the only remaining wall against the evil world was that weak password.

The second time

After that first hijack, I immediately changed password to a strong many-character one and I made really sure I enabled 2fa with an authenticator app and I felt safe again. Yet it would only take seventeen days until I again was locked out from my account. This second time, I could see how someone had managed to change the email address associated with my account (displayed when I wanted to reset my password). With the password not working and the account not having the correct email address anymore, I could not reset the password, and my 2fa status had no effect. I was locked out. Again.

It felt related to the first case because I’ve had my Twitter account since May 2008. I had never lost it before and then suddenly after 12+ years, within a period of three weeks, it happens twice?

Why and how

How this happened was a complete mystery to me. The account was restored fairly swiftly but I learned nothing from that.

Then someone at Twitter contacted me. After they investigated what had happened and how, I had a chat with a responsible person there and he explained to me exactly how this went down.

Had Twitter been hacked? Is there a way to circumvent 2FA? Was my local computer or phone compromised? No, no and no.

Apparently, an agent at Twitter who was going through the backlog of issues, where my previous hijack issue was still present, accidentally changed the email address on my account, probably confusing it with another account in another browser tab.

There was no outside intruder; it was just a user error.

Okay, the cynics will say, this is what he told me and there is no evidence to back it up. That’s right, I’m taking his words as truth here but I also think the description matches my observations. There’s just no way for me or any outsider to verify or fact-check this.

A brighter future

They seem to already have identified things to improve to reduce the risk of this happening again and Michael also mentioned a few other items on their agenda that should make hijacks harder to do and help them detect suspicious behavior earlier and faster going forward. I was also happy to provide my feedback on how I think they could’ve made my lost-account experience a little better.

I’m relieved that the second time at least wasn’t my fault and neither of my systems are breached or hacked (as far as I know).

I’ve also now properly and thoroughly gone over all my accounts on practically all online services I use and made really sure that I have 2fa enabled on them. On some of them I’ve also changed my registered email address to one with 30 random letters to make it truly impossible for any outsider to guess what I use.

(I’m also positively surprised by this extra level of customer care Twitter showed for me and my case.)

Am I a target?

I don’t think I am. I think maybe my Twitter account could be interesting to scammers since I have almost 25K followers and I have a verified account. Me personally, I work primarily with open source and most of my work is already made public. I don’t deal in business secrets. I don’t think my personal stuff attracts attackers more than anyone else’s does.

What about the risk or the temptation for bad guys in trying to backdoor curl? It is after all installed in some 10 billion systems world-wide. I’ve elaborated on that before. Summary: I think it is terribly hard for someone to actually manage to do it. Not because of the security of my personal systems perhaps, but because of the entire setup and all processes, signings, reviews, testing and scanning that are involved.

So no. I don’t think my personal systems are a valuable, singled-out target for attackers.

Now, back to work!

Credits

Image by Gerd Altmann from Pixabay

https://daniel.haxx.se/blog/2020/12/15/how-my-twitter-hijacks-happened/


Hacks.Mozilla.Org: Welcome Yari: MDN Web Docs has a new platform

Monday, December 14, 2020, 21:51

After several intense months of work on such a significant change, the day is finally upon us: MDN Web Docs’ new platform (codenamed Yari) is finally launched!

[Yari logo: a figure with a spear and the text “Yari, the MDN Web Docs platform”]

Between November 2 and December 14, we ran a beta period in which a number of our fabulous community members tested out the new platform, submitted content changes, allowed us to try out the new contribution workflow, and suggested improvements to both the platform and styling. All of you have our heartfelt thanks.

This post serves to provide an update on where we are now, what we’re aiming to do next, and what you can do to help.

Where we are now

We’ve pulled together a working system in a short amount of time that vastly improves on the previous platform, and solves a number of tangible issues. There is certainly a lot of work still to do, but this new release provides a stable base to iterate from, and you’ll see a lot of further improvements in the coming months. Here’s a peek at where we are now:

Contributing in GitHub

The most significant difference with the new platform is that we’ve decentralized the content from a SQL database to files in a git repository. To edit content, you now submit pull requests against the https://github.com/mdn/content repo, rather than editing the wiki using the old WYSIWYG editor.

This has a huge advantage in terms of contribution workflow — because it’s a GitHub repo, you can insert it into your workflow however you feel comfortable, mass changes are easier to make programmatically, you can lump together edits across multiple pages in a single pull request rather than making scattered individual edits, and we can apply intelligent automatic linting to edits to speed up work.

The content repo initially comes with a few basic CLI tools to help you with fundamental tasks, such as yarn start (to create a live preview of what your document will look like when rendered on MDN), yarn content create (to add a new page), yarn content move (to move an existing page), etc. You can find more details of these, and other contribution instructions, in the repo’s README file.

Caring for the community

Community interactions will not just be improved, but transformed. You can now have a conversation about a change over a pull request before it is finalized and submitted, making suggestions and iterating, rather than worrying about getting it perfect the first time round.

We think that this model will give contributors more confidence in making changes, and allow us to build a much better relationship with our community and help them improve their contributions.

Reducing developer burden

Our developer maintenance burden is also now much reduced with this update. The existing (Kuma) platform is complex, hard to maintain, and adding new features is very difficult. The update will vastly simplify the platform code — we estimate that we can remove a significant chunk of the existing codebase, meaning easier maintenance and contributions.

This is also true of our front-end architecture: The existing MDN platform has a number of front-end inconsistencies and accessibility issues, which we’ve wanted to tackle for some time. The move to a new, simplified platform gives us a perfect opportunity to fix such issues.

What we’re doing next

There are a number of things that we could do to further improve the new platform going forward. Last week, for example, we already talked about our plans for the future of l10n on MDN.

The first thing we’ll be working on in the new year is ironing out the kinks in the new platform. After that, we can start to serve our readers and contributors much better than before, implementing new features faster and more confidently, which will lead to an even more useful MDN, with an even more powerful contribution model.

The sections below are by no means definitive, but they do provide a useful idea of what we’ve got planned next for the platform. We are aiming to publish a public roadmap in the future, so that you can find out where we’re at and make suggestions.

Moving to Markdown

At its launch, the content is stored in HTML format. This is OK — we all know a little HTML — but it is not the most convenient format to edit and write, especially if you are creating a sizable new page from scratch. Most people find Markdown easier to write than HTML, so we want to eventually move to storing our core content in Markdown (or maybe some other format) rather than HTML.

Improving search

For a long time, the search functionality has been substandard on MDN. Going forward, we not only want to upgrade our search to return useful results, but we also want to search more usefully: for example fuzzy search, search by popularity, search by titles and summaries, full-text search, and more.

Representing MDN meta pages

Currently only the MDN content pages are represented in our new content repo. We’d eventually like to stop using our old home, profile, and search pages, which are still currently served from our old Django-based platform, and bring those into the new platform with all the advantages that it brings.

And there’s more!

We’d also like to start exploring:

  • Optimizing file attachments
  • Implementing and enforcing CSP on MDN
  • Automated linting and formatting of all code snippets
  • Gradually removing the old-style KumaScript macros that remain in MDN content, removing, rendering, or replacing them as appropriate. For example, link macros can just be rendered out, as standard HTML links will work fine, whereas all the sidebar macros we have should be replaced by a proper sidebar system built into the actual platform.

What you can do to help

As you’ll have seen from the above next steps, there is still a lot to do, and we’d love the community to help us with future MDN content and platform work.

  • If you are more interested in helping with content work, you can find out how to help at Contributing to MDN.
  • If you are more interested in helping with MDN platform development, the best place to learn where to start is the Yari README.
  • In terms of finding a good general place to chat about MDN, you can join the discussion on the MDN Web Docs chat room on Matrix.

The post Welcome Yari: MDN Web Docs has a new platform appeared first on Mozilla Hacks - the Web developer blog.

https://hacks.mozilla.org/2020/12/welcome-yari-mdn-web-docs-has-a-new-platform/


The Mozilla Blog: Why getting voting right is hard, Part II: Hand-Counted Paper Ballots

Monday, December 14, 2020, 20:42

The Rust Programming Language Blog: Next steps for the Foundation Conversation

Monday, December 14, 2020, 03:00

Last week we kicked off the Foundation Conversation, a week-long period of Q&A forums and live broadcasts with the goal of explaining our vision for the Foundation and finding out what sorts of questions people had. We used those questions to help build a draft Foundation FAQ, and if you’ve not seen it yet, you should definitely take a look -- it’s chock full of good information. Thanks to everyone for asking such great questions!

We’ve created a new survey that asks about how people experienced the Foundation Conversation. Please take a moment to fill it out! We’re planning a similar event for this January, so your feedback will be really helpful.

This post is going to discuss how the Foundation and the Rust project relate to one another.

What is the central purpose of the Foundation?

At its core, the mission of the Foundation is to empower the Rust maintainers to joyfully do their best work. We think of the Foundation as working with the teams, helping them to create the scaffolding that people need to contribute and participate in the Rust project.

The scope and role of the Rust teams does not change

For most Rust teams, the creation of the Foundation doesn’t change anything about the scope of their work and decision making authority. The compiler team is still going to be maintaining the compiler, the community team will still be helping coordinate and mentor community events, and so forth. One exception is the Rust core team: there are various legal details that we expect to off-load onto the Foundation.

Let the Rust teams be their best selves

We are really excited for all the things that the Foundation will make possible for the Rust teams. We hope to draw on the Foundation to target some of the hardest problems in running an open-source project. We’re thinking of programs like offering training for maintainers, assistance with product and program management, access to trained mediators for conflict management, as well as facilitating events to help contributors get more high bandwidth communication (assuming, that is, we’re ever allowed to leave our houses again).

What comes next

This last week has been intense -- we calculated about 60 person hours of sync time answering questions -- and it’s been really valuable. The questions that everyone asked really helped us to refine and sharpen our thinking. For the remainder of the year we are going to be working hard on finalizing the details of the Foundation. We expect to launch the Foundation officially early next year! In the meantime, remember to fill out our survey!

https://blog.rust-lang.org/2020/12/14/Next-steps-for-the-foundation-conversation.html


Tarek Ziadé: My journey at Mozilla

Monday, December 14, 2020, 02:00

During the spring of 2010, I applied for a job at Mozilla Labs. They were looking for a Python developer to re-write the Firefox Sync service (called Weave back then) into Python. They wanted to move all of their web services from PHP to Python, and looked for a Python …

https://ziade.org/2020/12/14/my-journey-at-mozilla/


Daniel Stenberg: the critical curl

Sunday, December 13, 2020, 14:39

Google has, as part of their involvement in the Open Source Security Foundation (OpenSSF), come up with a “Criticality Score” for open source projects.

It is a score between 0 (least critical) and 1 (most critical).

The input variables are:

  • time since project creation
  • time since last update
  • number of committers
  • number of organizations among the top committers
  • number of commits per week the last year
  • number of releases the last year
  • number of closed issues the last 90 days
  • number of updated issues the last 90 days
  • average number of comments per issue the last 90 days
  • number of project mentions in the commit messages

The best way to figure out exactly how to calculate the score based on these variables is to check out their GitHub page.
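
For a rough feel of how these variables combine, here is a small JavaScript sketch of the kind of aggregation their documentation describes: each signal is log-scaled, capped by a threshold, weighted, and averaged. The exact weights and thresholds live in their repository, so the numbers below are purely hypothetical.

function criticalityScore(signals) {
  // signals: array of {value, weight, threshold}
  const weightSum = signals.reduce((sum, s) => sum + s.weight, 0);
  const total = signals.reduce((sum, s) =>
    sum + s.weight * Math.log(1 + s.value) /
          Math.log(1 + Math.max(s.value, s.threshold)), 0);
  return total / weightSum; // 0 = least critical, 1 = most critical
}

// Hypothetical inputs: an old project with steady activity scores fairly high.
console.log(criticalityScore([
  { value: 150, weight: 1, threshold: 120 },  // months since project creation
  { value: 30,  weight: 2, threshold: 1000 }, // commits per week over the last year
  { value: 500, weight: 3, threshold: 5000 }, // closed issues in the last 90 days
]));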

The top-10 C based projects

The project has run the numbers on projects hosted on GitHub (which admittedly seriously limits the results) and they host these generated lists of the 200 most critical projects written in various languages.

Checking out the top list for C based projects, we can see the top 10 projects with the highest criticality scores being:

  1. git
  2. Linux (Raspberry Pi)
  3. Linux (Torvalds version)
  4. PHP
  5. OpenSSL
  6. systemd
  7. curl
  8. u-boot
  9. qemu
  10. mbed-os

What now then?

After having created the scoring system and generated lists, step 3 is said to be “Use this data to proactively improve the security posture of these critical projects.”

Now I think we have a pretty strong effort on security already in curl and Google helped us strengthen it even more recently, but I figure we can never have too much help or focus on improving our project.

Credits

Image by Thaliesin from Pixabay

https://daniel.haxx.se/blog/2020/12/13/the-critical-curl/


Mozilla Performance Blog: Performance Sheriff Newsletter (November 2020)

Friday, December 11, 2020, 21:33

Cameron Kaiser: Unexpected FPR30 changes because 2020

Friday, December 11, 2020, 08:36
Well, there were more casualties from the Great Floodgap Power Supply Kablooey of 2020, and one notable additional casualty, because 2020, natch, was my trusty former daily driver Quad G5. Yes, this is the machine that among other tasks builds TenFourFox. The issue is recoverable and I think I can get everything back in order, but due to work and the extent of what appears to have gone wrong, it won't happen before the planned FPR30 release date on December 15 (taking into account that it takes about 30 hours to run a complete build cycle).

If you've read this blog for any length of time, you know how much I like to be punctual with releases to parallel mainstream Firefox. However, there have been no reported problems from the beta and there are no major security issues that must be patched immediately, so there's a simple workaround: on Monday night Pacific time the beta will simply become the release. If you're already using the beta, then just keep on using it. Since I was already intending to do a security-only release after FPR30 and I wasn't planning to issue a beta for it anyway, anything left over from FPR30 will get rolled into FPR30 SPR1 and this will give me enough cushion to get the G5 back in working order (or at least dust off the spare) for that release on or about January 26. I'm sure all of you will get over it by then. :)

http://tenfourfox.blogspot.com/2020/12/unexpected-esr30-changes-because-2020.html


Wladimir Palant: How anti-fingerprinting extensions tend to make fingerprinting easier

Thursday, December 10, 2020, 16:57

Do you have a privacy protection extension installed in your browser? There are so many around, and every security vendor is promoting their own. Typically, these will provide a feature called “anti-fingerprinting” or “fingerprint protection” which is supposed to make you less identifiable on the web. What you won’t notice: this feature is almost universally flawed, potentially allowing even better fingerprinting.

[Image: a pig disguised as a bird but still clearly recognizable. Image credits: OpenClipart]

I’ve seen a number of extensions misimplement this functionality, yet I rarely bother to write a report. The effort to fully explain the problem is considerable. On the other hand, it is obvious that for most vendors privacy protection is merely a check that they can put on their feature list. Quality does not matter because no user will be able to tell whether their solution actually worked. With minimal resources available, my issue report is unlikely to cause a meaningful action.

That’s why I decided to explain the issues in a blog post: a typical extension will have at least three of the four flaws described below. Next time I run across a browser extension suffering from all the same flaws I can send them a link to this post. And maybe some vendors will resolve the issues then. Or, even better, not make these mistakes in the first place.

How fingerprinting works

When you browse the web, you aren’t merely interacting with the website you are visiting but also with numerous third parties. Many of these have a huge interest in recognizing you reliably across different websites; advertisers, for example, want to “personalize” your ads. The traditional approach is storing a cookie in your browser which contains your unique identifier. However, modern browsers have a highly recommendable setting to clear cookies at the end of the browsing session, and there is private browsing mode where no cookies are stored permanently. Further technical restrictions for third-party cookies are expected to be implemented soon, and EU data protection rules also make storing cookies complicated to say the least.

So cookies are becoming increasingly unreliable. Fingerprinting is supposed to solve this issue by recognizing individual users without storing any data on their end. The idea is to look at data about the user’s system that browsers make available anyway, for example display resolution. It doesn’t matter what the data is, as long as it is:

  • sufficiently stable, ideally stay unchanged for months
  • unique to a sufficiently small group of people

Note that no data point needs to identify a single person by itself. If each of them refers to a different group of people, then with enough data points the intersection of all these groups will always be a single person.
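
To make the intersection idea concrete, here is a small sketch of how a fingerprinting script might combine several low-entropy values into one identifier (an illustration, not code from any actual tracker):

// None of these values is unique on its own; combined, they narrow the group quickly.
const dataPoints = [
  navigator.userAgent,
  `${screen.width}x${screen.height}`,
  new Date().getTimezoneOffset(),
  navigator.language,
].join("|");

// A real script would add many more signals (canvas, fonts, audio and so on).
crypto.subtle.digest("SHA-256", new TextEncoder().encode(dataPoints))
  .then(buffer => console.log(Array.from(new Uint8Array(buffer))
    .map(b => b.toString(16).padStart(2, "0")).join("")));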

How anti-fingerprinting is supposed to work

The goal of anti-fingerprinting is reducing the amount and quality of data that can be used for fingerprinting. For example, CSS used to allow recognizing websites that the user visited before – a design flaw that could be used for fingerprinting among other things. It took quite some time and effort, but eventually the browsers found a fix that wouldn’t break the web. Today this data point is no longer available to websites.

Other data points remain but have been defused considerably. For example, browsers provide websites with a user agent string so that these know e.g. which browser brand and version they are dealing with. Applications installed by the users used to extend this user agent string with their own identifiers. Eventually, browser vendors recognized how this could be misused for fingerprinting and decided to remove any third-party additions. Much of the other information originally available here has been removed as well, so that today any user agent string is usually common to a large group of people.

Barking up the wrong tree

Browser vendors have already invested a considerable amount of work into anti-fingerprinting. However, they usually limited themselves to measures which wouldn’t break existing websites. And while things like display resolution (unlike window size) aren’t considered by too many websites, these were apparently numerous enough that browsers still give them the user’s display resolution and the available space (typically the display resolution without the taskbar).

Privacy protection extensions on the other hand aren’t showing as much concern. So they will typically do something like:

screen.width = 1280;
screen.height = 1024;

There you go, the website will now see the same display resolution for everybody, right? Well, that’s unless the website does this:

delete screen.width;
delete screen.height;

And suddenly screen.width and screen.height are restored to their original values. Fingerprinting can now use two data points instead of one: not merely the real display resolution but also the fake one. Even if that fake display resolution were extremely common, it would still make the fingerprint slightly more precise.

Is this magic? No, just how JavaScript prototypes work. See, these properties are not defined on the screen object itself, they are part of the object’s prototype. So that privacy extension added an override for the prototype’s properties. With the override removed, the original properties became visible again.

So is this the correct way to do it?

Object.defineProperty(Screen.prototype, "width", {value: 1280});
Object.defineProperty(Screen.prototype, "height", {value: 1024});

Much better. The website can no longer retrieve the original value easily. However, it can detect that the value has been manipulated by calling Object.getOwnPropertyDescriptor(Screen.prototype, "width"). Normally the resulting property descriptor would contain a getter; this one has a static value instead. And the fact that a privacy extension is messing with the values is again a usable data point.
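
For illustration, a site-side check along these lines would already be enough (a sketch, not taken from any particular fingerprinting script):

// A native accessor property exposes a getter; a spoofed {value: 1280} does not.
const descriptor = Object.getOwnPropertyDescriptor(Screen.prototype, "width");
if (descriptor && typeof descriptor.get !== "function")
  console.log("width has been overridden with a static value");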

Let’s try it without changing the property descriptor:

Object.defineProperty(Screen.prototype, "width", {get: () => 1280});
Object.defineProperty(Screen.prototype, "height", {get: () => 1024});

Almost there. But now the website can call Object.getOwnPropertyDescriptor(Screen.prototype, "width").get.toString() to see the source code of our getter. Again a data point which could be used for fingerprinting. The source code needs to be hidden:

Object.defineProperty(Screen.prototype, "width", {get: (() => 1280).bind(null)});
Object.defineProperty(Screen.prototype, "height", {get: (() => 1024).bind(null)});

This bind() call makes sure the getter looks like a native function. Exactly what we needed.
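
You can see the effect of bind() with a quick check in the console:

// Without bind(), the override leaks its own source code:
console.log((() => 1280).toString());            // "() => 1280"
// After bind(), it reads like a built-in accessor, roughly:
console.log((() => 1280).bind(null).toString()); // "function () { [native code] }"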

Catching all those pesky frames

There is a complication here: a website doesn’t merely have one JavaScript execution context; it has one for each frame. So you have to make sure your content script runs in all these frames. And so browser extensions will typically specify "all_frames": true in their manifest. And that’s correct. But then the website does something like this:

var frame = document.createElement("iframe");
document.body.appendChild(frame);
console.log(screen.width, frame.contentWindow.screen.width);

Why is this newly created frame still reporting the original display width? We are back at square one: the website again has two data points to work with instead of one.

The problem here: if the frame location isn’t set, the default is to load the special page about:blank. When Chrome developers originally created their extension APIs, they didn’t give extensions any way to run content scripts here. Luckily, this loophole has been closed by now, but the extension manifest has to set "match_about_blank": true as well.

Timing woes

As anti-fingerprinting functionality in browser extensions is rather invasive, it is prone to breaking websites. So it is important to let users disable this functionality on specific websites. This is why you will often see code like this in extension content scripts:

chrome.runtime.sendMessage("AmIEnabled", function(enabled)
{
  if (enabled)
    init();
});

So rather than initializing all the anti-fingerprinting measures immediately, this content script first waits for the extension’s background page to tell it whether it is actually supposed to do anything. This gives the website the necessary time to store all the relevant values before these are changed. It could even come back later and check out the modified values as well – once again, two data points are better than one.

This is an important limitation of Chrome’s extension architecture which is sadly shared by all browsers today. It is possible to run a content script before any webpage scripts can run ("run_at": "document_start"). This will only be a static script however, not knowing any of the extension state. And requesting extension state takes time.

This might eventually get solved by dynamic content script support, a request originally created ten years ago. In the meantime however, it seems that the only viable solution is to initialize anti-fingerprinting immediately. If the extension later says “no, you are disabled” – well, then the content script will just have to undo all manipulations. But this approach makes sure that in the common scenario (functionality is enabled) websites won’t see two variants of the same data.
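
A minimal sketch of that approach, assuming a Chrome-style content script running at document_start (the fake value and the message name are placeholders):

// Apply the override immediately, before any page script can run.
const originalWidth = Object.getOwnPropertyDescriptor(Screen.prototype, "width");
Object.defineProperty(Screen.prototype, "width", {
  get: (() => 1920).bind(null),
  configurable: true
});

// Ask the background page later; roll back only if the user disabled us here.
chrome.runtime.sendMessage("AmIEnabled", function(enabled)
{
  if (!enabled && originalWidth)
    Object.defineProperty(Screen.prototype, "width", originalWidth);
});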

The art of faking

Let’s say that all the technical issues are solved. The mechanism for installing fake values works flawlessly. This still leaves a question: how does one choose the “right” fake value?

How about choosing a random value? My display resolution is 1661x3351, now fingerprint that! As funny as this is, fingerprinting doesn’t rely on data that makes sense. All it needs is data that is stable and sufficiently unique. And that display resolution is certainly extremely unique. Now one could come up with schemes to change this value regularly, but fact is: making users stand out isn’t the right way.

What you’d rather want is finding the largest group out there and joining it. My display resolution is 1920x1080 – just the common Full HD, nothing to see here! Want to know my available display space? I have my Windows taskbar at the bottom, just like everyone else. No, I didn’t resize it either. I’m just your average Joe.

The only trouble with this approach: the values have to be re-evaluated regularly. Two decades ago, 1024x768 was the most common display resolution and a good choice for anti-fingerprinting. Today, someone claiming to have this screen size would certainly stick out. Similarly, in my website logs visitors claiming to use Firefox 48 are noticeable: it might have been a common browser version some years ago, but today it’s usually bots merely pretending to be website visitors.

https://palant.info/2020/12/10/how-anti-fingerprinting-extensions-tend-to-make-fingerprinting-easier/


Mike Taylor: Differences in cookie length (size?) restrictions

Thursday, December 10, 2020, 09:00

I was digging through some of the old http-state tests (which got ported into web-platform-tests, and which I’m rewriting to be more modern and, mostly work?) and noticed an interesting difference between Chrome and Firefox in disabled-chromium0020-test (no idea why it’s called disabled and not, in fact, disabled).

That test looks something like:

Set-Cookie: aaaaaaaaaaaaa....(repeating a's for seemingly forever)

But first, a little background on expected behavior so you can begin to care.

rfc6265 talks about cookie size limits like so:

At least 4096 bytes per cookie (as measured by the sum of the length of the cookie’s name, value, and attributes).

(It’s actually trying to say at most, which confuses me, but a lot of things confuse me on the daily.)

So in my re-written version of disabled-chromium0020-test I’ve got (just assume a function that consumes this object and does something useful):

{
  // 7 + 4089 = 4096
  cookie: `test=11${"a".repeat(4089)}`,
  expected: `test=11${"a".repeat(4089)}`,
  name: "Set cookie with large value ( = 4kb)",
},

Firefox and Chrome are happy to set that cookie. Fantastic. So naturally we want to test a cookie with 4097 bytes and make sure the cookie gets ignored:

// 7 + 4091 = 4098
{
  cookie: `test=12${"a".repeat(4091)}`,
  expected: "",
  name: "Ignore cookie with large value ( > 4kb)",
},

If you’re paying attention, and good at like, reading and math, you’ll notice that 4096 + 1 is not 4098. A+ work.

What I discovered, much in the same way that Columbus discovered Texas, is that a “cookie string” that is 4097 bytes long currently has different behaviors in Firefox and Chrome (and probably most browsers, TBQH). Firefox (sort of correctly, according to the current spec language, if you ignore attributes) will only consider the name length + the value length, while Chrome will consider the entire cookie string including name, =, value, and all the attributes when enforcing the limit.
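
A quick way to see the difference is a boundary case where the name and value fit within 4096 bytes but the attributes push the full cookie string over the limit (a hypothetical console check, not one of the web-platform-tests):

// name ("test") + value adds up to 4095 bytes, so Firefox's name+value check passes,
// but the attributes push the whole cookie string past 4096, so Chrome rejects it.
const nameAndValue = `test=14${"a".repeat(4089)}`;        // 4096-byte "name=value" part
document.cookie = `${nameAndValue}; path=/; samesite=lax`;
console.log(document.cookie.includes("test=14"));         // true in Firefox, false in Chrome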

I’m going to include the current implementations here, because it makes me look smart (and I’m trying to juice SEO):

Gecko (which sets kMaxBytesPerCookie to 4096):

bool CookieCommons::CheckNameAndValueSize(const CookieStruct& aCookieData) {
  // reject cookie if it's over the size limit, per RFC2109
  return (aCookieData.name().Length() + aCookieData.value().Length()) <=
         kMaxBytesPerCookie;
}

Chromium (which sets kMaxCookieSize to 4096):

ParsedCookie::ParsedCookie(const std::string& cookie_line) {
  if (cookie_line.size() > kMaxCookieSize) {
    DVLOG(1) << "Not parsing cookie, too large: " << cookie_line.size();
    return;
  }

  ParseTokenValuePairs(cookie_line);
  if (!pairs_.empty())
    SetupAttributes();
}

Neither are really following what the spec says, so until the spec tightens that down (it’s currently only a SHOULD-level requirement which is how you say pretty-please in a game of telephone from browser engineers to computers), I bumped the test up to 4098 bytes so both browsers consistently ignore the cookie.

(I spent about 30 seconds testing in Safari, and it seems to match Firefox at least for the 4097 “cookie string” limit, but who knows what it does with attributes.)

https://miketaylr.com/posts/2020/12/differences-in-cookie-length-restrictions.html


Cameron Kaiser: Floodgap downtime fixed

Thursday, December 10, 2020, 01:32
I assume some of you will have noticed that Floodgap was down for a couple of days -- though I wouldn't know, since it wasn't receiving E-mail during the downtime. Being 2020, the problem turned out to be a cavalcade of simultaneous major failures including the complete loss of the main network backbone's power supply. Thus is the occasional "joy" of running a home server room. It is now on a backup rerouted supply while new parts are ordered, and all services including TenFourFox and gopher.floodgap.com should be back up and running. Note that there will be some reduced redundancy until I can effect definitive repairs, but most users shouldn't be affected.

http://tenfourfox.blogspot.com/2020/12/floodgap-downtime-fixed.html


Mozilla Open Policy & Advocacy Blog: Mozilla teams up with Twitter, Automattic, and Vimeo to provide recommendations on EU content responsibility

Wednesday, December 9, 2020, 11:30

The European Commission will soon unveil its landmark Digital Services Act draft law, which will set out a vision for the future of online content responsibility in the EU. We’ve joined up with Twitter, Automattic, and Vimeo to provide recommendations on how the EU’s novel proposals can ensure a more thoughtful approach to addressing illegal and harmful content in the EU, in a way that tackles online harms while safeguarding smaller companies’ ability to compete.

As we note in our letter,

“The present conversation is too often framed through the prism of content removal alone, where success is judged solely in terms of ever-more content removal in ever-shorter periods of time.

Without question, illegal content – including terrorist content and child sexual abuse material – must be removed expeditiously. Yet by limiting policy options to a solely stay up-come down binary, we forgo promising alternatives that could better address the spread and impact of problematic content while safeguarding rights and the potential for smaller companies to compete.

Indeed, removing content cannot be the sole paradigm of Internet policy, particularly when concerned with the phenomenon of ‘legal-but-harmful’ content. Such an approach would benefit only the very largest companies in our industry.

We therefore encourage a content moderation discussion that emphasises the difference between illegal and harmful content and highlights the potential of interventions that address how content is surfaced and discovered. Included in this is how consumers are offered real choice in the curation of their online environment.”

We look forward to working with lawmakers in the EU to help bring this vision for a healthier internet to fruition in the upcoming Digital Services Act deliberations.

You can read the full letter to EU lawmakers here and more background on our engagement with the EU DSA here.

The post Mozilla teams up with Twitter, Automattic, and Vimeo to provide recommendations on EU content responsibility appeared first on Open Policy & Advocacy.

https://blog.mozilla.org/netpolicy/2020/12/09/mozilla-teams-up-with-twitter-automattic-and-vimeo-to-provide-recommendations-on-eu-content-responsibility/


Daniel Stenberg: curl 7.74.0 with HSTS

Wednesday, December 9, 2020, 09:51

Welcome to another curl release, 56 days since the previous one.

Release presentation

Numbers

  • the 196th release
  • 1 change
  • 56 days (total: 8,301)
  • 107 bug fixes (total: 6,569)
  • 167 commits (total: 26,484)
  • 0 new public libcurl functions (total: 85)
  • 6 new curl_easy_setopt() options (total: 284)
  • 1 new curl command line option (total: 235)
  • 46 contributors, 22 new (total: 2,292)
  • 22 authors, 8 new (total: 843)
  • 3 security fixes (total: 98)
  • 1,600 USD paid in Bug Bounties (total: 4,400 USD)

Security

This time around we have no less than three vulnerabilities fixed, and as shown above we’ve paid 1,600 USD in reward money, of which the reporter of the CVE-2020-8286 issue received a new record amount of 900 USD. The second one didn’t get any reward simply because it was not claimed. With this single release we doubled the number of vulnerabilities we’ve published this year!

The six CVEs announced during 2020 still mean this has been a better year than each of the six previous years (2014-2019); we have to go all the way back to 2013 to find a year with fewer CVEs reported.

I’m very happy and proud that we, as a small independent open source project, can reward skilled security researchers like this. Many thanks, of course, to our generous sponsors.

CVE-2020-8284: trusting FTP PASV responses

When curl performs a passive FTP transfer, it first tries the EPSV command and if that is not supported, it falls back to using PASV. Passive mode is what curl uses by default.

A server response to a PASV command includes the (IPv4) address and port number for the client to connect back to in order to perform the actual data transfer.

This is how the FTP protocol is designed to work.

A malicious server can use the PASV response to trick curl into connecting back to a given IP address and port, and this way potentially make curl extract information about services that are otherwise private and not disclosed, for example doing port scanning and service banner extractions.

If curl operates on a URL provided by a user (which by all means is an unwise setup), a user can exploit that and pass in a URL to a malicious FTP server instance without needing any server breach to perform the attack.

There’s no really good solution or fix to this, as this is how FTP works, but starting in curl 7.74.0, curl will default to ignoring the IP address in the PASV response and instead just use the address it already uses for the control connection. In other words, we will enable the CURLOPT_FTP_SKIP_PASV_IP option by default! This will cause problems for some rare use cases (which then have to disable this), but we still think it’s worth doing.
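
For applications, the switch is a single setopt away. Here is a minimal sketch (the FTP URL is a placeholder); as of 7.74.0 the behaviour shown with 1L is already the default, so the explicit call is only needed to turn it back off (0L) for the rare setups mentioned above, or to opt in on an older libcurl:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/file.txt");
    /* 1L: ignore the IP address in the PASV response and reuse the
       control connection's address (the new 7.74.0 default).
       0L: trust the server-provided address instead. */
    curl_easy_setopt(curl, CURLOPT_FTP_SKIP_PASV_IP, 1L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}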

CVE-2020-8285: FTP wildcard stack overflow

libcurl offers a wildcard matching functionality, which allows a callback (set with CURLOPT_CHUNK_BGN_FUNCTION) to return information back to libcurl on how to handle a specific entry in a directory when libcurl iterates over a list of all available entries.

When this callback returns CURL_CHUNK_BGN_FUNC_SKIP, to tell libcurl to not deal with that file, the internal function in libcurl then calls itself recursively to handle the next directory entry.

If there is a sufficient number of file entries and the callback returns “skip” enough times, libcurl runs out of stack space. The exact number will of course vary with platforms, compilers and other environmental factors.

The content of the remote directory is not kept on the stack, so it seems hard for the attacker to control exactly what data overwrites the stack. However, it remains a denial-of-service vector, as a malicious user who controls a server that a libcurl-using application works with under these premises can trigger a crash.
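
For context, this is roughly what the wildcard-matching setup looks like in an application; a minimal sketch with a placeholder FTP URL and an arbitrary filename filter. A callback like this one, returning “skip” for most entries, is exactly the pattern that used to grow the stack before the fix:

#include <string.h>
#include <curl/curl.h>

/* Called once per directory entry when wildcard matching is enabled.
   Returning CURL_CHUNK_BGN_FUNC_SKIP tells libcurl to ignore the entry. */
static long chunk_bgn(const void *transfer_info, void *ptr, int remains)
{
  const struct curl_fileinfo *info = transfer_info;
  (void)ptr;
  (void)remains;
  /* only fetch .log files, skip everything else */
  if(info->filename && strstr(info->filename, ".log"))
    return CURL_CHUNK_BGN_FUNC_OK;
  return CURL_CHUNK_BGN_FUNC_SKIP;
}

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://ftp.example.com/logs/*");
    curl_easy_setopt(curl, CURLOPT_WILDCARDMATCH, 1L);
    curl_easy_setopt(curl, CURLOPT_CHUNK_BGN_FUNCTION, chunk_bgn);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}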

CVE-2020-8286: Inferior OCSP verification

libcurl offers “OCSP stapling” via the CURLOPT_SSL_VERIFYSTATUS option. When set, libcurl verifies the OCSP response that a server responds with as part of the TLS handshake. It then aborts the TLS negotiation if something is wrong with the response. The same feature can be enabled with --cert-status using the curl tool.

As part of the OCSP response verification, a client should verify that the response is indeed set out for the correct certificate. This step was not performed by libcurl when built or told to use OpenSSL as TLS backend.

This flaw would allow an attacker, who perhaps could have breached a TLS server, to provide a fraudulent OCSP response that would appear fine instead of the real one, for example when the original certificate has actually been revoked.
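
Enabling the check from an application is a one-liner; a minimal sketch with a placeholder URL (the command line equivalent is curl --cert-status):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode res;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* Verify the stapled OCSP response during the TLS handshake and
       abort the transfer if it is missing or invalid. */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);
    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
  }
  return 0;
}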

Change

There’s really only one “change” this time, and it is an experimental one, which means you need to enable it explicitly in the build to try it out. We discourage people from using this in production until we no longer consider it experimental, but we will of course appreciate feedback on it and help to perfect it.

The change in this release introduces no less than 6 new easy setopts for the library and one command line option: support for HTTP Strict-Transport-Security, also known as HSTS. This is a system for HTTPS hosts to tell clients that they should never attempt to contact them over insecure methods (i.e. clear-text HTTP).

One entry-point to the libcurl options for HSTS is the CURLOPT_HSTS_CTRL man page.
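
As a rough sketch of what using the new options can look like (the cache file path and the URL are placeholders; see the CURLOPT_HSTS_CTRL and related man pages for the authoritative details):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* Turn on the HSTS cache and back it with a file so that
       "this host is HTTPS-only" decisions survive between runs. */
    curl_easy_setopt(curl, CURLOPT_HSTS_CTRL, (long)CURLHSTS_ENABLE);
    curl_easy_setopt(curl, CURLOPT_HSTS, "/tmp/curl-hsts.txt");
    /* A plain http:// request to a host already known from the cache
       gets upgraded to https:// before connecting. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}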

Bug-fixes

Yet another release with over one hundred bug-fixes accounted for. I’ve selected a few interesting ones that I decided to highlight below.

enable alt-svc in the build by default

We landed the code and support for alt-svc: headers in early 2019, marked as “experimental”. We feel the time has come for this little baby to grow up and step out into the real world, so we removed the labeling and made sure the support is enabled by default in builds (you can still disable it if you want).
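
For applications, the equivalent of the tool’s --alt-svc option looks roughly like this; a minimal sketch where the cache path and URL are placeholders (an H3 bit also exists for HTTP/3-capable builds):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* Persist alt-svc: mappings so a later transfer can go straight to
       the advertised alternative service. */
    curl_easy_setopt(curl, CURLOPT_ALTSVC, "/tmp/curl-altsvc.txt");
    curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL,
                     (long)(CURLALTSVC_H1 | CURLALTSVC_H2));
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}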

8 cmake fixes bring cmake closer to autotools level

In curl 7.73.0 we removed the “scary warning” from the cmake build that warned users that the cmake build setup might be inferior. The goal was to get more people to use it, and then by extension help fix it. The trick might have worked and we’ve gotten several improvements to the cmake build in this cycle. Moreover, we’ve gotten a whole slew of new bug reports on it as well, so now we have a list of known cmake issues in the KNOWN_BUGS document, ready for interested contributors to dig into!

configure now uses pkg-config to find OpenSSL when cross-compiling

Just one of those tiny weird things. At some point in the past someone had trouble building OpenSSL cross-compiled when pkg-config was used, so it got disabled. I don’t recall the details. This time someone had the reverse problem, so the configure script was fixed again to properly use pkg-config even when cross-compiling…

curl.se is the new home

You know it.

curl: only warn not fail, if not finding the home dir

The curl tool attempts to find the home directory of the user who invokes the command, in order to look for some files there, for example the .curlrc file. More importantly, when using SSH-related protocols it is important to find the file ~/.ssh/known_hosts. So important that the tool would abort if it was not found. Still, a command line can work without it in various circumstances, in particular if -k is used, so bailing out like that was simply wrong…

curl_easy_escape: limit output string length to 3 * max input

In general, libcurl enforces an internal string length limit that prevents any string from growing larger than 8 MB. This is done to prevent mistakes or abuse. Due to a mistake, the string length limit was enforced wrongly in the curl_easy_escape function, which could make the effective limit a third of the intended size: 2.67 MB.
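
The function itself is straightforward to use; a minimal sketch (the input string is arbitrary):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    /* Each input byte can expand to at most three output bytes ("%XX"),
       which is where the 3 * input-length bound comes from. */
    char *encoded = curl_easy_escape(curl, "name=hello world&x=1", 0);
    if(encoded) {
      printf("%s\n", encoded);   /* name%3Dhello%20world%26x%3D1 */
      curl_free(encoded);
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}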

only set USE_RESOLVE_ON_IPS for Apple’s native resolver use

This define is set internally when the resolver function is used even when a plain IP address is given. On macOS for example, the resolver functions are used to do some conversions and thus this is necessary, while for other resolver libraries we avoid the resolver call when we can convert the IP number to binary internally more effectively.

By a mistake we had enabled this “call getaddrinfo() anyway”-logic even when curl was built to use c-ares on macOS.

fix memory leaks in GnuTLS backend

We used two functions to extract information from the server certificate that didn’t properly free the memory after use. We’ve filed subsequent bug reports in the GnuTLS project asking them to make the required steps much clearer in their documentation so that perhaps other projects can avoid the same mistake going forward.

libssh2: fix transport over HTTPS proxy

SFTP file transfers didn’t work correctly since previous fixes obviously weren’t thorough enough. This fix has been confirmed fine in use.

make curl --retry work for HTTP 408 responses too

Again. We made the --retry logic work for 408 once before, but for some inexplicable reason the support for that was accidentally dropped when we introduced parallel transfer support in curl. Regression fixed!

use OPENSSL_init_ssl() with >= 1.1.0

Initializing the OpenSSL library the correct way is a task that sounds easy but has always been a source of problems and misunderstandings, and it has never been properly documented. It is a long and boring story that has been going on for a very long time. This time, we add yet another chapter to this novel: we start using this function call when OpenSSL 1.1.0 or later (or BoringSSL) is used in the build. Hopefully, this is one of the last chapters in this book.
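
For the curious, this is roughly what that call looks like from an application’s point of view; a sketch of plain OpenSSL 1.1.0+ usage, not curl’s actual internal code:

#include <openssl/ssl.h>

int main(void)
{
  /* With OpenSSL 1.1.0+ this one call replaces the older
     SSL_library_init()/SSL_load_error_strings() sequence; passing 0 and
     NULL accepts the defaults, and the library deinitializes itself
     automatically at process exit. */
  if(OPENSSL_init_ssl(0, NULL) != 1)
    return 1;

  /* ... create SSL_CTX objects, perform TLS handshakes, etc. ... */
  return 0;
}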

“scheme-less URLs” no longer accept blank port number

curl operates on “URLs”, but as a special shortcut it also supports URLs without the scheme, for example just a plain host name. Such input isn’t an actual URL or URI by any standard; curl was made to handle such input to mimic how browsers work. curl “guesses” what scheme the given name is meant to have, and for most names it will go with HTTP.

Further, a URL can provide a specific port number using a colon and a port number following the host name, like “hostname:80” and the path then follows the port number: “hostname:80/path“. To complicate matters, the port number can be blank, and the path can start with more than one slash: “hostname://path“.

curl’s logic that determines if a given input string has a scheme present checks the first 40 bytes of the string for a :// sequence and if that is deemed not present, curl determines that this is a scheme-less host name.

This means [39-letter string]:// as input is treated as a URL with a scheme and a scheme that curl doesn’t know about and therefore is rejected as an input, while [40-letter string]:// is considered a host name with a blank port number field and a path that starts with double slash!

In 7.74.0 we remove that potentially confusing difference. If the URL is determined to not have a scheme, it will not be accepted if it also has a blank port number!
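
To make the scheme-guessing part concrete, here is a small sketch using libcurl’s URL API with the CURLU_GUESS_SCHEME flag (the input string is just an example); it only illustrates the guessing logic described above, not the new blank-port rejection itself:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *h = curl_url();
  if(h) {
    char *scheme = NULL;
    /* CURLU_GUESS_SCHEME mirrors the tool's behaviour for scheme-less
       input: a plain host name is guessed to be HTTP. */
    if(!curl_url_set(h, CURLUPART_URL, "example.com/path",
                     CURLU_GUESS_SCHEME)) {
      if(!curl_url_get(h, CURLUPART_SCHEME, &scheme, 0)) {
        printf("guessed scheme: %s\n", scheme);
        curl_free(scheme);
      }
    }
    curl_url_cleanup(h);
  }
  return 0;
}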

https://daniel.haxx.se/blog/2020/12/09/curl-7-74-0-with-hsts/


Firefox Nightly: These Weeks in Firefox: Issue 84

Wednesday, December 9, 2020, 00:46

The Mozilla Blog: Why getting voting right is hard, Part I: Introduction and Requirements

Tuesday, December 8, 2020, 21:24

Hacks.Mozilla.Org: An update on MDN Web Docs’ localization strategy

Tuesday, December 8, 2020, 19:20

In our previous post — MDN Web Docs evolves! Lowdown on the upcoming new platform — we talked about many aspects of the new MDN Web Docs platform that we’re launching on December 14th. In this post, we’ll look at one aspect in more detail — how we are handling localization going forward. We’ll talk about how our thinking has changed since our previous post, and detail our updated course of action.

Updated course of action

Based on thoughtful feedback from the community, we did some additional investigation and determined a stronger, clearer path forward.

First of all, we want to keep a clear focus on work leading up to the launch of our new platform, and making sure the overall system works smoothly. This means that upon launch, we still plan to display translations in all existing locales, but they will all initially be frozen — read-only, not editable.

We were considering automated translations as the main way forward. One key issue was that automated translations into European languages are seen as an acceptable solution, but automated translations into CJK languages are far from ideal: those languages have a very different structure from English and European languages. In addition, many Europeans are able to read English well enough to fall back on English documentation when required, whereas some CJK communities do not commonly read English and so do not have that luxury.

Many folks we talked to said that automated translations wouldn’t be acceptable in their languages. Not only would they be substandard, but a lot of MDN Web Docs communities center around translating documents. If manual translations went away, those vibrant and highly involved communities would probably go away — something we certainly want to avoid!

We are therefore focusing on limited manual translations as our main way forward instead, looking to unfreeze a number of key locales as soon as possible after the new platform launch.

Limited manual translations

Rigorous testing has been done, and it looks like building translated content as part of the main build process is doable. We are separating locales into two tiers in order to determine which will be unfrozen and which will remain locked.

  • Tier 1 locales will be unfrozen and manually editable via pull requests. These locales are required to have at least one representative who will act as a community lead. The community members will be responsible for monitoring the localized pages, updating translations of key content once the English versions are changed, reviewing edits, etc. The community lead will additionally be in charge of making decisions related to that locale, and acting as a point of contact between the community and the MDN staff team.
  • Tier 2 locales will be frozen, and not accept pull requests, because they have no community to maintain them.

The Tier 1 locales we are starting with unfreezing are:

  • Simplified Chinese (zh-CN)
  • Traditional Chinese (zh-TW)
  • French (fr)
  • Japanese (ja)

If you wish for a Tier 2 locale to be unfrozen, then you need to come to us with a proposal, including evidence of an active team willing to be responsible for the work associated with that locale. If this is the case, then we can promote the locale to Tier 1, and you can start work.

We will monitor the activity on the Tier 1 locales. If a Tier 1 locale is not being maintained by its community, we shall demote it to Tier 2 after a certain period of time, and it will become frozen again.

We are looking at this new system as a reasonable compromise: it provides a path for you, the community, to continue work on MDN translations where the interest is there, while also ensuring that locale maintenance is viable and content won’t get any further out of date. With most locales unmaintained, changes weren’t being reviewed effectively, and readers of those locales were often confused about whether to use their preferred locale or English, with their experience suffering as a result.

Review process

The review process will be quite simple.

  • The content for each Tier 1 locale will be kept in its own separate repo.
  • When a PR is made against that repo, the localization community will be pinged for a review.
  • When the content has been reviewed, an MDN admin will be pinged to merge the change. We should be able to set up the system so that this happens automatically.
  • There will also be some user-submitted content bugs filed at https://github.com/mdn/sprints/issues, as well as on the issue trackers for each locale repo. When triaged, the “sprints” issues will be assigned to the relevant localization team to fix, but the relevant localization team is responsible for triaging and resolving issues filed on their own repo.

Machine translations alongside manual translations

We previously talked about the potential involvement of machine translations to enhance the new localization process. We still have this in mind, but we are looking to keep the initial system simple, in order to make it achievable. The next step in Q1 2021 will be to start looking into how we could most effectively make use of machine translations. We’ll give you another update in mid-Q1, once we’ve made more progress.

The post An update on MDN Web Docs’ localization strategy appeared first on Mozilla Hacks - the Web developer blog.

https://hacks.mozilla.org/2020/12/an-update-on-mdn-web-docs-localization-strategy/


Mozilla Attack & Defense: Guest Blog Post: Good First Steps to Find Security Bugs in Fenix (Part 1)

Tuesday, December 8, 2020, 18:17

This blog post is the first of several guest blog posts we’ll be publishing, where we invite participants of our bug bounty program to write about bugs they’ve reported to us.

Fenix is the newly designed Firefox for Android that officially launched in August 2020. In Fenix, many components required to run as an Android app have been rebuilt from scratch, and various new features are being implemented as well. While features are being re-implemented, security bugs that were fixed in the past may be introduced again. If you care about the open web and you want to participate in Mozilla’s Client Bug Bounty Program, Fenix is a good target to start with.

Let’s take a look at two bugs I found in the firefox: scheme that is supported by Fenix.

Bugs Came Again with Deep Links

Fenix provides an interesting custom scheme URL, firefox://open?url=, that can open any specified URL in a new tab. On Android, a deep link is a link that takes you directly to a specific part of an app; the firefox://open deep link is not intended to be called from web content, but access to it was not restricted.

Web content should not be able to link directly to a file:// URL (although a user can type or copy/paste such a link into the address bar). While Firefox on desktop has long implemented this restriction, Fenix did not. I submitted Bug 1656747, which exploited this behavior and navigated to a local file from web content with a hyperlink of this form (the exact file path is elided):

<a href="firefox://open?url=file:///…">Go</a>

But actually, the same bug had affected the older Firefox for Android (unofficially referred to as Fennec) and had been filed three years earlier as Bug 1380950.

Likewise, security researcher Jun Kokatsu reported Bug 1447853, a sandbox bypass in Firefox for iOS. He abused the same type of deep link URL to bypass the popup blocking imposed by an iframe sandbox.

[embedded example: a sandboxed 400×300 iframe loading a data:text/html document that contains such a deep link]

I found this attack scenario in a test file of Firefox for iOS and re-tested it in Fenix. I submitted Bug 1656746, which is the same issue as the one he found.

Conclusion

As you can see, retesting past attack scenarios can be a good starting point. We can find past vulnerabilities in the Mozilla Foundation Security Advisories. By examining the history accumulated over a decade, we can see what is considered a security bug and how such bugs were resolved. These resources will be useful for retesting past bugs as well as for finding attack vectors in newly introduced features.

Have a good bug hunt!

https://blog.mozilla.org/attack-and-defense/2020/12/08/good-first-steps-in-fenix-part-1/


The Mozilla Blog: State of Mozilla 2019-2020: Annual Impact Report

Monday, December 7, 2020, 19:01

2020 has been a year like few others, with the internet’s value and necessity front and center. The State of Mozilla for 2019-2020 makes clear that Mozilla’s mission and role in the world are more important than ever. Dive into the full report via the link at the end of this post.

2019–2020 State of Mozilla

About the State of Mozilla

Mozilla releases the State of Mozilla annually. This impact report outlines how Mozilla’s products, services, advocacy and engagement have influenced technology and society over the past year. The State of Mozilla also includes details on Mozilla’s finances as a way of further demonstrating how Mozilla uses the power of its unique structure and resources to achieve its mission — an internet that is open and accessible to all.

The post State of Mozilla 2019-2020: Annual Impact Report appeared first on The Mozilla Blog.

https://blog.mozilla.org/blog/2020/12/07/state-of-mozilla-2019-annual-report/


