Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/


Source: http://planet.mozilla.org/.
This diary is generated from the public RSS feed at http://planet.mozilla.org/rss20.xml and is updated whenever that feed is updated. It may not match the content of the original page. The syndication was created automatically at the request of readers of this RSS feed.
For any questions about this service, please use the contact information page.


Giorgio Maone: WebExtensions FAQ

Wednesday, August 26, 2015, 02:36

WebExtensions are making some people happy and some people angry, and many people are asking questions.
Some of the answers can be found here; more will come as add-on developers keep discussing this hot topic.
My favourite one: No, your add-ons' ability and your own creativity won't be limited by the new API.

https://hackademix.net/2015/08/26/webextensions-faq/


Michael Kaply: Using Hidden UI in the CCK2

Tuesday, August 25, 2015, 22:56

One of the questions I get asked the most is how to hide certain UI elements of Firefox. I implemented the Hidden UI feature of the CCK2 specifically to address this problem. Using it can be a little daunting, though, so I wanted to take some time to give folks the basics.

The Hidden UI feature relies on CSS selectors. We can use CSS selectors to specify any element in the Firefox user interface, and that element will be hidden. The trick is figuring out the selectors. To accomplish this, my primary tool is the DOM Inspector. With the DOM Inspector, I can look at any element in the Firefox user interface and determine its ID. Once I have its ID, I can usually specify the CSS selector as #ID and hide that element. Let's walk through using the DOM Inspector to figure out the ID of the home button.

  • Install the DOM Inspector
  • Go to Developer Tools and select DOM Inspector
  • From the DOM Inspector Window, select File->Inspect Chrome Document and select the first window
  • In the DOM Inspector Window, click on the Node Finder.
  • Click on the Home button in the Firefox Window.
  • You'll see results in the DOM Inspector that look like this:

  • This gives us something unique we can use - an ID. So #home-button in Hidden UI will hide the home button.

    You can use this method for just about every aspect of the Firefox UI except for menus and the Australis panel. For these items, I turn to the Firefox source code.

    If you want to hide anything on the Australis panel, you can look for IDs here. If you want to hide anything on the Firefox context menu, you can look here. If you want to hide anything in the menu bar, you can look here.

    As a last resort, you can simply hide menuitems based on their text. For instance, if you wanted to hide the Customize menu that appears when you right click on a toolbar, you could specify a selector of menuitem[label^='Customize']. This says "Hide any menu item whose label begins with the word Customize." Don't try to include the ellipsis in your selector, because in most cases it's not three periods (...), it's the Unicode ellipsis (…). (Incidentally, that menu is defined here, along with the rest of the toolbar popup menu. Because it doesn't have an ID, you'll have to use menuitem.viewCustomizeToolbar.)
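
    If you want to double-check a selector before adding it to Hidden UI, the Browser Console is handy (its command line requires devtools.chrome.enabled to be true, and in my experience it evaluates in the scope of the main browser window, so document below is the browser chrome document). A quick illustrative sketch, not part of CCK2 itself:

    // How many chrome elements match the ID-based selector? Expect 1.
    document.querySelectorAll("#home-button").length;

    // Does the attribute selector catch the intended menu item?
    var item = document.querySelector("menuitem[label^='Customize']");
    item ? item.getAttribute("label") : "no match";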

    Hopefully this should get everyone started. If there's something you can't figure out how to hide, let me know. And if you're trying to hide everything, you should probably be looking at a kiosk solution, not the CCK2...

https://mike.kaply.com/2015/08/25/using-hidden-ui-in-the-cck2/


Jet Villegas: Setting up for Android and Firefox OS Development

Tuesday, August 25, 2015, 19:03

This post is a follow-up to an earlier article I wrote about setting up a FirefoxOS development environment.

I’m going to set up a Sony Z3C as the target device for Mobile OS software development. The Sony Z3C (also known as Aries or aosp_d5803) is a nice device for Mobile OS hacking, as it’s an AOSP device with good support for building the OS binaries. I’ve set the phone up for both FirefoxOS and Android OS development, to compare and see what’s common across both environments.

Please note that if you got your Sony Z3C from the Mozilla Foxfooding program, then this article isn’t for you. Those phones are already flashed and automatically updated with specific FirefoxOS builds that Mozilla staff selected for your testing. Please don’t replace those builds unless you’re actively developing for these phones and have a device set aside for that purpose.

My development host is a Mac (OSX 10.10) laptop already set up to build the Firefox for Macintosh product. It’s also set up to build the Firefox OS binaries for the Flame device.

Most of the development environment for the Flame is also used for the Aries device. In particular, the case-sensitive disk partition is required for both FirefoxOS and Android OS development. You’ll want this partition to be at least 100GB in size if you want to build both operating systems. Set this up before downloading FirefoxOS or Android source code to avoid ‘include file not found’ errors.

The next step to developing OS code for the Aries is to root the device. This will void your warranty, so tread carefully.

For most Gecko and Gaia developers, you’ll want to start from the base image for the Aries. The easiest way to flash your device with a known-good FirefoxOS build is to run flash.sh in the expanded aries.zip file from the official builds. You can then flash the phone with just Gecko or Gaia from your local source code.

The Aries binaries from a FirefoxOS build:

aries_firefoxos_images

The Aries binaries in an Android Lollipop build:

aries_android_images

If you want to build Android OS for the Aries, then read these docs from Sony, and these Mac-specific steps for building Android Lollipop. Note that the Android Lollipop SDK requires Xcode 5.1.1 and Java 7 (JRE and JDK). Both versions of Xcode and Java are older than the latest versions available, so you’ll need to install these older versions before building the Android OS.

When it comes time to configure your Android OS build via the lunch command, select aosp_d5803-userdebug as your device. Once the build is finished (after about 2 hours on my Mac), use these commands to flash your phone with the Android OS you just built:

fastboot flash boot out/target/product/aries/boot.img
fastboot flash system out/target/product/aries/system.img
fastboot flash userdata out/target/product/aries/userdata.img

http://junglecode.net/setting-up-for-android-and-firefox-os-development/


Mozilla Thunderbird: Thunderbird and end-to-end email encryption – should this be a priority?

Tuesday, August 25, 2015, 12:27

In the last few weeks, I’ve had several interesting conversations concerning email encryption. I’m also trying to develop some concept of what areas Thunderbird should view as our special emphases as we look forward. The question is, with our limited resources, should we strive to make better support of end-to-end email encryption a vital Thunderbird priority? I’d appreciate comments on that question, either on this Thunderbird blog posting or the email list tb-planning@mozilla.org.

In one conversation, at the “Open Messaging Day” at OSCON 2015, I brought up the issue of whether, in a post-Snowden world, support for end-to-end encryption was important for emerging open messaging protocols such as JMAP. The overwhelming consensus was that this is a non-issue. “Anyone who can access your files using interception technology can more easily just grab your computer from your house. The loss of functionality in encryption (such as online search of your webmail, or loss of email content if certificates are lost) will give an unacceptable user experience to the vast majority of users” was the sense of the majority.

In a second conversation, I was having dinner with a friend who works as a lawyer for a state agency involved in white-collar crime prosecution. This friend also thought the whole Snowden/NSA/metadata thing had been blown out of proportion, but for a very different reason. Paraphrasing my friend’s comments, “Our agency has enormous powers to subpoena all kinds of records – bank statements, emails – and most organizations will silently hand them over to me without you ever knowing about it. We can always get metadata from email accounts and phones, e.g. e-mail addresses of people corresponded with, calls made, dates and times, etc. There is a lot that other government employees (non-NSA) have access to just by asking for it, so some of the outrage about the NSA’s power and specifically the lack of judicial oversight is misplaced and out of proportion, precisely because the public is mostly ignorant about the scope of what is already available to the government.”

So in summary, the problem is much bigger than the average person realizes, and other email vendors don’t care about it.

There are several projects out there trying to make encryption a more realistic option. In order to change internet communications to make end-to-end encryption ubiquitous, any protocol proposal needs wide adoption by key players in the email world, particularly by client apps (as opposed to webmail solutions, where the encryption problem is virtually intractable). As Thunderbird is currently the dominant multi-platform open-source email client, we are sometimes approached by people in the privacy movement to cooperate with them in making email encryption simple and ubiquitous. Most recently, I’ve had some interesting conversations with Volker Birk of Pretty Easy Privacy about working with them.

Should this be a focus for Thunderbird development?

https://blog.mozilla.org/thunderbird/2015/08/thunderbird-and-end-to-end-email-encryption-should-this-be-a-priority/


Byron Jones: happy bmo push day!

Tuesday, August 25, 2015, 09:25

the following changes have been pushed to bugzilla.mozilla.org:

  • [1195362] Quicksearch error pages (“foo is not a field” and friends) should still fill in search into quicksearch box
  • [1190476] set Comment field in GPG email to the URL of the bug
  • [1195645] don’t create a new session for every authenticated REST/BzAPI call
  • [1197084] No mail sent when bugs added to or removed from *-core-security groups
  • [1196614] restrict the ability for users with editusers/creategroups to alter admins and the admin group
  • [1196092] Switch logincookies primary key to auto_incremented id, make cookie a secondary UNIQUE key
  • [1197699] always store the ip address in the logincookies table
  • [1197696] group_members report doesn’t display nested inherited groups
  • [1196134] add ability for admins to force a user to change their password on next login
  • [1192687] add the ability for users to view and revoke existing sessions
  • [1195836] Remove install-module.pl from bmo
  • [1180733] “An invalid state parameter was passed to the GitHub OAuth2 callback” error when logging in with github

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

https://globau.wordpress.com/2015/08/25/happy-bmo-push-day-157/


Christian Heilmann: Rock, Meats, JavaScript – BrazilJS 2015

Tuesday, August 25, 2015, 04:09

BrazilJS audience

I just got back from a four-day round trip to Brazil to attend BrazilJS. I was humbled and very happy to give the opening keynote, seeing that the closing was meant to be by Brendan Eich and Andreas Gal – so, no pressure.

The keynote

In my keynote, I asked for more harmony in our community, and more ownership of the future of JavaScript by those who use it in production.

Keynote time

For quite a while now, I have been confused as to who we are serving as browser makers, standards writers and library creators. All of the excellent solutions we have seem to fall through the cracks somewhere when you see what goes live.

That’s why I wanted to remind the audience that whatever amazing, inspiring and clever thing they’ll hear about at the conference is theirs to take to fruition. We have too much frustration in our market, and too much trying to one-up one another instead of trying to solve problems and making the solutions easily and readily available. The slides are on Slideshare, and a video will become available soon.

About Brazil

There are a few things to remember when you are going to Brazil:

  • When people are excited about something, they are really excited about it. There’s a lot of passion.
  • Personal space is as rare as an affordable flat in central London – people will affectionately touch strangers and there is a lot of body language. If that’s not your thing, make it obvious!
  • You will eat your body weight in amazing meat and food is a social gathering, not just fuel. Thus, bring some time.
  • Everybody will apologise for their bad English before having a perfectly comprehensible conversation with you
  • People of all ages and backgrounds are into heavy music (rock, metal, hardcore…)

About the event

VR ride about the history of JavaScript

BrazilJS was a ridiculous attempt at creating the biggest JavaScript event with 1,300 people. And it was a 100% success at that. I am fascinated by the professionalism, the venue, the AV setup and all the things that were done for speakers and attendees alike. Here are just a few things that happened:

  • There was a very strong message about diversity and a sensible and enforced code of conduct. This should not be a surprise, but when you consider Brazilian culture and reputation (think Carnival) it takes pride and conviction in those matters to stand up for them the way the organisers did.
  • The AV setup was huge and worked fine. There were no glitches in the audio and every presentation was live translated from English to Brazilian Portuguese and vice versa. The translation crew did a great job and we as presenters should do more to support them. I will write a post soon about this.
  • Wireless was flaky, but available when you needed it. It is pretty ridiculous to expect a good connection in a country where connectivity isn’t cheap and over a thousand people with two devices each are trying to connect. As a presenter, I never rely on availability – neither should you.
  • There was always enough coffee, snacks and even a huge cake celebrating JavaScript (made by the mom of one of the organisers – the cake, not JavaScript)
  • The overall theme was geek – as geek as it can get. The organisers dressed up as power rangers, in between talks we saw animated 90s TV series, there was a Virtual Reality ride covering the history of JavaScript built with Arduinos, and there were old-school arcade machines and consoles to play with.
  • It was a single track conference over two days with lots of high-class speakers and very interesting topics.
  • As a speaker, everything was organised for me. We all took a hired bus from and to the venue and we had lunch catered for us.
  • The conference also had a minority/diversity scholarship program where people who couldn’t afford to come got a sponsored ticket. These people weren’t grandstanded or shown up but just became a part of the crowd. I was lucky to chat to a few and learned quite a few things.
  • The after party was a big “foot in mouth” moment for me, as I have often spoken out against having bands at after parties. In Brazil, though, with a band covering lots of rock anthems, it very much worked. I never thought I’d see an inclusive, non-aggressive mosh pit and people stage diving at a JavaScript event – I was wrong.

Me, stage diving at the BrazilJS after party – photo by @orapouso

So, all I can say is thank you to everyone involved. This was a conference to remember and the enthusiasm of the people I met and talked to is a testament to how much this worked!

Personal/professional notes

BrazilJS was an interesting opportunity for me, as I wanted to connect with my Microsoft colleagues in the country. I was amazed by how well-organised our participation was and loved the enthusiasm people had for us. Even when one of our other speakers couldn’t show up, we simply ran an impromptu Q&A on stage about Edge. Instead of a sales booth we had technical evangelists at hand, who also helped with translating. Quite a few people came to the booth to fix their web sites for Microsoft Edge’s standards-compliant rendering. It’s fun to see when fixing things yields quick results.

Other short impressions:

  • I had no idea what a machine my colleague Jonathan Sampson is on stage. His talk in adventurous Portuguese had the audience in stitches and I was amazed by the well-structured content. I will pester him to re-record this in English.
  • Ju Goncalves (@cyberglot) gave a great, detailed talk about reduce(). If you are a conference organiser, check her out as a new Speaker() option – she is now based in Copenhagen.
  • It was fun to catch up with Laurie Voss after a few years (we worked at Yahoo together), and it was great of him to point to his LGBTQ Slack group, inviting people to learn more about that facet of diversity in our community.
  • It warmed me to see the Mozilla Brazil community still kicking butt. Warm, affectionate and knowledgeable people like the ones you could meet at the booth there are the reason why I became a Mozillian in the first place.

And that’s that

Organisers on stage

Thank you to everyone involved. Thank you to everybody asking me lots of technical questions and giving unfiltered feedback. Thank you for showing that a lot of geeks can also be very human and warm. Thank you for embracing someone who doesn’t speak your language. I met quite a few people I need to follow up with, and I even had a BBQ with the family of two of the attendees I met before I headed to my plane back home. You rock!

Always bet on JavaScript cake

http://christianheilmann.com/2015/08/25/rock-meats-javascript-braziljs-2015/


Hannah Kane: Vancouver Trip Summary

Tuesday, August 25, 2015, 00:58

I spent Thursday and Friday of last week with my lovely colleagues in Vancouver. Some things to note:

  • The Vancouver office is awesome, especially the art (h/t David Ascher’s wife)
  • Thanks to Jennie and the rest of the YVR team for making me feel welcome around the lunch table!
  • Luke promised to play guitar but he never did :(

Here’s how the two days went down:

  • Sabrina and I started off by having a morning meeting with Michelle via Vidyo. This produced several clarifying insights, including the use of “portfolio” as the key metaphor for Clubs pages in the MLN Directory. This helped shape our conversations during the rest of my visit.
  • Sabrina and I then reviewed what we already know about our audience, our programs and offerings, and value adds for the user.
  • We then sketched out a model for an engagement funnel


    • Then we got to work on the MLN Directory model. We came up with streamlined sketches for the various content types, thinking in terms of mobile-first.
      • Member profile:
        • See field listing
        • Implied functionality: certain Leadership roles might be auto-applied (e.g. if the user owns an approved Club page, the system can apply the “Club Captain” role), while others might require an admin interface (e.g. Regional Coordinator, Hive Member). We’d like to allow for flexible Role names, to accommodate local flavor (i.e. Hive Chicago has specific role names they give to members).
      • Club and Hive pages:
        • Club page field listing
        • Hive page field listing
        • A key insight was that we should treat each distinct entity differently. That is, Club pages and Hive pages might be quite different, and we don’t need to try to force them into the same treatment. We also recognized that our MVP can simply address these two specific types of groups, since this is where our programs are focused.
        • We decided that focusing on Reporting for Clubs would be the highest value functionality, so we spec’ed out what that would look like (wireframes coming soon)
        • For Hive pages, we want to re-create the org listings and contact cards that the current Hive Directories have
  • We also met with Laura de Reynal and David Ascher to hash out plans for the audience research project. More on that soon, but you can see our “most important questions” at the top of this pad.
  • The issue of badges came up a few times. First, because we found that the plan for “Club Captain” and “Regional Coordinator” badges felt a little redundant given the concept of “roles.” Second, because we saw an opportunity to incentivize and reward participation by providing levels of badges (more like an “achievements” model). Seems like our colleagues were thinking along the same lines.

All in all, it was a really productive couple of days. We’ll be getting wireframes and then mockups out to various stakeholders over the next heartbeat, along with hashing out the technical issues with our engineering team.

Feel free to share any comments and questions.


http://hannahgrams.com/2015/08/24/vancouver-trip-summary/


Jim Chen: Recent Fennec platform changes

Monday, August 24, 2015, 23:10

There has been a series of recent changes to the Fennec platform code (under widget/android). Most of the changes were refactoring in preparation for supporting multiple GeckoViews.

Currently, only one GeckoView is supported at a time in an Android app. This is the case for Fennec, where all tabs are shown within one GeckoView in the main activity. However, we'd like to eventually support having multiple GeckoViews at the same time, which would not only make GeckoView more usable and make more features possible, but also reduce a lot of technical debt that we have accumulated over the years.

The simplest way to support multiple GeckoViews is to open multiple nsWindows on the platform side, and associate each GeckoView with a new nsWindow. Right now, we open a new nsWindow in our command line handler (CLH) during startup, and never worry about having to open another window again. In fact, we quit Fennec by closing our only window. This assumption of having only one window will change for multiple GeckoView support.

Next, we needed a way of associating a Java GeckoView with a C++ nsWindow. For example, if a GeckoView sends a request to perform an operation, Gecko would need to know which nsWindow corresponds to that GeckoView. However, Java and platform would need to coordinate GeckoView and nsWindow creation somehow so that a match can be made.

Lastly, existing messaging systems would need to change. Over the years, GeckoAppShell has been the go-to place for platform-to-Java calls, and GeckoEvent has been the go-to for Java-to-platform calls. Over time, the two classes became a big mess of unrelated code stuffed together. Having multiple GeckoViews would make it even harder to maintain these two classes.

But there's hope! The recent refactoring introduced a new mechanism of implementing Java native methods using C++ class members 1). Using the new mechanism, calls on a Java object instance are automatically forwarded to calls on a C++ object instance, and everything in-between is auto-generated. This new mechanism provides a powerful tool to solve the problems mentioned above. Association between GeckoView and nsWindow is now a built-in part of the auto-generated code – a native call on a GeckoView instance can now be transparently forwarded to a call on an nsWindow instance, without writing extra code. In addition, events in GeckoEvent can now be implemented as native methods. For example, preference events can become native methods inside PrefHelper, and the goal is to eventually eliminate GeckoEvent altogether 2).

Effort is underway to move away from using the CLH to open nsWindows, which doesn't give an easy way to establish an association between a GeckoView and an nsWindow 3). Instead, nsWindow creation would move into a native method inside GeckoView that is called during GeckoView creation. As part of moving away from using the CLH, making a speculative connection was moved out of the CLH into its own native method inside GeckoThread 4). That also had the benefit of letting us make the speculative connection much earlier in the startup process.

This post provides some background on the on-going work in Fennec platform code. I plan to write another follow-up post that will include more of the technical details behind the new mechanism to implement native calls.

1) Bug 1178850 (Direct native Java method calls to C++ classes), bug 1186530 (Implement per-instance forwarding of native Java methods), bug 1187552 (Support direct ownership of C++ objects by Java objects), bug 1191083 (Add mechanism to handle native calls before Gecko is loaded), bug 1192043 (Add mechanism to proxy native calls to Gecko thread)
2) Bug 1188959 ([meta] Convert GeckoEvent to native methods)
3) Bug 1197957 (Let GeckoView control nsWindow creation)
4) Bug 1195496 (Start speculative connection earlier in startup)

http://www.jnchen.com/blog/2015/08/recent-fennec-platform-changes


Air Mozilla: Chris Beard: Community Participation Guidelines

Monday, August 24, 2015, 23:03

Mozilla CEO Chris Beard talks about the Mozilla Project's Community Participation Guidelines in a recent Monday Project Meeting.

https://air.mozilla.org/chris-beard-community-participation-guidelines/


John O'Duinn: “we are all remote” at Cultivate NYC

Monday, August 24, 2015, 21:24

It’s official!!

I’ll be speaking about remoties at the O’Reilly Cultivate conference in NYC!

Cultivate is being held on 28-29 Sept 2015 in the Javits conference center in New York City. This is intentionally the same week, and same location, as the O’Reilly Strata+Hadoop World conference, so if you lead others in your organization and are coming to Strata anyway, you should come a couple of days early to focus on cultivate-ing (!) your leadership skills. For more background on O’Reilly’s series of Cultivate conferences, check out this great post by Mike Loukides. I attended the Cultivate Portland conference last month, when it was co-located with OSCON, and found it insightful, edge-of-my-seat stuff. I expect Cultivate NYC to be just as exciting.

Meanwhile, of course, I’m still writing like crazy on my book (and writing code when no-one is looking!), so have to run. As always, if you work remotely, or are part of a distributed team, I’d love to hear what does/doesn’t work for you and any wishes you have for topics to include in the book – just let me know.

Hope to see you in NYC next month.

John.
=====

http://oduinn.com/blog/2015/08/24/we-are-all-remote-at-cultivate-nyc/


Cameron Kaiser: Okay, you want WebExtensions API suggestions? Here's three.

Monday, August 24, 2015, 21:10

Not to bring out the "lurkers support me in E-mail" argument but the public blog comments are rather different in opinion and tenor from the E-mail I got regarding our last post upon my supreme concern and displeasure over the eventual end of XPCOM/XUL add-ons. I'm not sure why that should be, but never let it be said that MoFo leadership doesn't stick to their (foot)guns.

With that in mind let me extend, as an author of a niche add-on that I and a number of dedicated users employ regularly for legacy protocols, an attempt at an olive branch. Here's the tl;dr: I need a raw socket API, I need a protocol handler API, and I need some means of dynamically writing a document/data stream and handing it to the docshell. Are you willing?

When Mozilla decommissioned Gopher support in Firefox 4, the almost universal response was "this shouldn't be in core" and the followup was "if you want it, it should be an add-on, maintained by the community." So I did, and XPCOM let me do this. With OverbiteFF, Gopher menus (and through an analogous method whois and ph) are now first class citizens in Firefox. You can type a Gopher URL and it "just works." You can bookmark them. You can interact with them. They appear no differently than any other web page. I created XPCOM components for a protocol object and a channel object, and because they're XPCOM-based they interact with the docshell just like every other native core component in Necko.
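
For readers who have never built one, the rough shape of such an XPCOM protocol handler, registered from JavaScript, looks something like the sketch below. This is a stripped-down illustration rather than OverbiteFF's actual code; the class ID is a placeholder and the channel body is left out.

const { classes: Cc, interfaces: Ci, results: Cr, utils: Cu } = Components;
Cu.import("resource://gre/modules/XPCOMUtils.jsm");

function GopherProtocol() {}
GopherProtocol.prototype = {
  classDescription: "Example gopher: protocol handler",
  classID: Components.ID("{00000000-0000-0000-0000-000000000000}"), // placeholder GUID
  contractID: "@mozilla.org/network/protocol;1?name=gopher",
  QueryInterface: XPCOMUtils.generateQI([Ci.nsIProtocolHandler]),

  // nsIProtocolHandler
  scheme: "gopher",
  defaultPort: 70,
  protocolFlags: Ci.nsIProtocolHandler.URI_NORELATIVE | Ci.nsIProtocolHandler.URI_NOAUTH,

  allowPort: function (port, scheme) {
    return port === this.defaultPort;
  },

  newURI: function (spec, charset, baseURI) {
    let uri = Cc["@mozilla.org/network/simple-uri;1"].createInstance(Ci.nsIURI);
    uri.spec = spec;
    return uri;
  },

  newChannel: function (uri) {
    // A real handler opens a raw socket via nsISocketTransportService,
    // converts the gopher menu into a document and returns a channel that
    // feeds it to the docshell; that is exactly the ground the three
    // requested APIs would have to cover in the WebExtensions world.
    throw Cr.NS_ERROR_NOT_IMPLEMENTED;
  }
};

var NSGetFactory = XPCOMUtils.generateNSGetFactory([GopherProtocol]);

Once the component is listed in chrome.manifest, URLs with that scheme become addressable from the location bar, bookmarks and history like any other URL.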

More to the point, I didn't need anyone's permission to do it. I just created a component and loaded it, and it became as "native" as anything else in the browser. Now I need "permission." I need APIs to do what I could do all by myself beforehand.

What I worry is that Mozilla leadership is going to tick the top 10 add-ons or so off as working and call it a day, leaving me and other niche authors no way of getting ours to work. I don't think these three APIs are technically unrealistic, nor do they lack substantial global applicability; they're foundational for getting new types of protocol access into the browser, not just old legacy ones. You can innovate nearly anything network-based with these three proposals.

So how about it? I know you're reading. Are you going to make good on your promises to us little guys, or are we just screwed?

http://tenfourfox.blogspot.com/2015/08/okay-you-want-webextensions-api.html


Air Mozilla: Mozilla Weekly Project Meeting

Monday, August 24, 2015, 21:00

Robert Kaiser: Ending Development and Support for My Add-ons

Monday, August 24, 2015, 18:14

This has been a long time coming, actually, and recent developments just put the final nail in the coffin.

I am ending all development and support for my "extension"-type add-ons effective immediately.

This affects (daily user numbers according to addons.mozilla.org):
If anyone is interested in taking over development and maintenance of any of those, please let me know. I'm happy to convert their repositories over to GitHub for easier collaboration, add the new developer to their administration on AMO, and/or move them over to you completely.

I will leave them listed on AMO for a little while so people who want to take over can take a look, but I will hide them from the site in the near future if nobody is interested.

The reasons for this step are multiple:

For one thing, I just don't have the time for updating their code or improving them. My job is stressful enough that my head is overflowing with Mozilla-related things all the time, and my employer is apparently not willing to give me any relief (in terms of hiring someone to supplement me) that would give me back sanity, so I need to remove some Mozilla- and software-related things from my non-work time to gain back a little sanity so that I don't burn out.

I am also really sad that apparently nobody finds the time or energy to make decent management and notification mechanisms available to UI code for the new-style web storage mechanisms like indexedDB, appCache, or ServiceWorker caching, while we do have quite nice APIs for long-standing things like cookies. For getting Tahoe Data Manager (which was my most interesting add-on) to work decently, I would have needed decent APIs there as well.
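
For contrast, this is roughly what the long-standing cookie side offers to privileged add-on code, circa Firefox 40 (an illustrative sketch, not code from Tahoe Data Manager):

const { interfaces: Ci, utils: Cu } = Components;
Cu.import("resource://gre/modules/Services.jsm");

// Enumerate every stored cookie; host, name, path and expiry are all exposed.
let cookieEnum = Services.cookies.enumerator;
while (cookieEnum.hasMoreElements()) {
  let cookie = cookieEnum.getNext().QueryInterface(Ci.nsICookie2);
  dump(cookie.host + " " + cookie.name + " expires " + cookie.expiry + "\n");
}

// Get notified whenever the cookie store changes.
Services.obs.addObserver(function (subject, topic, data) {
  // data is one of "added", "changed", "deleted", "cleared", ...
}, "cookie-changed", false);

Nothing comparably simple exists for enumerating or watching indexedDB databases, appCache entries or ServiceWorker caches per origin, which is the gap described above.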

Then, my interest in experimenting with code has moved more and more away from the browser, which keeps changing around me all the time, and towards actual web development, where existing code doesn't get broken all the time and your code is more isolated. As a bonus, I can develop things that run on my (Firefox OS) phone and that I can show other people when I'm somewhere. And even there, I don't get as much time to dig into stuff as I would like to; see above.

And finally, and that's why this culminates right now, I disagree with some pieces of Mozilla's add-on strategies right now, and I don't want to be part of that as an add-on developer.
For one, I think add-on signing is a good idea in principle, but not enabling developers to test their code in any way in the same builds that users get is against everything I learned in terms of quality assurance. Then, requiring developers and other users of unbranded (or early pre-release) builds to turn off security for everything just to use/test one or two unsigned add-ons just feels plainly wrong to me (and don't tell me it can't be done otherwise, as I know there are perfectly good ways to solve this without undermining signing while preserving more safety). And I also fear that, while add-on signing brings a lot of pain to add-on developers and will make us lose some of them and their users, we will not reduce the malware/adware problem in the mid to long term, but rather make it worse, as malware and adware authors will resort to injecting binary DLLs into the Firefox process, which is the primary cause of startup crashes on updates, and I will have more grief in my actual job because of this, next to Firefox losing users who see those crashes.
And on the deprecation of "the permissive add-on model", as they call it in the post: I think that the Firefox UI being written in web (CSS/JS/HTML) or web-like (XUL) technologies, and the ability to write add-ons that can use those to do anything in Firefox, including prototyping and inventing new functionality and UI paradigms, is the main thing that sets Firefox apart product-wise from all its competitors. If we take that away, there is no product reason for using Firefox over any other browser; the only reasons left will be the philosophy behind Mozilla (which is what I'm signed up for anyhow) and the specific reflection of that philosophy in some internals of the browser, like respecting privacy and choice a little bit more than others - but most people consider those details, and it's hard to win them over with those.

Don't get me wrong, I think that the WebExtensions API is a great idea (and it would be awesome to standardize some bits of it across browsers), and add-ons being sandboxed by default is long overdue. But we would also need to require less signing and review for add-ons that are confined to the safe APIs provided there, and I think we'd still, with heavy review, signing, and whatnot, need to allow people to go fully into the guts of Firefox, with full permissions, to provide the basis for the really ground-breaking add-ons that set us apart from the rest of the world. Even though almost all of the code of my add-ons ran within their own browser tab, they required a good reach into high-permission areas, which the new WebExtensions API probably will not allow in that way. But I do not even have the time to investigate how I could adapt my add-ons to any of this, so I decided it was better to pull the plug right now.

So, all in all, I probably have waited too long with this anyhow, mostly because I really like Tahoe Data Manager, but I just can't go on pretending that I will still develop or even maintain those add-ons.

Again, if anyone is interested in taking over, either fully or with a few patches here and there, please contact me and I'll help to make it happen.

(Note that this does not affect my language packs, dictionaries, or themes at this point, I'm continuing to maintain and develop them, at least for now.)

http://home.kairo.at/blog/2015-08/ending_development_and_support_for_my_ad


QMO: Firefox 41 Beta 3 Testday Results

Monday, August 24, 2015, 14:47

Hello Mozillians!

As you may already know, last Friday – August 21st – we held a new Testday event for Firefox 41 Beta 3.

Results:

We’d like to take this opportunity to thank alex_mayorga, Bolaram Paul, Chandrakant Dhutadmal, Luna Jernberg, Moin Shaikh, gaby 2300 and the Bangladesh QA Community: Hossain Al Ikram, Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Forhad Hossain, Md. Rahimul Islam, Sajib Raihan Russell, Rakibul Islam Ratul, Saheda Reza Antora, Sunny and Mohammad Maruf Islam for getting involved in this event and making Firefox the best it can be.

Also a big thank you goes to all our active moderators.

Keep an eye on QMO for upcoming events! 

https://quality.mozilla.org/2015/08/firefox-41-beta-3-testday-results/


Emily Dunham: X240 trackpoint speed

Monday, August 24, 2015, 10:00

X240 trackpoint speed

The screen on my X1 Carbon gave out after a couple months, and my loaner laptop in the meantime is an X240.

The worst thing about this laptop is how slowly the trackpoint moves with a default Ubuntu installation. However, it’s fixable:

cat /sys/devices/platform/i8042/serio1/serio2/speed
cat /sys/devices/platform/i8042/serio1/serio2/sensitivity

Note the starting values in case anything goes wrong, then fiddle around:

echo 255 | sudo tee /sys/devices/platform/i8042/serio1/serio2/sensitivity
echo 255 | sudo tee /sys/devices/platform/i8042/serio1/serio2/speed

Some binary-search-themed prodding, and a lot of "tee: /sys/devices/platform/i8042/serio1/serio2/sensitivity: Numerical result out of range" errors, confirmed that both files accept values between 0 and 255. Interestingly, setting them to 0 does not seem to disable the trackpoint completely.

If you’re wondering why the configuration settings look like ordinary files but choke on values bigger or smaller than a short, go read about sysfs.

http://edunham.net/2015/08/24/x240_trackpoint_speed.html


This Week In Rust: This Week in Rust 93

Monday, August 24, 2015, 07:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

From the Blogosphere

New Releases & Project Updates

What's cooking on nightly?

86 pull requests were merged in the last week.

New Contributors

  • jotomicron
  • Marc-Antoine Perennou
  • Martin Wernstal

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week. Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

"unsafe is as viral and pernicious as pop music, though obviously not as dangerous."Daniel Keep at TRPLF.

Thanks to llogiq for the tip. Submit your quotes for next week!.

http://this-week-in-rust.org/blog/2015/08/24/this-week-in-rust-93/


Wladimir Palant: Missing a rationale for WebExtensions

Monday, August 24, 2015, 01:40

Mozilla’s announcement to deprecate XUL/XPCOM-based add-ons raises many questions. Seeing the reactions, it seems that most people are very confused now. I mean, I see where this is coming from: XUL and XPCOM have become a burden; they come at a huge memory/performance/maintenance cost, impose significant limitations on browser development and create the danger that a badly written extension breaks everything. Whatever comes to replace them certainly won’t give add-on developers the same flexibility, however, especially when it comes to extending the user interface. This is sad, but I guess that it has to be done.

What confuses me immensely, however, is WebExtensions, which are touted as the big new thing. My experience with Chrome APIs isn’t all too positive: the possibilities are very limited and there is a fair number of inconsistencies. The documentation isn’t great either; there are often undocumented details that you only hit when trying things out. This isn’t very surprising of course: the API has grown along with Chrome itself, and many of the newer concepts simply didn’t exist back when the first interfaces were defined. Even worse: Opera, despite using the same engine, occasionally implements the same APIs differently.

So my main question is: does Mozilla really plan to keep bug-for-bug compatibility with Chrome APIs all for the goal of cross-browser extensions? That’s an endless catch-up game that benefits Chrome way more than it helps Firefox. And what is this cross-browser story still worth once Mozilla starts adding their own interfaces which are incompatible to Chrome? Don’t forget that Chrome can add new APIs as well, maybe even for the same purpose as Mozilla but with a different interface.

Further, I don’t see any advantages of WebExtensions over the Add-on SDK. I wasn’t a fan of the SDK back when it was introduced, but I must say that it has really matured over the years. It took a long time for the SDK to become the modern, consistent and flexible framework that it is right now. Mozilla invested significant effort into it and it paid off. What’s even more important, there is now sufficient documentation and examples for it on the web. Why throw all this away for a completely different framework? Note that the announcement says that most SDK-based extensions will continue to work in the new world, but according to the comments below, the SDK won’t be a development focus any more — from my experience, that’s a euphemism for “deprecated.”
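
To make the comparison concrete, here is the same trivial action, opening a tab, in the two styles being discussed (an illustrative sketch only):

// Add-on SDK (Jetpack): CommonJS modules, loaded with require()
var tabs = require("sdk/tabs");
tabs.open("https://example.org/");

// Chrome-style WebExtensions API: a global namespace plus callbacks
chrome.tabs.create({ url: "https://example.org/" }, function (tab) {
  console.log("opened tab " + tab.id);
});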

As mentioned above, I don’t see how theoretical cross-browser compatibility is going to benefit Firefox. Maybe the advantage lies in the permission model? But Chrome’s permission model is broken — most extensions need APIs that could potentially allow them to access user’s browsing history. While most extensions won’t actually do anything like that, privacy violations are a very common issue with Chrome extensions. The privilege prompt doesn’t help users recognize whether there is a problem, it merely trains users to ignore privacy-related warnings. Oh, and it shifts the blame from Google to the user — the user has been warned, right?

In my opinion, this permission model cannot be seen as a substitute for reviews. Nor will it make reviews easier than for SDK-based extensions: it’s pretty easy to extract the list of SDK modules used by an extension. That gives one pretty much the same information as a list of permissions, albeit without requiring an explicit list.

There must be a reason why Mozilla chose to develop WebExtensions rather than focus that energy on improving the Add-on SDK. Sadly, I haven’t seen it mentioned anywhere so far.

https://palant.de/2015/08/23/missing-a-rationale-for-webextensions


Robert O'Callahan: Hooray For WebExtensions

Saturday, August 22, 2015, 14:14

Many of the comments on WebExtensions miss an important point: basing WebExtensions on the Chrome extension API does not imply limiting WebExtensions to the Chrome API. For example, Bill already made it clear that we want to support addons like Tree Style Tabs and NoScript, which Chrome intentionally does not support. So Firefox addons will continue to be able to do things you can't do in Chrome (though there will be some things you can hack into Firefox's XUL today that won't be supported by WebExtensions, for sure).

WebExtensions is something we need to do and should have done many years ago, before Chrome even existed. It's what Jetpack should have been. But better late than never!

http://robert.ocallahan.org/2015/08/hooray-for-webextensions.html


Giorgio Maone: WebExtensions API & NoScript

Saturday, August 22, 2015, 10:48

Many of you have read a certain announcement about the future of Firefox's add-ons and are worried about some extensions, including NoScript, being deeply rooted in Mozilla's core technologies, such as XPCOM and XUL, which are going to be deprecated.

Developers and users are also concerned about add-ons being prevented from exploring radically new concepts which would require those "super powers" apparently taken away by the WebExtensions API.

I'd like to reassure them: Mozilla is investing a lot of resources to ensure that complex and innovative extensions can also prosper in the new Web-centric ecosystem. In fact, as mentioned by Bill McCloskey, at this moment I'm working within Mozilla's Electrolysis team and with other add-on authors on the design of mechanisms and processes that help developers experiment in directions not yet supported by the "official" WebExtensions API, which is going to be augmented and shaped around their needs and with their contributions.

I've just published a proposal, tentatively called native.js, to "embrace & extend" the WebExtensions API: all the interested parties are invited to discuss it on discourse.mozilla-community.org.

https://hackademix.net/2015/08/22/webextensions-api-noscript/


Alex Vincent: My two cents on WebExtensions, XPCOM/XUL and other announcements

Saturday, August 22, 2015, 10:02

(tl;dr:  There’s a lot going on, and I have some sage, if painful, advice for those who think Mozilla is just ruining your ability to do what you do.  But this advice is worth exactly what you pay to read it.  If you don’t care about a deeper discussion, just move to the next article.)

 

The last few weeks on Planet Mozilla have had some interesting moments:  great, good, bad, and ugly.  Honestly, all the recent traffic has impacts on me professionally, both present and future, so I’m going to respond very cautiously here.  Please forgive the piling on – and understand that I’m not entirely opposed to the most controversial piece.

  • WebAssembly.  It just so happens I’m taking an assembly language course now at Chabot College.  So I want to hear more about this.  I don’t think anyone’s going to complain much about faster JavaScript execution… until someone finds a way to break out of the .wasm sandboxing, of course.  I really want to be a part of that.
  • ECMAScript 6th Edition versus the current Web:  I’m looking forward to Christian Heilmann’s revised thoughts on the subject.  On my pet projects, I find the new features of ECMAScript 6 gloriously fun to use, and I hate working with JS that doesn’t fully support it.  (CoffeeScript, are you listening?)
  • WebDriver:  Professionally I have a very high interest in this.  I think three of the companies I’ve worked for, including FileThis (my current employer), could benefit from participating in the development of the WebDriver spec.  I need to get involved in this.
  • Electrolysis:  I think in general it’s a good thing.  Right now when one webpage misbehaves, it can affect the whole Firefox instance that’s running.
  • Scripts as modules:  I love .jsm’s, and I see in relevant bugs that some consensus on ECMAScript 6-based modules is starting to really come together.  Long overdue, but there’s definitely traction, and it’s worth watching.
  • Pocket in Firefox:  I haven’t used it, and I’m not interested.  As for it being a “surprise”:  I’ll come back to that in a moment.
  • Rust and Servo:  Congratulations on Rust reaching 1.0 – that’s a pretty big milestone.  I haven’t had enough time to take a deep look at it.  Ditto Servo.  It must be nice having a team dedicated to researching and developing new ideas like this, without a specific business goal.  I’m envious.  :-)
  • Developer Tools:  My apologies for nagging too much about one particular bug that really hurts us at FileThis, but I do understand there’s a lot of other important work to be done.  If I understood how the devtools protocols worked, I could try to fix the bug myself.  I wish I could have a live video chat with the right people there, or some reference OGG videos, to help out… but videos would quickly become obsolete documentation.
  • WebExtensions, XPCOM and XUL.  Uh oh.

First of all, I’m more focused on running custom XUL apps via firefox -app than I am on extensions to base-line Firefox.  I read the announcement about this very, very carefully.  I note that there was no mention of XUL applications being affected, only XUL-based add-ons.  The headline said “Deprecration of XUL, XPCOM…” but the text makes it clear that this applies mostly to add-ons.  So for the moment, I can live with it.

Mozilla’s staff has been sending mixed messages, though.  On the one hand, we’re finally getting a Firefox-based SDK into regular production. (Sorry, guys, I really wish I could have driven that to completion.)  On the other, XUL development itself is considered dead – no new features will be added to the language, as I found to my dismay when a XUL tree bug I’d been interested in was WONTFIX’ed.  Ditto XBL, and possibly XPCOM itself.  In other words, what I’ve specialized in for the last dozen years is becoming obsolete knowledge.

I mean, I get it:  the Web has to evolve, and so do the user-agents (note I didn’t say “browsers”, deliberately) that deliver it to human beings.  It’s a brutal Darwinian process of not just technologies, but ideas:  what works, spreads – and what’s hard for average people (or developers) to work with, dies off.

But here’s the thing:  Mozilla, Google, Microsoft, and Opera all have huge customer bases to serve with their browser products, and their customer bases aren’t necessarily the same as yours or mine (other developers, other businesses).  In one sense we should be grateful that all these ideas are being tried out.  In another, it’s really hard for third-parties like FileThis or TenFourFox or NoScript or Disruptive Innovations, who have far fewer resources and different business goals, to keep up with that brutally fast Darwinian pace these major companies have set for themselves.  (They say it’s for their customers, and they’re probably right, but we’re coughing on the dust trails they kick up.)  Switching to an “extended support release” branch only gives you a longer stability cycle… for a while, anyway, and then you’re back in catch-up mode.

A browser for the World Wide Web is a complex beast to build and maintain, and growing more so every year.  That’s because in the mad scramble to provide better services for Web end-users, they add new technologies and new ideas rapidly, but they also retire “undesirable” technologies.  Maybe not so rapidly – I do feel sympathy for those who complain about CSS prefixes being abused in the wild, for example – but the core products of these browser providers do eventually move on from what, in their collective opinions, just isn’t worth supporting anymore.

So what do you do if you’re building a third-party product that relies on Mozilla Firefox supporting something that’s fallen out of favor?

Well, obviously, the first thing you do is complain on your weblog that gets syndicated to Planet Mozilla.  That’s what I’m doing, isn’t it?  :-)

Ultimately, though, you have to own the code.  I’m going to speak very carefully here.

In economic terms, we web developers deal with an oligopoly of web browser vendors:  a very small but dominant set of players in the web browsing “market”.  They spend vast resources building, maintaining and supporting their products and largely give them away for free.  In theory the barriers to entry are small, especially for Webkit-based browsers and Gecko:  download the source, customize it, build and deploy.

In practice… maintenance of these products is extremely difficult.  If there’s a bug in NSS or the browser devtools, I’m not the best person to fix it.  But I’m the Mozilla expert where I work, and usually have been.

I think it isn’t a stretch to say that web browsers, because of the sheer number of features needed to satisfy the average end-user, rapidly approach the complexity of a full-blown operating system.  That’s right:  Firefox is your operating system for accessing the Web.  Or Chrome is.  Or Opera, or Safari.  It’s not just HTML, CSS and JavaScript anymore:  it’s audio, video, security, debuggers, automatic updates, add-ons that are mini-programs in their own right, canvases, multithreading, just-in-time compilation, support for mobile devices, animations, et cetera.  Plus the standards, which are also evolving at high frequencies.

My point in all this is as I said above:  we third party developers have to own the code, even code bases far too large for us to properly own anymore.  What do I mean by ownership?  Some would say, “deal with it as best you can”.  Some would say, “Oh yeah? Fork you!”  Someone truly crazy (me) would say, “consider what it would take to build your own.”

I mean that.  Really.  I don’t mean “build your own.”  I mean, “consider what you would require to do this independently of the big browser vendors.”

If that thought – building something that fits your needs and is complex enough to satisfy your audience of web end-users, who are accustomed to what Mozilla Firefox or Google Chrome or Microsoft Edge, etc., provide them already, complete with back-end support infrastructure to make it seamlessly work 99.999% of the time – scares you, then congratulations:  you’re aware of your limited lifespan and time available to spend on such a project.

For what it’s worth, I am considering such an idea.  For the future, when it comes time to build my own company around my own ideas.  That idea scares the heck out of me.  But I’m still thinking about it.

Just like reading this article, when it comes to building your products, you get what you pay for.  Or more accurately, you only own what you’re paying for.  The rest of it… that’s a side effect of the business or industry you’re working in, and you’re not in control of these external factors you subconsciously rely on.

Bottom line:  browser vendors are out to serve their customer bases, which are tens of millions, if not hundreds of millions of people in size.  How much of the code, of the product, that you are complaining about do you truly own?  How much of it do you understand and can support on your own?  The chances are, you’re relying on benevolent dictators in this oligopoly of web browsers.

It’s not a bad thing, except when their interests don’t align with yours as a developer.  Then it’s merely an inconvenience… for you.  How much of an inconvenience?  Only you can determine that.

Then you can write a long diatribe for Planet Mozilla about how much this hurts you.

https://alexvincent.us/blog/?p=885


