Giorgio Maone: WebExtensions FAQ |
WebExtensions are making some people happy and some people angry, and leaving many asking questions.
Some of the answers can be found here, more to come as add-on developers keep discussing this hot topic.
My favourite one: No, your add-ons' ability and your own creativity won't be limited by the new API.
|
Michael Kaply: Using Hidden UI in the CCK2 |
One of the questions I get asked the most is how to hide certain UI elements of Firefox. I implemented the Hidden UI feature of the CCK2 specifically to address this problem. Using it can be a little daunting, though, so I wanted to take some time to give folks the basics.
The Hidden UI feature relies on CSS selectors. We can use a CSS selector to specify any element in the Firefox user interface, and that element will then be hidden. The trick is figuring out the selectors. To accomplish this, my primary tool is the DOM Inspector. With the DOM Inspector, I can look at any element in the Firefox user interface and determine its ID. Once I have its ID, I can usually specify the CSS selector as #ID
and I can hide that element. Let's walk through using the DOM Inspector to figure out the ID of the home button.
This gives us something unique we can use - an ID. So #home-button
in Hidden UI will hide the home button.
You can use this method for just about every aspect of the Firefox UI except for menus and the Australis panel. For these items, I turn to the Firefox source code.
If you want to hide anything on the Australis panel, you can look for IDs here. If you want to hide anything on the Firefox context menu, you can look here. If you want to hide anything in the menu bar, you can look here.
As a last resort, you can simply hide menuitems based on their text. For instance, if you wanted to hide the Customize menu that appears when you right click on a toolbar, you could specify a selector of menuitem[label^='Customize']
. This says "Hide any menu item whose label begins with the word Customize." Don't try to include the ellipsis in your selector, because in most cases it's not three periods (...), it's the Unicode ellipsis character (…). (Incidentally, that menu is defined here, along with the rest of the toolbar popup menu. Because it doesn't have an ID, you'll have to use menuitem.viewCustomizeToolbar
.)
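Putting the pieces together, the selectors above translate into rules like the following (an illustrative sketch in plain CSS; CCK2's Hidden UI takes just the selectors and generates the equivalent hiding rules for you):

```css
/* Hide the home button, found via its ID in the DOM Inspector. */
#home-button {
  display: none !important;
}

/* Hide any menu item whose label starts with "Customize".
   Note: no ellipsis in the selector, since it's usually the
   Unicode character U+2026, not three periods. */
menuitem[label^='Customize'] {
  display: none !important;
}

/* The toolbar popup's Customize item has no ID, so fall back
   to its class. */
menuitem.viewCustomizeToolbar {
  display: none !important;
}
```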
Hopefully this should get everyone started. If there's something you can't figure out how to hide, let me know. And if you're trying to hide everything, you should probably be looking at a kiosk solution, not the CCK2...
https://mike.kaply.com/2015/08/25/using-hidden-ui-in-the-cck2/
|
Jet Villegas: Setting up for Android and Firefox OS Development |
This post is a follow-up to an earlier article I wrote about setting up a FirefoxOS development environment.
I’m going to set up a Sony Z3C as the target device for Mobile OS software development. The Sony Z3C (also known as Aries or aosp_d5803) is a nice device for Mobile OS hacking, as it’s an AOSP device with good support for building the OS binaries. I’ve set the phone up for both FirefoxOS and Android OS development, to compare and see what’s common across both environments.
Please note that if you got your Sony Z3C from the Mozilla Foxfooding program, then this article isn’t for you. Those phones are already flashed and automatically updated with specific FirefoxOS builds that Mozilla staff selected for your testing. Please don’t replace those builds unless you’re actively developing for these phones and have a device set aside for that purpose.
My development host is a Mac (OSX 10.10) laptop already set up to build the Firefox for Macintosh product. It’s also set up to build the Firefox OS binaries for the Flame device.
Most of the development environment for the Flame is also used for the Aries device. In particular, the case-sensitive disk partition is required for both FirefoxOS and Android OS development. You’ll want this partition to be at least 100GB in size if you want to build both operating systems. Set this up before downloading FirefoxOS or Android source code to avoid ‘include file not found’ errors.
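If you haven't set up the case-sensitive partition yet, one common approach on OS X is a case-sensitive sparse image. The sketch below prints (rather than runs) the hdiutil commands so you can review them first; the image path, volume name, and size are placeholders, not anything Jet prescribes:

```shell
# Print the hdiutil commands that would create and mount a
# case-sensitive sparse image to use as the OS build volume.
make_build_volume() {
  size_gb="$1"
  echo "hdiutil create -type SPARSE -size ${size_gb}g" \
       "-fs 'Case-sensitive Journaled HFS+'" \
       "-volname AndroidBuild ~/android.sparseimage"
  echo "hdiutil attach ~/android.sparseimage" \
       "-mountpoint /Volumes/AndroidBuild"
}

make_build_volume 100
```

Run the printed commands by hand once you're happy with the paths; the source trees then go on the mounted volume.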
The next step to developing OS code for the Aries is to root the device. This will void your warranty, so tread carefully.
For most Gecko and Gaia developers, you’ll want to start from the base image for the Aries. The easiest way to flash your device with a known-good FirefoxOS build is to run flash.sh in the expanded aries.zip file from the official builds. You can then flash the phone with just Gecko or Gaia from your local source code.
The Aries binaries from a FirefoxOS build:
The Aries binaries in an Android Lollipop build:
If you want to build Android OS for the Aries, then read these docs from Sony, and these Mac-specific steps for building Android Lollipop. Note that the Android Lollipop SDK requires Xcode 5.1.1 and Java 7 (JRE and JDK). Both are older than the latest available versions, so you’ll need to install the downgrades before building the Android OS.
When it comes time to configure your Android OS build via the lunch command, select aosp_d5803-userdebug as your device. Once the build is finished (after about 2 hours on my Mac), use these commands to flash your phone with the Android OS you just built:
fastboot flash boot out/target/product/aries/boot.img
fastboot flash system out/target/product/aries/system.img
fastboot flash userdata out/target/product/aries/userdata.img
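For repeated build-and-flash cycles, the three steps above can be wrapped in a small script. This is just a sketch: the function name and the --run flag are my own invention, and the image paths assume the build output layout shown above.

```shell
# Flash the three Aries images produced by the Android build.
# Without --run, the commands are only printed so you can
# sanity-check the paths before touching the device.
flash_aries() {
  out_dir="$1"
  mode="$2"
  for part in boot system userdata; do
    img="$out_dir/target/product/aries/$part.img"
    cmd="fastboot flash $part $img"
    if [ "$mode" = "--run" ]; then
      $cmd
    else
      echo "$cmd"
    fi
  done
}

flash_aries out          # dry run: print the three commands
# flash_aries out --run  # actually flash a device in fastboot mode
```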
http://junglecode.net/setting-up-for-android-and-firefox-os-development/
|
Mozilla Thunderbird: Thunderbird and end-to-end email encryption – should this be a priority? |
In the last few weeks, I’ve had several interesting conversations concerning email encryption. I’m also trying to develop some concept of what areas Thunderbird should view as our special emphases as we look forward. The question is, with our limited resources, should we strive to make better support of end-to-end email encryption a vital Thunderbird priority? I’d appreciate comments on that question, either on this Thunderbird blog posting or the email list tb-planning@mozilla.org.
In one conversation, at the “Open Messaging Day” at OSCON 2015, I brought up the issue of whether, in a post-Snowden world, support for end-to-end encryption was important for emerging open messaging protocols such as JMAP. The overwhelming consensus was that this is a non-issue. “Anyone who can access your files using interception technology can more easily just grab your computer from your house. The loss of functionality in encryption (such as online search of your webmail, or loss of email content if certificates are lost) will give an unacceptable user experience to the vast majority of users” was the sense of the majority.
In a second conversation, I was having dinner with a friend who works as a lawyer for a state agency involved in white-collar crime prosecution. This friend also thought the whole Snowden/NSA/metadata thing had been blown out of proportion, but for a very different reason. Paraphrasing my friend’s comments, “Our agency has enormous powers to subpoena all kinds of records – bank statements, emails – and most organizations will silently hand them over to me without you ever knowing about it. We can always get metadata from email accounts and phones, e.g. e-mail addresses of people corresponded with, calls made, dates and times, etc. There is a lot that other government employees (non-NSA) have access to just by asking for it, so some of the outrage about the NSA’s power and specifically the lack of judicial oversight is misplaced and out of proportion precisely because the public is mostly ignorant about the scope of what is already available to the government.”
So in summary, the problem is much bigger than the average person realizes, and other email vendors don’t care about it.
There are several projects out there trying to make encryption a more realistic option. In order to change internet communications to make end-to-end encryption ubiquitous, any protocol proposal needs wide adoption by key players in the email world, particularly by client apps (as opposed to webmail solutions where the encryption problem is virtually intractable.) As Thunderbird is currently the dominant multi-platform open-source email client, we are sometimes approached by people in the privacy movement to cooperate with them in making email encryption simple and ubiquitous. Most recently, I’ve had some interesting conversations with Volker Birk of Pretty Easy Privacy about working with them.
Should this be a focus for Thunderbird development?
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
https://globau.wordpress.com/2015/08/25/happy-bmo-push-day-157/
|
Christian Heilmann: Rock, Meats, JavaScript – BrazilJS 2015 |
I just got back from a 4 day trip to Brazil and back to attend BrazilJS. I was humbled and very happy to give the opening keynote seeing that the closing was meant to be by Brendan Eich and Andreas Gal – so, no pressure.
In my keynote, I asked for more harmony in our community, and more ownership of the future of JavaScript by those who use it in production.
For quite a while now, I have been confused as to who we are serving as browser makers, standards writers and library creators. All of the excellent solutions we have seem to fall through the cracks somewhere when you see what goes live.
That’s why I wanted to remind the audience that whatever amazing, inspiring and clever thing they’ll hear about at the conference is theirs to take to fruition. We have too much frustration in our market, and too much trying to one-up one another instead of trying to solve problems and making the solutions easily and readily available. The slides are on Slideshare, and a video will become available soon.
There are a few things to remember when you are going to Brazil:
BrazilJS was a ridiculous attempt at creating the biggest JavaScript event with 1,300 people. And it was a 100% success at that. I am fascinated by the professionalism, the venue, the AV setup and all the things that were done for speakers and attendees alike. Here are just a few things that happened:
So, all I can say is thank you to everyone involved. This was a conference to remember and the enthusiasm of the people I met and talked to is a testament to how much this worked!
BrazilJS was an interesting opportunity for me as I wanted to connect with my Microsoft colleagues in the country. I was amazed by how well-organised our participation was and loved the enthusiasm people had for us. Even when one of our other speakers couldn’t show up, we simply ran an impromptu Q&A on stage about Edge. Instead of a sales booth we had technical evangelists at hand, who also helped with translation. Quite a few people came to the booth to fix their web sites for Microsoft Edge’s standards-compliant rendering. It’s fun to see when fixing things yields quick results.
Other short impressions:
Thank you to everyone involved. Thank you to everybody who asked me lots of technical questions and gave unfiltered feedback. Thank you for showing that a lot of geeks can also be very human and warm. Thank you for embracing someone who doesn’t speak your language. I met quite a few people I need to follow up with, and I even had a BBQ with the family of two of the attendees before I went to my plane back home. You rock!
http://christianheilmann.com/2015/08/25/rock-meats-javascript-braziljs-2015/
|
Hannah Kane: Vancouver Trip Summary |
I spent Thursday and Friday of last week with my lovely colleagues in Vancouver. Some things to note:
Here’s how the two days went down:
All in all, it was a really productive couple of days. We’ll be getting wireframes and then mockups out to various stakeholders over the next heartbeat, along with hashing out the technical issues with our engineering team.
Feel free to share any comments and questions.
|
Jim Chen: Recent Fennec platform changes |
There has been a series of recent changes to the Fennec platform code (under widget/android). Most of the changes were refactoring in preparation for supporting multiple GeckoViews.
Currently, only one GeckoView is supported at a time in an Android app. This is the case for Fennec, where all tabs are shown within one GeckoView in the main activity. However, we'd like to eventually support having multiple GeckoViews at the same time, which would not only make GeckoView more usable and make more features possible, but also reduce a lot of technical debt that we have accumulated over the years.
The simplest way to support multiple GeckoViews is to open multiple nsWindows on the platform side, and associate each GeckoView with a new nsWindow. Right now, we open a new nsWindow in our command line handler (CLH) during startup, and never worry about having to open another window again. In fact, we quit Fennec by closing our only window. This assumption of having only one window will change for multiple GeckoView support.
Next, we needed a way of associating a Java GeckoView with a C++ nsWindow. For example, if a GeckoView sends a request to perform an operation, Gecko would need to know which nsWindow corresponds to that GeckoView. However, Java and platform would need to coordinate GeckoView and nsWindow creation somehow so that a match can be made.
Lastly, existing messaging systems would need to change. Over the years, GeckoAppShell has been the go-to place for platform-to-Java calls, and GeckoEvent has been the go-to for Java-to-platform calls. Over time, the two classes became a big mess of unrelated code stuffed together. Having multiple GeckoViews would make it even harder to maintain these two classes.
But there's hope! The recent refactoring introduced a new mechanism of implementing Java native methods using C++ class members 1). Using the new mechanism, calls on a Java object instance are automatically forwarded to calls on a C++ object instance, and everything in-between is auto-generated. This new mechanism provides a powerful tool to solve the problems mentioned above. Association between GeckoView and nsWindow is now a built-in part of the auto-generated code – a native call on a GeckoView instance can now be transparently forwarded to a call on an nsWindow instance, without writing extra code. In addition, events in GeckoEvent can now be implemented as native methods. For example, preference events can become native methods inside PrefHelper, and the goal is to eventually eliminate GeckoEvent altogether 2).
Effort is underway to move away from using the CLH to open nsWindows, which doesn't give an easy way to establish an association between a GeckoView and an nsWindow 3). Instead, nsWindow creation would move into a native method inside GeckoView that is called during GeckoView creation. As part of moving away from using the CLH, making a speculative connection was moved out of the CLH into its own native method inside GeckoThread 4). That also had the benefit of letting us make the speculative connection much earlier in the startup process.
This post provides some background on the on-going work in Fennec platform code. I plan to write another follow-up post that will include more of the technical details behind the new mechanism to implement native calls.
http://www.jnchen.com/blog/2015/08/recent-fennec-platform-changes
|
Air Mozilla: Chris Beard: Community Participation Guidelines |
Mozilla CEO Chris Beard talks about the Mozilla Project's Community Participation Guidelines in a recent Monday Project Meeting.
https://air.mozilla.org/chris-beard-community-participation-guidelines/
|
John O'Duinn: “we are all remote” at Cultivate NYC |
It’s official!!
I’ll be speaking about remoties at the O’Reilly Cultivate conference in NYC!
Cultivate is being held on 28-29 Sept 2015, in the Javits conference center, in New York City. This is intentionally the same week, and same location, as the O’Reilly Strata+Hadoop World conference, so if you lead others in your organization, and are coming to Strata anyways, you should come a couple of days early to focus on cultivate-ing (!) your leadership skills. For more background on O’Reilly’s series of Cultivate conferences, check out this great post by Mike Loukides. I attended the Cultivate Portland conference last month, when it was co-located with OSCON, and found it insightful edge-of-my-seat stuff. I expect Cultivate NYC to be just as exciting.
Meanwhile, of course, I’m still writing like crazy on my book (and writing code when no-one is looking!), so have to run. As always, if you work remotely, or are part of a distributed team, I’d love to hear what does/doesn’t work for you and any wishes you have for topics to include in the book – just let me know.
Hope to see you in NYC next month.
John.
=====
http://oduinn.com/blog/2015/08/24/we-are-all-remote-at-cultivate-nyc/
|
Cameron Kaiser: Okay, you want WebExtensions API suggestions? Here's three. |
Not to bring out the "lurkers support me in E-mail" argument but the public blog comments are rather different in opinion and tenor from the E-mail I got regarding our last post upon my supreme concern and displeasure over the eventual end of XPCOM/XUL add-ons. I'm not sure why that should be, but never let it be said that MoFo leadership doesn't stick to their (foot)guns.
With that in mind let me extend, as an author of a niche addon that I and a number of dedicated users employ regularly for legacy protocols, an attempt at an olive branch. Here's the tl;dr: I need a raw socket API, I need a protocol handler API, and I need some means of dynamically writing a document/data stream and handing it to the docshell. Are you willing?
When Mozilla decommissioned Gopher support in Firefox 4, the almost universal response was "this shouldn't be in core" and the followup was "if you want it, it should be an add-on, maintained by the community." So I did, and XPCOM let me do this. With OverbiteFF, Gopher menus (and through an analogous method whois and ph) are now first class citizens in Firefox. You can type a Gopher URL and it "just works." You can bookmark them. You can interact with them. They appear no differently than any other web page. I created XPCOM components for a protocol object and a channel object, and because they're XPCOM-based they interact with the docshell just like every other native core component in Necko.
More to the point, I didn't need anyone's permission to do it. I just created a component and loaded it, and it became as "native" as anything else in the browser. Now I need "permission." I need APIs to do what I could do all by myself beforehand.
What I worry is that Mozilla leadership is going to tick the top 10 addons or so off as working and call it a day, leaving me and other niche authors no way of getting ours to work. I don't think these three APIs are either technically unrealistic or lack substantial global applicability; they're foundational for getting new types of protocol access into the browser, not just old legacy ones. You can innovate nearly anything network-based with these three proposals.
So how about it? I know you're reading. Are you going to make good on your promises to us little guys, or are we just screwed?
http://tenfourfox.blogspot.com/2015/08/okay-you-want-webextensions-api.html
|
Air Mozilla: Mozilla Weekly Project Meeting |
The Monday Project Meeting
https://air.mozilla.org/mozilla-weekly-project-meeting-20150824/
|
Robert Kaiser: Ending Development and Support for My Add-ons |
http://home.kairo.at/blog/2015-08/ending_development_and_support_for_my_ad
|
QMO: Firefox 41 Beta 3 Testday Results |
Hello Mozillians!
As you may already know, last Friday – August 21st – we held a new Testday event, for Firefox 41 Beta 3.
Results:
We’d like to take this opportunity to thank alex_mayorga, Bolaram Paul, Chandrakant Dhutadmal, Luna Jernberg, Moin Shaikh, gaby 2300 and the Bangladesh QA Community: Hossain Al Ikram, Rezaul Huque Nayeem, Nazir Ahmed Sabbir, Forhad Hossain, Md. Rahimul Islam, Sajib Raihan Russell, Rakibul Islam Ratul, Saheda Reza Antora, Sunny, Mohammad Maruf Islam for getting involved in this event and making Firefox as best as it could be.
Also a big thank you goes to all our active moderators.
Keep an eye on QMO for upcoming events!
https://quality.mozilla.org/2015/08/firefox-41-beta-3-testday-results/
|
Emily Dunham: X240 trackpoint speed |
The screen on my X1 Carbon gave out after a couple months, and my loaner laptop in the meantime is an X240.
The worst thing about this laptop is how slowly the trackpoint moves with a default Ubuntu installation. However, it’s fixable:
cat /sys/devices/platform/i8042/serio1/serio2/speed
cat /sys/devices/platform/i8042/serio1/serio2/sensitivity
Note the starting values in case anything goes wrong, then fiddle around:
echo 255 | sudo tee /sys/devices/platform/i8042/serio1/serio2/sensitivity
echo 255 | sudo tee /sys/devices/platform/i8042/serio1/serio2/speed
Some binary-search-themed prodding and a lot of "tee: /sys/devices/platform/i8042/serio1/serio2/sensitivity: Numerical result out of range" errors have confirmed that both files accept values between 0 and 255. Interestingly, setting them to 0 does not seem to disable the trackpoint completely.
If you’re wondering why the configuration settings look like ordinary files but choke on values bigger or smaller than a short, go read about sysfs.
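The poke-and-check loop above can be wrapped in a small script. This is a sketch of my own, not part of the original post: the clamping helper is mine, and the sysfs paths are the ones from this particular X240, so yours may differ.

```shell
# Clamp a requested value to the 0-255 range the sysfs files accept,
# so writes never trigger "Numerical result out of range".
clamp() {
  v="$1"
  [ "$v" -gt 255 ] && v=255
  [ "$v" -lt 0 ] && v=0
  echo "$v"
}

# Write a clamped value to both trackpoint tunables (needs root).
set_trackpoint() {
  v="$(clamp "$1")"
  for f in sensitivity speed; do
    echo "$v" | sudo tee "/sys/devices/platform/i8042/serio1/serio2/$f"
  done
}

# set_trackpoint 300   # would write 255 to both files
```

Note that these sysfs values reset on reboot, so a setting you like belongs in a boot script or udev rule.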
|
This Week In Rust: This Week in Rust 93 |
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.
86 pull requests were merged in the last week.
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:
- catch_panic.
- CoerceUnsized should ignore PhantomData fields.
- \n or \r\n.
- Debug of tuples, tuple structs and enum variants in a single line.
- x...y expression to create an inclusive range.
- [Op]Assign traits to allow overloading assignment operations like a += b.
- str and String API.
- RawOs marker traits.
- str::starts_with and ends_with.
- CStr::from_bytes.
If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.
No jobs listed for this week. Tweet us at @ThisWeekInRust to get your job offers listed here!
"unsafe is as viral and pernicious as pop music, though obviously not as dangerous." — Daniel Keep at TRPLF.
Thanks to llogiq for the tip. Submit your quotes for next week!
http://this-week-in-rust.org/blog/2015/08/24/this-week-in-rust-93/
|
Wladimir Palant: Missing a rationale for WebExtensions |
Mozilla’s announcement to deprecate XUL/XPCOM-based add-ons raises many questions. Seeing the reactions, it seems that most people are very confused now. I mean, I see where this is coming from. XUL and XPCOM have become a burden, they come at a huge memory/performance/maintenance cost, impose significant limitations on browser development and create the danger that a badly written extension breaks everything. Whatever comes to replace them certainly won’t give add-on developers the same flexibility however, especially when it comes to extending the user interface. This is sad but I guess that it has to be done.
What confuses me immensely however is WebExtensions which are touted as the big new thing. My experience with Chrome APIs isn’t all too positive, the possibilities here are very limited and there is a fair number of inconsistencies. The documentation isn’t great either, there are often undocumented details that you only hit when trying things out. This isn’t very surprising of course: the API has grown along with Chrome itself, and many of the newer concepts simply didn’t exist back when the first interfaces were defined. Even worse: Opera, despite using the same engine, occasionally implements the same APIs differently.
So my main question is: does Mozilla really plan to keep bug-for-bug compatibility with Chrome APIs all for the goal of cross-browser extensions? That’s an endless catch-up game that benefits Chrome way more than it helps Firefox. And what is this cross-browser story still worth once Mozilla starts adding their own interfaces which are incompatible to Chrome? Don’t forget that Chrome can add new APIs as well, maybe even for the same purpose as Mozilla but with a different interface.
Further, I don’t see any advantages of WebExtensions over the Add-on SDK. I wasn’t a fan of the SDK back when it was introduced, but I must say that it has really matured over the years. It took much time for the SDK to become the modern, consistent and flexible framework that it is right now. Mozilla invested significant effort into it, and it paid off. What’s even more important, there is now sufficient documentation and examples for it on the web. Why throw all this away for a completely different framework? Note that the announcement says that most SDK-based extensions will continue to work in the new world, but according to the comments below it won’t be a development focus any more — from my experience, that’s a euphemism for “deprecated.”
As mentioned above, I don’t see how theoretical cross-browser compatibility is going to benefit Firefox. Maybe the advantage lies in the permission model? But Chrome’s permission model is broken — most extensions need APIs that could potentially allow them to access user’s browsing history. While most extensions won’t actually do anything like that, privacy violations are a very common issue with Chrome extensions. The privilege prompt doesn’t help users recognize whether there is a problem, it merely trains users to ignore privacy-related warnings. Oh, and it shifts the blame from Google to the user — the user has been warned, right?
In my opinion, this permission model cannot be seen as a substitute for reviews. Nor will it make reviews easier than for SDK-based extensions: it’s pretty easy to extract the list of SDK modules used by an extension. That gives one pretty much the same information as a list of permissions, albeit without requiring an explicit list.
There must be a reason why Mozilla chose to develop WebExtensions rather than focus that energy on improving the Add-on SDK. Sadly, I haven’t seen it mentioned anywhere so far.
https://palant.de/2015/08/23/missing-a-rationale-for-webextensions
|
Robert O'Callahan: Hooray For WebExtensions |
Many of the comments on WebExtensions miss an important point: basing WebExtensions on the Chrome extension API does not imply limiting WebExtensions to the Chrome API. For example, Bill already made it clear that we want to support addons like Tree Style Tabs and NoScript, which Chrome intentionally does not support. So Firefox addons will continue to be able to do things you can't do in Chrome (though there will be some things you can hack into Firefox's XUL today that won't be supported by WebExtensions, for sure).
WebExtensions is something we need to do and should have done many years ago, before Chrome even existed. It's what Jetpack should have been. But better late than never!
http://robert.ocallahan.org/2015/08/hooray-for-webextensions.html
|
Giorgio Maone: WebExtensions API & NoScript |
Many of you have read a certain announcement about the future of Firefox's add-ons and are worried about some extensions, including NoScript, being deeply rooted into those Mozilla's core technologies, such as XPCOM and XUL, which are going to be deprecated.
Developers and users are also concerned about add-ons being prevented from exploring radically new concepts which would require those "super powers" apparently taken away by the WebExtensions API.
I'd like to reassure them: Mozilla is investing a lot of resources to ensure that complex and innovative extensions can also prosper in the new Web-centric ecosystem. In fact, as mentioned by Bill McCloskey, at this moment I'm working within Mozilla's Electrolysis team and with other add-on authors, involved in the design of mechanisms and processes that help developers experiment in directions not yet supported by the "official" WebExtensions API, which is going to be augmented and shaped around their needs and with their contributions.
I've just published a proposal, tentatively called native.js, to "embrace & extend" the WebExtensions API: all the interested parties are invited to discuss it on discourse.mozilla-community.org.
https://hackademix.net/2015/08/22/webextensions-api-noscript/
|
Alex Vincent: My two cents on WebExtensions, XPCOM/XUL and other announcements |
(tl;dr: There’s a lot going on, and I have some sage, if painful, advice for those who think Mozilla is just ruining your ability to do what you do. But this advice is worth exactly what you pay to read it. If you don’t care about a deeper discussion, just move to the next article.)
The last few weeks on Planet Mozilla have had some interesting moments: great, good, bad, and ugly. Honestly, all the recent traffic has impacts on me professionally, both present and future, so I’m going to respond very cautiously here. Please forgive the piling on – and understand that I’m not entirely opposed to the most controversial piece.
First of all, I’m more focused on running custom XUL apps via firefox -app than I am on extensions to base-line Firefox. I read the announcement about this very, very carefully. I note that there was no mention of XUL applications being affected, only XUL-based add-ons. The headline said “Deprecation of XUL, XPCOM…” but the text makes it clear that this applies mostly to add-ons. So for the moment, I can live with it.
Mozilla’s staff has been sending mixed messages, though. On the one hand, we’re finally getting a Firefox-based SDK into regular production. (Sorry, guys, I really wish I could have driven that to completion.) On the other, XUL development itself is considered dead – no new features will be added to the language, as I found to my dismay when a XUL tree bug I’d been interested in was WONTFIX’ed. Ditto XBL, and possibly XPCOM itself. In other words, what I’ve specialized in for the last dozen years is becoming obsolete knowledge.
I mean, I get it: the Web has to evolve, and so do the user-agents (note I didn’t say “browsers”, deliberately) that deliver it to human beings. It’s a brutal Darwinian process of not just technologies, but ideas: what works, spreads – and what’s hard for average people (or developers) to work with, dies off.
But here’s the thing: Mozilla, Google, Microsoft, and Opera all have huge customer bases to serve with their browser products, and their customer bases aren’t necessarily the same as yours or mine (other developers, other businesses). In one sense we should be grateful that all these ideas are being tried out. In another, it’s really hard for third parties like FileThis or TenFourFox or NoScript or Disruptive Innovations, who have far fewer resources and different business goals, to keep up with that brutally fast Darwinian pace these major companies have set for themselves. (They say it’s for their customers, and they’re probably right, but we’re coughing on the dust trails they kick up.) Switching to an “extended support release” branch only gives you a longer stability cycle… for a while, anyway, and then you’re back in catch-up mode.
A browser for the World Wide Web is a complex beast to build and maintain, and growing more so every year. That’s because in the mad scramble to provide better services for Web end-users, they add new technologies and new ideas rapidly, but they also retire “undesirable” technologies. Maybe not so rapidly – I do feel sympathy for those who complain about CSS prefixes being abused in the wild, for example – but the core products of these browser providers do eventually move on from what, in their collective opinions, just isn’t worth supporting anymore.
So what do you do if you’re building a third-party product that relies on Mozilla Firefox supporting something that’s fallen out of favor?
Well, obviously, the first thing you do is complain on your weblog that gets syndicated to Planet Mozilla. That’s what I’m doing, isn’t it?
Ultimately, though, you have to own the code. I’m going to speak very carefully here.
In economic terms, we web developers deal with an oligopoly of web browser vendors: a very small but dominant set of players in the web browsing “market”. They spend vast resources building, maintaining and supporting their products and largely give them away for free. In theory the barriers to entry are small, especially for Webkit-based browsers and Gecko: download the source, customize it, build and deploy.
In practice… maintenance of these products is extremely difficult. If there’s a bug in NSS or the browser devtools, I’m not the best person to fix it. But I’m the Mozilla expert where I work, and usually have been.
I think it isn’t a stretch to say that web browsers, because of the sheer number of features needed to satisfy the average end-user, rapidly approach the complexity of a full-blown operating system. That’s right: Firefox is your operating system for accessing the Web. Or Chrome is. Or Opera, or Safari. It’s not just HTML, CSS and JavaScript anymore: it’s audio, video, security, debuggers, automatic updates, add-ons that are mini-programs in their own right, canvases, multithreading, just-in-time compilation, support for mobile devices, animations, et cetera. Plus the standards themselves, which are also evolving at high frequency.
My point in all this is as I said above: we third party developers have to own the code, even code bases far too large for us to properly own anymore. What do I mean by ownership? Some would say, “deal with it as best you can”. Some would say, “Oh yeah? Fork you!” Someone truly crazy (me) would say, “consider what it would take to build your own.”
I mean that. Really. I don’t mean “build your own.” I mean, “consider what you would require to do this independently of the big browser vendors.”
If that thought – building something that fits your needs and is complex enough to satisfy your audience of web end-users, who are accustomed to what Mozilla Firefox or Google Chrome or Microsoft Edge, etc., provide them already, complete with back-end support infrastructure to make it seamlessly work 99.999% of the time – scares you, then congratulations: you’re aware of your limited lifespan and time available to spend on such a project.
For what it’s worth, I am considering such an idea. For the future, when it comes time to build my own company around my own ideas. That idea scares the heck out of me. But I’m still thinking about it.
Just like reading this article, when it comes to building your products, you get what you pay for. Or more accurately, you only own what you’re paying for. The rest of it… that’s a side effect of the business or industry you’re working in, and you’re not in control of these external factors you subconsciously rely on.
Bottom line: browser vendors are out to serve their customer bases, which number in the tens, if not hundreds, of millions of people. How much of the code, of the product, that you are complaining about do you truly own? How much of it do you understand and can support on your own? The chances are, you’re relying on benevolent dictators in this oligopoly of web browsers.
It’s not a bad thing, except when their interests don’t align with yours as a developer. Then it’s merely an inconvenience… for you. How much of an inconvenience? Only you can determine that.
Then you can write a long diatribe for Planet Mozilla about how much this hurts you.