Soledad Penades: Score another one for the web! |
Last week I made a quick trip to Spain. It was a pretty early flight and I was quite sleepy and so… I totally forgot my laptop! I indeed thought that my bag felt “a bit weird”, as the laptop makes the back flat (when it’s in the bag), but I was quite zombified, and so I just kept heading to the station.
I realised my laptop wasn’t there by the time I had to take my wallet out to buy a train ticket. You see, TFL have been making a really big noise about the fact that you can now use your Oyster to travel to Gatwick. But they have been very quiet about requiring people to have enough credit in their cards to pay the full amount of the ticket. And since I use “auto top up”, sometimes my card might have £18. Sometimes it won’t, as in this case.
Anyway, I didn’t want to go back for the laptop, as I was going on a short holiday trip, and a break from computers would be good. Except… I did have stuff to do, namely researching for my next trip!
I could use my phone, but I quite dislike using phones for researching trips: the screen is just too small, the keyboard is insufferable, and I want to open many tabs, look at maps, go back and forth, which isn’t easy on a phone, etc. I could also borrow some relative’s laptop… or I could try to resuscitate an old tablet that I hadn’t used since 2013!
It had become faulty at the beginning of 2013, but I thought I had fixed it. But months later, it decided to enter its mad loop of “restart, restart, restart and repeat” during a transatlantic flight. I had to hide it in my bag and let its battery run out. And then I was very bored during both the rest of the flight, and the flight back, as all my carefully compiled entertainment was on it. Bah! And so I stopped using it and at some point I brought it to Spain, “just in case”.
Who would have guessed I’d end up using it again!?
I first spent about 30 minutes looking for a suitable plug for the charger. This tablet requires 2A and all the USB chargers I could find were 0.35A or 0.5A. The charger only had USA style pins, but that part could be removed, and revealed a “Mickey mouse” connector, or C7/C8 coupler if you want to be absolutely specific. A few years ago you could find plenty of appliances using this connector, but nowadays? I eventually found the charger for an old camera, with one of these cables! So I made a Frankenchargenstein out of old parts. Perfect.
The tablet took a long time to even show the charging screen. After a while I could finally turn it on, and oh wow, Android has changed a lot for the better since 3.1. But even if this tablet could be updated easily, I had no laptop and no will to install developer tools on somebody else’s laptop. So I was stuck in 3.1.
The Play Store behaved weirdly, with random glitches here and there. Many apps would not show any update, as developers have moved on to use newer versions of the SDK in order to use new hardware features and what not, and I don’t blame them, because programming apps that can work with different SDKs and operating system versions in Android is a terribly painful experience. So the easiest way to deal with old hardware or software versions is just not supporting them at all. But this leaves out people using outdated devices.
One of these “discriminatory apps” I wanted to install for my research was a travel app which lets you save stuff you would like to visit, and displays it on a map, which is very convenient for playing it by ear when you’re out and about. Sadly, it did not offer a version compatible with my device.
But I thought: Firefox still works in Android 3.1!
I got it updated to the latest version and opened the website for this app/service, and guess what? I could access the same functionalities I was looking for, via the web.
And being really honest, it was even better than using the app. I could have a tab with the search results, and open the interesting ones in a different tab, then close them when I was done perusing, without losing the scrolling point in the list. You know… like we do with normal websites. And in fact we’re not even doing anything superspecial with the app either. It’s not like it’s a high end game or like it works offline (which it doesn’t). Heck, it doesn’t even work properly when the network is a bit flaky… like most of the apps out there.
http://soledadpenades.com/2016/02/10/score-another-one-for-the-web/
|
Air Mozilla: Quality Team (QA) Public Meeting, 10 Feb 2016 |
The bi-monthly status review of the Quality team at Mozilla. We showcase new and exciting work on the Mozilla project, highlight ways you can get...
https://air.mozilla.org/quality-team-qa-public-meeting-20160210/
|
Air Mozilla: The Joy of Coding - Episode 44 |
mconley livehacks on real Firefox bugs while thinking aloud.
|
Chris H-C: SSE2 Support in Firefox Users |
Let me tell you a story.
Intel invented the x86 assembly language back in the Dark Ages of the Late 1970s. It worked, and many CPUs implemented it, consolidating a fragmented landscape into a more cohesive and compatible whole. Unfortunately, x86 had limitations, so in time it would have to go.
Lo, the time came in the Middle Ages of the Mid 1980s when x86 had to be replaced with something that could handle 32-bit widths for numbers and addresses. And more registers. And yet more addressing modes.
But x86 was popular, so Intel didn’t replace it. Instead they extended it with something called IA-32. And it was popular as well, not least because it was backwards-compatible with basic x86: all of the previous x86 programs would work on x86 + IA-32.
By now, personal and business computing was well in the mainstream. This means Intel finally had some data on what, at the lowest level, programmers were wanting to run on their chips.
It turns out that most of the heaviest computations people wanted to do on computers were really simple to express: multiply this list of numbers by a number, add these two lists of numbers together… spreadsheet sorts of things. Finance sorts of things.
But also video games sorts of things. Windows 95 released with DirectX and unleashed a flood of computer gaming. To the list we can now add: move every point and pixel of this 3D model forward by one step, transform all of this geometry and these textures from this camera POV to that one, recolour this sprite’s pixels to be darker to account for shade.
The structure all of these (and a lot of other) tasks had in common was that they all wanted to do one thing (multiply, add, move, transform, recolour) over multiple pieces of data (one list of numbers, multiple lists of numbers, points and pixels, geometry and textures, sprite colours).
SIMD stands for Single Instruction Multiple Data and is how computer engineers describe these sorts of “do one action over and over again to every individual element in this list of data” operations.
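If you want to see that shape in code, here is a tiny illustration (in Rust, purely for readability, with made-up data): one operation applied across whole lists of numbers, exactly the kind of loop a compiler targeting SSE2 can turn into instructions that process several elements at once.

fn add_lists(a: &[f32], b: &[f32], out: &mut [f32]) {
    // One operation (add), applied to every element of two lists.
    // With SSE2 available, the compiler can vectorize this loop so each
    // instruction handles several floats instead of one at a time.
    for ((x, y), o) in a.iter().zip(b).zip(out.iter_mut()) {
        *o = *x + *y;
    }
}

fn main() {
    let a = [1.0_f32, 2.0, 3.0, 4.0];
    let b = [10.0_f32, 20.0, 30.0, 40.0];
    let mut out = [0.0_f32; 4];
    add_lists(&a, &b, &mut out);
    println!("{:?}", out); // [11.0, 22.0, 33.0, 44.0]
}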
So, for Intel’s new flagship “Pentium” processor they were releasing in 1997 they introduced a new extension: MMX (which doesn’t stand for anything. They apparently chose the letters because they looked cool). MMX lets you do some of those SIMD things directly at the lowest level of the computer with the limitation that you can’t also be performing high-precision math at the same time.
AMD was competing with Intel. Not happy with the limitations of the MMX extension, they developed their own x86 extension “3DNow!” which performed the same operations, but without the limitations and with higher precision.
Intel retaliated with SSE: Streaming SIMD Extensions. They shipped it on their Pentium III processors starting in ’99. It wasn’t a full replacement for MMX, though, so they had to quickly follow it up in the Pentium 4.
Which finally brings us to SSE2. First released in 2001 in the Pentium 4 line of processors (also implemented by AMD two years later in their Opteron line), it reimplemented MMX’s capabilities without its shortcomings (and added some other capabilities at the same time).
So why am I talking ancient history? 2001 was fifteen years ago. What use do we have for this lesson on SSE2 when even SSE4 has been around since ’07, and AVX-512 will ship on real silicon within months?
Well, it turns out that Firefox doesn’t assume you have SSE2 on your computer. It can run on fifteen-year-old hardware, if you have it.
There are some code paths that benefit strongly from the ability to run the SIMD instructions present in SSE2. If Mozilla can’t assume that everyone running Firefox has a computer capable of running SSE2, Firefox has to detect, at runtime, whether the user’s computer is capable of using that fast path.
This makes Firefox bigger, slower, and harder to test and maintain.
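To make that cost concrete, here is roughly what such a runtime probe involves. This is an illustrative Rust sketch, not Firefox's actual C++ detection code: CPUID leaf 1 reports SSE2 support in bit 26 of EDX, and every code path that wants the fast SIMD version has to branch on a check like this.

#[cfg(target_arch = "x86")]
fn cpu_has_sse2() -> bool {
    // CPUID leaf 1 reports SSE2 support in bit 26 of EDX.
    let info = unsafe { std::arch::x86::__cpuid(1) };
    ((info.edx >> 26) & 1) == 1
}

#[cfg(not(target_arch = "x86"))]
fn cpu_has_sse2() -> bool {
    // On x86_64, SSE2 is part of the baseline instruction set, so the
    // question only really arises for 32-bit builds; assume true here.
    true
}

fn main() {
    if cpu_has_sse2() {
        println!("SSE2 fast path available");
    } else {
        println!("falling back to the non-SSE2 path");
    }
}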
A question came up on the dev-platform mailing list about how many Firefox users are actually running computers that lack SSE2. I live in a very rich country and have a very privileged job. Any assumption I make about who does and does not have the ability to run computers that are newer than fifteen years old is going to be clouded by biases I cannot completely account for.
So we turn to the data. Which means Telemetry. Which means I get to practice writing custom analyses. (Yay!)
It turns out that, if a Firefox user has Telemetry enabled, we ask that user’s computer about a lot of environmental information. What is your operating system? What version? How much RAM do you have installed? What graphics card do you have? What version is its driver?
And, yes: What extensions does your CPU support?
We collect this information to determine from real users’ machines whether a particular environmental variable makes Firefox behave poorly. In the not-too-distant past there was a version of Firefox that would just appear black. No reason, no recourse, no explanation. By examining environmental data we were able to track down what combination of graphics cards and driver versions were susceptible to this and develop a fix within days.
(If you want to see an application of this data yourself, here is a dashboard showing the population breakdown of Firefox users. You can use it to see how much of the Firefox user base is like you. For me, less than 1% of the user base was running a computer like mine with a Firefox like mine, reinforcing that what I might think makes sense may not exactly be representative of reality for Firefox users.)
So I asked of the data: of all the users reporting Telemetry on Thursday January 21, 2016, how many have SSE2 capability on their CPUs?
And the answer was: about 99.5%
This would suggest that at most 0.5% of the Firefox release population are running CPUs that do not have SSE2. This is not strictly correct (there are a variety of data science reasons why we cannot prove anything about the population that doesn’t report SSE2 capability), but it’s a decent approximation so let’s go with it.
From there, as with most Telemetry queries, there were more questions. The first was: “Are the users not reporting SSE2 support keeping themselves on older versions of Firefox?” This is a good question because, if the users are keeping themselves on old versions, we can enable SSE2 support in new versions and not worry about the users being unable to upgrade because they already chose not to.
Turns out, no. They’re not.
With such a small population we’re subdividing (0.5%) it’s hard to say anything for certain, but it appears as though they are mostly running up-to-date versions of Firefox and, thus, would be impacted by any code changes we release. Ah, well.
The next questions were: “We know SSE2 is required to run Windows 8. Are these users stuck on Windows XP? Are there many Linux users without SSE2?”
Turns out: yes and no. Yes, they almost all are on Windows XP. No, basically none of them are running Linux.
Support and security updates for Windows XP stopped on April 8, 2014. It probably falls under Mozilla’s mission to try and convince users still running XP to upgrade themselves if possible (as they did on Data Privacy Day), to improve the security of the Internet and to improve those users’ access to the Web.
If you are running Windows XP, or administer a family member’s computer who is, you should probably consider upgrading your operating system as soon as you are able.
If you are running an older computer and want to know if you might not have SSE2, you can open a Firefox tab to about:telemetry and check the Environment section. Under system should be a field “cpu.extensions” that will contain the token “hasSSE2” if Firefox has detected that you have SSE2.
(If about:telemetry is mostly blank, try clicking on the ‘Change’ links at the top labelled “FHR data upload is disabled” and “Extended Telemetry recording is disabled” and then restarting your Firefox)
SSE2 will probably be coming soon as a system requirement for Firefox. I hope all of our users are ready for when that happens.
:chutten
https://chuttenblog.wordpress.com/2016/02/10/sse2-support-in-firefox-users/
|
Luis Villa: Reinventing FOSS user experiences: a bibliography |
There is a small genre of posts around re-inventing the interfaces of popular open source software; I thought I’d collect some of them for future reference:
Recent:
Older:
The first two (Drupal, WordPress) are particularly strong examples of the genre because they directly grapple with the difficulty of change for open source projects. I’m sure that early Firefox and VE discussions also did that, but I can’t find them easily – pointers welcome.
Other suggestions welcome in comments.
http://lu.is/blog/2016/02/10/reinventing-foss-user-experiences-a-bibliography/
|
Robert O'Callahan: Introducing rr Chaos Mode |
Most of my rr talks start with a slide showing some nondeterministic failures in test automation while I explain how hard it is to debug such failures and how record-and-replay can help. But the truth is that until now we haven't really used rr for that, because it has often been difficult to get nondeterministic failures in test automation to show up under rr. So rr's value has mainly been speeding up debugging of failures that were relatively easy to reproduce. I guessed that enhancing rr recording to better reproduce intermittent bugs is one area where a small investment could quickly pay off for Mozilla, so I spent some time working on that over the last couple of months.
Based on my experience fixing nondeterministic Gecko test failures, I hypothesized that our nondeterministic test failures are mainly caused by changes in scheduling. I studied a particular intermittent test failure that I introduced and fixed, where I completely understood the bug but the test had only failed a few times on Android and nowhere else, and thousands of runs under rr could not reproduce the bug. Knowing what the bug was, I was able to show that sleeping for a second at a certain point in the code when called on the right thread (the ImageBridge thread) at the right moment would reproduce the bug reliably on desktop Linux. The tricky part was to come up with a randomized scheduling policy for rr that would produce similar results without prior knowledge of the bug.
I first tried the obvious: allow the lengths of timeslices to vary randomly; give threads random priorities and observe them strictly; reset the random priorities periodically; schedule threads with the same priority in random order. This didn't work, for an interesting reason. To trigger my bug, we have to avoid scheduling the ImageBridge thread while the main thread waits for a 500ms timeout to expire. During that time the ImageBridge thread is the only runnable thread, so any approach that can only influence which runnable thread to run next (e.g. CHESS) will not be able to reproduce this bug.
To cut a long story short, here's an approach that works. Use just two thread priorities, "high" and "low". Make most threads high-priority; I give each thread a 0.1 probability of being low priority. Periodically re-randomize thread priorities. Randomize timeslice lengths. Here's the good part: periodically choose a short random interval, up to a few seconds long, and during that interval do not allow low-priority threads to run at all, even if they're the only runnable threads. Since these intervals can prevent all forward progress (no control of priority inversion), limit their length to no more than 20% of total run time. The intuition is that many of our intermittent test failures depend on CPU starvation (e.g. a machine temporarily hanging), so we're emulating intense starvation of a few "victim" threads, and allowing high-priority threads to wait for timeouts or input from the environment without interruption.
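Here's a toy sketch of that policy (rr itself is C++; this is just illustrative Rust with a small inlined PRNG, and all the names are made up, not rr's actual scheduler):

// Tiny xorshift64 PRNG so the sketch needs no external crates.
struct Rng(u64);
impl Rng {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
    fn chance(&mut self, percent: u64) -> bool {
        self.next() % 100 < percent
    }
}

#[derive(Clone, Copy, PartialEq)]
enum Priority { High, Low }

// Most threads are high priority; each has a 10% chance of being low.
// Re-run this periodically to re-randomize priorities.
fn randomize_priorities(n_threads: usize, rng: &mut Rng) -> Vec<Priority> {
    (0..n_threads)
        .map(|_| if rng.chance(10) { Priority::Low } else { Priority::High })
        .collect()
}

// Pick the next thread to run. During a "starvation interval",
// low-priority threads may not run at all, even if they are the only
// runnable threads, in which case nothing runs.
fn pick_next(runnable: &[usize], priorities: &[Priority],
             in_starvation_interval: bool, rng: &mut Rng) -> Option<usize> {
    let candidates: Vec<usize> = runnable
        .iter()
        .copied()
        .filter(|&t| !(in_starvation_interval && priorities[t] == Priority::Low))
        .collect();
    if candidates.is_empty() {
        None
    } else {
        // Threads of equal priority are scheduled in random order.
        Some(candidates[rng.next() as usize % candidates.len()])
    }
}

fn main() {
    let mut rng = Rng(0x2545_F491_4F6C_DD1D);
    let priorities = randomize_priorities(4, &mut rng);
    println!("{:?}", pick_next(&[0, 1, 2, 3], &priorities, true, &mut rng));
    println!("{:?}", pick_next(&[0, 1, 2, 3], &priorities, false, &mut rng));
}

The key departure from approaches like CHESS is the None case: during a starvation interval the scheduler may run nothing at all, even though a low-priority thread is runnable.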
With this approach, rr can reproduce my bug in several runs out of a thousand. I've also been able to reproduce a top intermittent (now being fixed), an intermittent test failure that was assigned to me, and an intermittent shutdown hang in IndexedDB we've been chasing for a while. A couple of other people have found this enabled reproducing their bugs. I'm sure there are still bugs this approach can't reproduce, but it's good progress.
I just landed all this work on rr master. The normal scheduler doesn't do this randomization, because it reduces throughput, i.e. slows down recording for easy-to-reproduce bugs. Run rr record -h to enable chaos mode for hard-to-reproduce bugs.
I'm very interested in studying more cases where we figure out a bug that rr chaos mode was not able to reproduce, so I can extend chaos mode to find such bugs.
http://robert.ocallahan.org/2016/02/introducing-rr-chaos-mode.html
|
Michael Kaply: Mac Admin and Developer Conference UK Presentation |
I did a presentation today at the Mac Admin and Developer Conference UK on configuring Firefox.
I’m making the slides available now, and will link to the video when it is available.
Also, here is a link to the demo AutoConfig file.
https://mike.kaply.com/2016/02/09/mac-admin-and-developer-conference-uk-presentation/
|
The Mozilla Blog: The Internet is a Global Public Resource |
One of the things that first drew me to Mozilla was this sentence from our manifesto:
“The Internet is a global public resource that must remain open and accessible to all.”
These words made me stop and think. As they sunk in, they made me commit.
I committed myself to the idea that the Internet is a global public resource that we all share and rely on, like water. I committed myself to stewarding and protecting this important resource. I committed myself to making the importance of the open Internet widely known.
When we say, “Protect the Internet,” we are not talking about boosting Wi-fi so people can play “Candy Crush” on the subway. That’s just bottled water, and it will very likely exist with or without us. At Mozilla, we are talking about “the Internet” as a vast and healthy ocean.
We believe the health of the Internet is an important issue that has a huge impact on our society. An open Internet—one with no blocking, throttling, or paid prioritization—allows individuals to build and develop whatever they can dream up, without a huge amount of money or asking permission. It’s a safe place where people can learn, play and unlock new opportunities. These things are possible because the Internet is an open public resource that belongs to all of us.
Making the Internet a Mainstream Issue
Not everyone agrees that the health of the Internet is a major priority. People think about the Internet mostly as a “thing” other things connect to. They don’t see the throttling or the censorship or the surveillance that are starting to become pervasive. Nor do they see how unequal the benefits of the Internet have become as it spreads across the globe. Mozilla aims to make the health of the Internet a mainstream issue, like the environment.
Consider the parallels with the environmental movement for a moment. In the 1950s, only a few outdoor enthusiasts and scientists were talking about the fragility of the environment. Most people took clean air and clean water for granted. Today, most of us know we should recycle and turn out the lights. Our governments monitor and regulate polluters. And companies provide us with a myriad of green product offerings—from organic food to electric cars.
But this change didn’t happen on its own. It took decades of hard work by environmental activists before governments, companies and the general public took the health of the environment seriously as an issue. This hard work paid off. It made the environment a mainstream issue and got us all looking for ways to keep it healthy.
When it comes to the health of the Internet, it’s like we’re back in the 1950s. A number of us have been talking about the Internet’s fragile state for decades—Mozilla, the EFF, Snowden, Access, the ACLU, and many more. All of us can tell a clear story of why the open Internet matters and what the threats are. Yet we are a long way from making the Internet’s health a mainstream concern.
We think we need to change this, so much so that it’s now one of Mozilla’s explicit goals.
Read Mark Surman’s “Mozilla Foundation 2020 Strategy” blog post.
Starting the Debate: Digital Dividends
The World Bank’s recently released “2016 World Development Report” shows that we’re making steps in the right direction. Past editions have focused on major issues like “jobs.” This year the report focuses directly on “digital dividends” and the open Internet.
According to the report, the benefits of the Internet, like inclusion, efficiency, and innovation, are unequally spread. They could remain so if we don’t make the Internet “accessible, affordable, and open and safe.” Making the Internet accessible and affordable is urgent. However,
“More difficult is keeping the internet open and safe. Content filtering and censorship impose economic costs and, as with concerns over online privacy and cybercrime, reduce the socially beneficial use of technologies. Must users trade privacy for greater convenience online? When are content restrictions justified, and what should be considered free speech online? How can personal information be kept private, while also mobilizing aggregate data for the common good? And which governance model for the global internet best ensures open and safe access for all? There are no simple answers, but the questions deserve a vigorous global debate.”
—”World Development Report 2016: Main Messages,” p.3
We need this vigorous debate. A debate like this can help make the open Internet an issue that is taken seriously. It can shape the issue. It can put it on the radar of governments, corporate leaders and the media. A debate like this is essential. Mozilla plans to participate and fuel this debate.
Creating A Public Conversation
Of course, we believe the conversation needs to be much broader than just those who read the “World Development Report.” If we want the open Internet to become a mainstream issue, we need to involve everyone who uses it.
We have a number of plans in the works to do exactly this. They include collaboration with the likes of the World Bank, as well as our allies in the open Internet movement. They also include a number of experiments in a.) simplifying the “Internet as a public resource” message and b.) seeing how it impacts the debate.
Our first experiment is an advertising campaign that places the Internet in a category with other human needs people already recognize: Food. Water. Shelter. Internet. Most people don’t think about the Internet this way. We want to see what happens when we invite them to do so.
The outdoor campaign launches this week in San Francisco, Washington and New York. We’re also running variations of the message through our social platforms. We’ll monitor reactions to see what it sparks. And we will invite conversation in our Mozilla social channels (Facebook & Twitter).
Fueling the Movement
Of course, billboards don’t make a movement. That’s not our thinking at all. But we do think experiments and debates matter. Our messages may hit the mark with people and resonate, or they may tick them off. But our goal is to start a conversation about the health of the Internet and the idea that it’s a global resource that needs protecting.
Importantly, this is one experiment among many.
We’re working to bolster the open Internet movement and take it mainstream. We’re building easy encryption technology with the EFF (Let’s Encrypt). We’re trying to make online conversation more inclusive and open with The New York Times and The Washington Post (Coral Project). And we’re placing fellows and working on open Internet campaigns with organizations like the ACLU, Amnesty International, and Freedom of the Press Foundation (Open Web Fellows Program). The idea is to push the debate on many fronts.
About the billboards, we want to know what you think:
I’m hoping it does, but I’m also ready to learn from whatever the results may tell us. Like any important issue, keeping the Internet healthy and open won’t happen by itself. And waiting for it to happen by itself is not an option.
We need a movement to make it happen. We need you.
https://blog.mozilla.org/blog/2016/02/08/the-internet-is-a-global-public-resource/
|
The Mozilla Blog: Martin Thomson Appointed to the Internet Architecture Board |
Standards are a key part of keeping the Open Web open. The Web runs on standards developed mainly by two standards bodies: the World Wide Web Consortium (W3C), which standardizes HTML and Web APIs, and the Internet Engineering Task Force (IETF), which standardizes networking protocols, such as HTTP and TLS, the core transport protocols for the Web. I’m pleased to announce that Martin Thomson, from the CTO group, was recently appointed to the Internet Architecture Board (IAB), the committee responsible for the architectural oversight of the IETF standards process.
Martin’s appointment recognizes a long history of major contributions to the Internet standards process: including serving as editor for HTTP/2, the newest and much improved version of HTTP, helping to design, implement, and document WebPush, which we just launched in Firefox, and playing major roles in WebRTC, TLS and Geolocation. In addition to his standards work, Martin has committed code all over Gecko, in areas ranging from the WebRTC stack to NSS. Serving on the IAB will give Martin a platform to do even greater things for the Internet and the Open Web as a whole.
Please join me in congratulating Martin.
|
Jorge Villalobos: WebExtensions presentation at FOSDEM 2016 |
Last week, a big group of Mozillians converged in Brussels, Belgium for FOSDEM 2016. FOSDEM is a huge free and open source event, with thousands of attendees. Mozilla had a stand and a “dev room” for a day, which is a room dedicated to Mozilla presentations.
This year I attended for the first time, and I gave a presentation titled Building Firefox Add-ons with WebExtensions. The presentation covers some of the motivations behind the new API. I also spent a little time going over one of the WebExtensions examples on MDN. I only had 30 minutes for the whole talk, so it was all covered fairly quickly.
The presentation went well, and there were lots of people showing interest and asking questions. I felt the same way about all of the Mozilla presentations I attended, which makes me want to kick myself for not trying to go to FOSDEM before. It’s a great venue to discuss our ideas, and I want us to come back and do more. We have lots of European contributors and have been looking for a good venue for a meetup. This looks ideal, so maybe next year ;).
http://xulforge.com/blog/2016/02/webextensions-presentation-at-fosdem-2016/
|
Florian Quèze: Project ideas wanted for Summer of Code 2016 |
Google is running Summer of Code again in 2016. Mozilla has had the pleasure of participating many years so far, and even though we weren't selected last year, we are hoping to participate again this year. In the next few weeks, we need to prepare a list of suitable projects to support our application.
Can you think of a 3-month coding project you would love to guide a student through? This is your chance to get a student focusing on it for 3 months! Summer of Code is a great opportunity to introduce new people to your team and have them work on projects you care about but that aren't on the critical path to shipping your next release.
Here are the conditions for the projects:
If you have an idea, please put it on the Brainstorming page, which is our idea development scratchpad. Please follow the instructions at the top.
The deadline to submit project ideas and help us be selected by Google is February 19th.
Note for students: the student application period starts on March 14th, but the sooner you start discussing project ideas with potential mentors, the better.
http://blog.queze.net/post/2016/02/08/Project-ideas-wanted-for-Summer-of-Code-2016
|
Air Mozilla: Mozilla Weekly Project Meeting, 08 Feb 2016 |
The Monday Project Meeting
https://air.mozilla.org/mozilla-weekly-project-meeting-20160208/
|
Nikki Bee: Okay, but What Does Your Work Actually Mean, Nikki? Part 3: Translating A Standard Into Code |
Over my previous two posts, I described my introduction to work on Servo, and my experience with learning and modifying the Fetch Standard. Now I’m going to combine these topics today, as I’ll be talking about what it’s like putting the Fetch Standard into practice in Servo. The process is roughly: I pick a part of the Fetch Standard to implement or improve; I write it on a step-by-step basis, often asking many questions; then when I feel it’s mostly complete, I submit my code for review and feedback.
I will talk about the review aspect in my next post, along with other things, as this entry ended up being pretty long!
Whenever I realize I’m not sure what to be doing for the day, I go over my list of tasks, often talking with my project mentor about what I can do next. There’s a lot more work than I could manage in any internship - especially a 3 month long one - so having an idea of what aspects are the most important is good to keep in mind. Plus, I’m not equally skilled or knowledgeable about every aspect of Fetch or programming in Rust, and learning a new area more than halfway through my internship could be a significant waste of time. So, the main considerations are: “How important is this for Servo?”, “Will it take too long to implement?”, and “Do I know how to do this?”.
Often, my Servo mentor or co-workers are the only people who can answer “How important is this?”, since they’ve all been with Servo for much longer than me, and take a broader view- personally, I only think of Servo in terms of the Fetch implementation, which isn’t too far off from reality: the Fetch implementation will be used by a number of parts of Servo which can easily use any of it, with clear boundaries for what should be handled by Fetch itself.
I’m not too concerned with what’s most important for the Fetch implementation, since I can’t answer it by myself. There’s always multiple things I could be doing, and I have a better idea for answering the other two aspects.
“Will it take too long to implement?” is probably the hardest question, but one that gets just a bit easier all the time. Simply put, the more code I write, the better I can predict how long any specific task will take me to accomplish. There are always sort of random chances though: sometimes I run into surprising blocks for a single line of code; or I spend just half a day writing an entire Fetch step with no problems. Maybe with years of experience I will see those easy successes or hard failures coming as well, but for now, I’ll have to be content with rough estimates and a lenient time schedule.
The last question, “Do I know how to do this?”, depends greatly on what “this” is for me to be able to answer. Learning new aspects of Rust is always a closed book to me, in a way- I don’t know how difficult or simple any aspect of it will be to learn. Not to mention, just reading something has a minimal effect on my understanding. I need to put it into practice, multiple times, for me to really understand what I’m doing.
Unfortunately, programming Servo (or any project, really) doesn’t necessarily line up the concepts I need to learn and use in a nice order, so I often need to juggle multiple concepts of varying complexity at once. For areas not specific to Rust though, like generalized programming ideas, I can better gauge my ability. Writing tests? Sure, I’ve done that plenty- it shouldn’t be difficult to apply that to Rust. Writing code that handles multiple concurrent threads? I’ve only learned enough to know that those buzzwords mean something- I’d probably need a month to be taught it well!
Right now, and for the past while, my work on Servo has been focused on writing tests to make sure my implementation of Fetch, and the functions written previously by my co-workers, conform to what’s expected by the Fetch Standard. What factors in to deciding this is a good task though?
Most of the steps for Fetch are complete by now. The steps that aren’t coded either cannot be done yet in Servo, or are not necessary for a minimally working Fetch protocol. Sure, I can make the Rust compiler happy- but just because I can run the code at all doesn’t mean it’s working right! Thus, before deciding that the basics of Fetch have been perfected and can be built on, extensive test coverage of the code is significantly important. Testing the code means I can intentionally create many situations, both to make sure the result matches the standard, and that errors come up at only the defined moments.
Writing the tests is often straightforward. I just need to add a lot of conditionals, such as: the Fetch Standard says a basic filtered response has the type “basic”. That’s simple- I need to have the Fetch protocol return a basic filtered response, then verify that the type is “basic”! And so on for every declaration the Fetch Standard makes. The trickier side of this though is that I can’t be absolutely sure, until I run a test, whether or not the existing code supports this.
It’s simple on paper to return a basic filtered response- but when I tell my Fetch implementation to do so, maybe it’ll work right away, or maybe I’m missing several steps or made mistakes that prevent it from happening! The time is a bit open ended as a result, but that can be weighed with how good it would be to catch and repair a significant error.
I have experience with writing tests, as I like them conceptually very much. Often, testing a program by hand is slow, difficult, and hard to reproduce errors. When the testing itself is written in code, everything is accessible and easily repeatable. If I don’t know what’s going on, I can update the test code to be more informative and run it again, or dig into it with a debugger. So I have the knowledge of tests themselves- but what about testing in Rust, or even Servo?
I can tell you the answer: I hadn’t foreseen much difficulty (reading over how to write tests in Rust was easy), but I ended up lacking a lot of experience with testing Servo. Since Servo is a browser engine, and the Fetch protocol deals with fetching any kind of resource a browser accesses, I need to handle having a resource on a running server for Fetch to retrieve. While this is greatly simplified thanks to some of the libraries Servo works with, it still took a lot of time for me to build a good mental model of what I was doing and thus, be able to write effective tests.
So I’ve said a lot about what goes into picking a task, but what about actually writing the code? That requires knowing how to translate a step from the programming language-abstracted Fetch Standard into Rust code. Sometimes this is almost exactly like the original writing, such as step 1 of the Main Fetch function, “Let response be null”, which in Rust looks like this: let response = None;. In Rust, the let keyword makes a variable binding- it’s just a coincidence that the Main Fetch step uses the same word. And Rust’s null equivalent is called None (which declares a variable that is currently holding nothing, and cannot be used for anything while still None, but it sets aside some memory now for response to be used later).
Of course, not every step is so literal to translate to Rust. Take step 10 of Main Fetch for instance: “If main fetch is invoked recursively, return response”. The first part of this is knowing what it’s saying, which is “if main fetch is invoked by itself or another function it invokes (i.e., invoked recursively), return the response variable”. Translating that to Rust code gives us if main fetch is invoked recursively { return response }. This isn’t very good- main fetch is invoked recursively isn’t valid code, it’s a fragment of an English sentence.
The step doesn’t answer this for me, so I need to get answers elsewhere. There’s two things I can do: keep reading more of the Fetch Standard (and check personal notes I’ve made on it), or ask for help. I’ll do both, in that order. Right now, I have two questions I need answers to: “When is Main Fetch invoked recursively?”, and “How can I tell when Main Fetch is invoked recursively?”.
I often like to try to get answers myself before asking other people for help (although sometimes I do both at the same time, which has led to me answering my own questions immediately after asking a few times). I think it’s a good ideal to spend a few minutes trying to learn something on my own, but to also not let myself stay stuck on any one problem for more than about 15 minutes, or 30 minutes at the absolute worst. I want to use my own time well- asking a question I can answer on my own shortly isn’t necessary, but not asking a question that is beyond my capability can end up wasting a lot more time.
So I’ll start trying to figure out this step by answering for myself, “When is Main Fetch invoked recursively?”. Most functions in the Fetch Standard invoke at least one other function, which is how it all works- each part is separated so they can be understood in smaller chunks, and can easily repeat specific steps, such as invoking Main Fetch again.
What I would normally need to do here is read through the Fetch Standard to find at least one point where Main Fetch is invoked from itself or another function it invokes. Thankfully, I don’t have to go read through each of those two functions, and everything they call, and so on, until I happen to get my answer, because of part of my notes I took earlier, when I was reading through as much of the Fetch Standard as I could.
I had decided to make short notes declaring when each Fetch function calls another one, and in what step it happens. At the time I did so to help give me an understanding of the relations between all the Fetch functions- now, it’s going to help me pinpoint when Main Fetch is invoked recursively! First I look for what calls Main Fetch, other than the initial Fetch function which primarily serves to set up some basic values, then pass it on to Main Fetch. The only function that does so is HTTP Redirect Fetch, in step 15. I can also see that HTTP Fetch calls HTTP Redirect Fetch in step 5, and that Main Fetch calls HTTP Fetch in step 9.
That was easy! I’ve now answered the question: “Main Fetch is invoked recursively by HTTP Redirect Fetch.”
However, I’m still at a loss for answering “How can I tell when Main Fetch is invoked recursively?”. Step 15 of HTTP Redirect Fetch doesn’t say to set any variable that would say “oh yeah, this is a recursive call now”. In fact, no such variable is defined by the Fetch Standard to be used!* So, I’ll ask my Servo mentor on IRC.
This example I’m covering actually happened a month or so ago, so I’m not going to pretend I can remember exactly what the conversation was. But the result of it was that the solution is actually very simple: I add a new boolean (a variable that is just true, or false) parameter (a variable that must be sent to a function when invoking it) to Main Fetch that’ll say whether or not it’s being invoked recursively. When Main Fetch is invoked from the Fetch function, I set it to false; when Main Fetch is invoked from HTTP Redirect Fetch, I set it to true.
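To give an idea of what that change looks like in code (with made-up types and function names, not Servo’s actual signatures), it boils down to something like this:

struct Request;
struct Response;

// Entry point: Fetch sets up some basic values, then hands off to Main Fetch.
fn fetch(request: &Request) -> Option<Response> {
    main_fetch(request, false) // not a recursive invocation
}

fn main_fetch(_request: &Request, invoked_recursively: bool) -> Option<Response> {
    let response = None; // Step 1: "Let response be null."

    // ... many steps elided ...

    // Step 10: "If main fetch is invoked recursively, return response."
    if invoked_recursively {
        return response;
    }

    // ... remaining steps ...
    response
}

// Step 15 of HTTP Redirect Fetch re-enters Main Fetch, so the flag is true here.
fn http_redirect_fetch(request: &Request) -> Option<Response> {
    main_fetch(request, true)
}

fn main() {
    let request = Request;
    let _ = fetch(&request);
    let _ = http_redirect_fetch(&request);
}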
This is the typical process for every single function and step in the Fetch Standard. For every function I might implement, improve, or test, I repeat a similar decision process. For every step, I repeat a similar question-answer procedure, although unfortunately not all English to code translations are so shortly resolved as the examples in this post.
Sometimes picking a new part of the Fetch Standard to implement means it ends up relying on another piece, and needs to be put on hold for that, or occasionally my difficulty with a step might result in a request for change to the Fetch Standard to improve understanding or logic, as I’ve described in previous posts.
This, in sum with the other two parts of this series, effectively describes the majority of my work on Servo! Hopefully, you now have a better idea of what it all means.
Fourth post ending.
http://nikkisquared.github.io/2016/02/08/what-does-your-work-mean-part-3.html
|
Air Mozilla: SuMo weekly community call |
The SuMo (Support Mozilla) community meets every Monday in the SuMo Vidyo channel. Meetings are about 30 minutes long and start at 17:00 UTC.
|
Brian King: Connected Devices at the Singapore Leadership Summit |
Around 150 Mozillians gathered in Singapore for education, training, skills building, and planning to bring them and their communities into the fold on the latest Participation projects. With the overarching theme being leadership training, we had two main tracks. The first was Campus Campaign, a privacy focused effort to engage students as we work more with that demographic this year. Second, and the focus of this post, is Connected Devices.
Having built out a solid Firefox OS Participation program last year, the organisation is now moving more into Connected Devices. The challenge we have is to evolve the program to fit the new direction. However, the strategy and timeline have not been finalised, so in Singapore we needed to get people excited in a broader sense about what we are doing in this next phase of computing, and see what could be done now to start hacking on things.
We had three main sessions during the weekend. Here is a brief summary of each.
We haven’t been standing still since we announced the changes in December. During this session John Bernard walked us through why Mozilla is moving in this direction, how the Connected Devices team has been coming together, and how initial project proposals have been going through the ‘gating process’. The team will be structured in three parts based on the three core pillars of Core, Consumer, and Collaboration. The latter is in essence an embedded participation team, led by John. We talked about some of the early project ideas floating around, and we discussed possible uses of the foxfooding program, cunningly labelled ‘Outside the Fox’.
Dietrich Ayala then jumped in and talked about some of the platform APIs that we can use today to hook together the Web of Things. There are many ways to experiment today using existing Firefox OS devices and even Firefox desktop and mobile. The set of APIs in Firefox OS phones allow access to a wide range of sensors, enabling experimentation with physical presence detection, speech synthesis and recognition, and many types of device connectivity.
Check out the main sessions slides and Dietrich’s slides.
Led by Rina Jensen and Jared Cole, the Participation and Connected Devices teams have been working on a project to explore the open source community and understand what makes people contribute and be part of communities, from open hardware projects to open data projects. During the session, a few key research insights from that work were shared and provided to the group as input for a co-creation exercise. The participants then spent the next hour generating ideas focused on the ideal contributor experience. Rina and Jared are going to continue working closely with the Participation and Connected Devices teams to come up with a clear set of actionable recommendations.
You can find more information and links to session materials on the session wiki page.
True participation is working on all aspects of a project, from ideation, through implementation, to launch and beyond. The purpose of this session was two-fold:
On the first topic, we didn’t get very far on the day due to time constraints, but it is something we work on all the time of course at Mozilla. We have a good foundation built with the Firefox OS Participation project. Connected Devices is a field where we can innovate and excel, and we see a lot of excitement for all Mozillians to lead the way here. This discussion will continue.
For the second topic, we wanted to come out of the weekend with something tangible, to send a strong message that volunteer leadership is looking to the future and are ready to build things now. We heard some great ideas, and then broke out into teams to start working on them. The result is thirteen projects to get some energy behind, and I’m sure many more will arise.
In order to accelerate the next stage of tinkering and ideation, we’ve set up a small Mozilla Reps innovation fund to hopefully set in motion a more dynamic environment in which Mozillians can feel at home.
Connected Devices, Internet of Things, Web of Things. You have heard many labels for what is essentially the next era of computing. At Mozilla we want to ensure that technology-wise the Web is at the forefront of this revolution, and that the values we hold dear such as privacy are central. Now more than ever, open is important. Our community leaders are ready. Are you?
http://brian.kingsonline.net/talk/2016/02/connected-devices-at-the-singapore-leadership-summit/
|
QMO: Firefox 45 Beta 3 Testday Results |
Hello Mozillians!
As you may already know, last Friday – February 5th – we held a new Testday, for Firefox 45 Beta 3 and it was another successful event!
We’d like to take this opportunity to thank Iryna Thompson, Chandrakant Dhutadmal, Mohammed Adam, Vuyisile Ndlovu, Spandana Vadlamudi, Ilse Macías, Bolaram Paul, gaby2300, Ángel Antonio, Preethi Dhinesh and the people from our Bangladesh Community: Rezaul Huque Nayeem, Hossain Al Ikram, Raihan Ali, Moniruzzaman, Khalid Syfullah Zaman, Amlan BIswas, Abdullah Umar Nasib, Najmul Amin, Pranjal Chakraborty, Azmina Akter Papeya, Shaily Roy, Kazi Nuzhat Tasnem, Md Asaduzzaman John, Md.Tarikul Islam Oashi, Fahmida Noor, Fazle Rabbi, Md. Almas Hossain, Mahfuza Humayra Mohona, Syed Nayeem Roman, Saddam Hossain, Shahadat Hossain, Abdullah Al Mamun, Maruf Rahman, Muhtasim kabir, Ratul Ahmed, Mita Halder, Md Faysal Rabib, Tanvir Rahman, Tareq Saifullah, Dhiman roy, Parisa Tabassum, SamadTalukdar, Zubair Ahmed, Toufiqul haque Mamun, Md. Nurnobi, Sauradeep Dutta, Noban Hasan, Israt jahan, Md. Nazmus Shakib (Robin), Zayed News, Ashickur Rahman, Hasna Hena, Md. Rahimul islam, Mohammad Maruf Islam, Mohammed Jawad Ibne Ishaque, Kazi Nuzhat Tasnem and Wahiduzzaman Hridoy for getting involved in this event and making Firefox as best as it could be.
Results:
Also a big thank you goes to all our active moderators.
Keep an eye on QMO for upcoming events!
https://quality.mozilla.org/2016/02/firefox-45-beta-3-testday-results/
|
Daniel Glazman: Inventory and Strategy |
“There’s class warfare, all right, but it’s my class, the native class, that’s making war, and we’re winning.” -- Android and iOS, blatantly stolen from Warren Buffet
Firefox OS tried to bring Web apps to the mobile world and it failed. It has been brain dead - for phones - for three days and the tubes preserving its life will be turned off in May 2016. I myself don't believe at all in the IoT space being a savior for Mozilla. There are better and older competitors in that space, companies or projects that bring smaller, faster, cleaner software architectures to IoT, where footprint and performance are an even more important issue than in the mobile space. Yes, this is a very fragmented market; no, I'm not sure FirefoxOS can address it and reach the critical mass. In short, I don't believe in it at all.
Maybe it's time to discuss a little bit a curse word here: strategy. What would be a strategy for the near- and long-term future for Mozilla? Of course, what's below remains entirely my own view and I'm sure some readers will find it pure delirium. I don't really mind.
To do that, let's look a little bit at what Mozilla has in hand, and let's confront that with the conclusion drawn from the previous lines: native apps have won, at least for the time being.
We also need to take a look at Mozilla's past. This is not an easy nor pleasant inventory to make but I think it must be done here and to do it, we need to go back as far in time as the Netscape era.
Technology | Year(s) | Result |
Anya | 2003 | AOL (Netscape's parent company) did not want Anya, a remote browser moving most of the CPU constraints to the server, and it died despite being open-sourced by its author. At the same time, Opera successfully launched Opera Mini and eventually acquired its competitor SkyFire. Opera Mini has been a very successful product on legacy phones and even smartphones in areas with poor mobile connectivity. |
XUL | 2003- | Netscape - and later Mozilla - did not see any interest in bringing XUL to Standards committees. When competitors eventually moved to XML-based languages for UI, they adopted solutions (XAML, Flex, ...) that were not interoperable with it. |
Operating System | 2003- | A linux+Gecko Operating System is not a new idea. It was already discussed back in 2003 - yes, 2003 - at Netscape and was too often met with laughter. It was mentioned again multiple times between 2003 and 2011, without any apparent success. |
Embedding | 2004- | Embedding has always been a poor parent in Gecko's family. Officially dropped loooong ago, it drove embedders to WebKit and then Blink. At the time embedding should have been improved, the focus was solely on Firefox for desktop. If I completely understand the rationale behind a focus on Firefox for desktop at that time, the consequences of abandoning Embedding have been seriously underestimated. |
Editing | 2005- | Back in 2004/2005, it was clear Gecko had the best in-browser core editor on the market. Former Netscape editor peers working on Dreamweaver compared mozilla/editor with what Macromedia/Adobe had in hand. The comparison was vastly in favor of Mozilla. It was also easy to predict the aging Dreamweaver would soon need a replacement for its editor core. But editing was considered as non-essential at that time, more a burden than an asset, and no workforce was permanently assigned to it. |
Developer tools | 2005 | In 2005, Mozilla was so completely mistaken on Developer Tools, a powerful attractor for early adopters and Web Agencies, that it wanted to get rid of the error console. At the same moment, the community was calling for more developer tools. |
Runtime | 2003- | XULRunner has been quite successful for such a complex technology. Some rather big companies believed enough in it to implement apps that, even if you don't know their name, are still everywhere. As an example, here's at least one very large automotive group in Europe, a world-wide known brand, that uses XULRunner in all its test environments for car engines. That means all garages dealing with that brand use a XULRunner-fueled box... But unfortunately, XULRunner was never considered as essential, up to the point its name is still a codename. For some time, the focus was instead given to GRE, a shared runtime that was doomed to fail from the very first minute. |
Asian market | 2005 | While the Asian market was exploding, Gecko was missing a major feature: vertical writing. It prevented Asian embedders from considering Gecko as the potential rendering engine to embed in Ebook reading systems. It also closed access to the Asian market for many other usages. But vertical writing did not become an issue to fix for Mozilla until 2015. |
Thunderbird | 2007 | Despite growing adoption of Thunderbird in governmental organizations and some large companies, Mozilla decided to spin off Thunderbird into a Mail Corporation because it was unable to get a revenue stream from it. MailCo was eventually merged back with Mozilla and Thunderbird is again in 2015/2016 in limbo at Mozilla. |
Client Customization Kit | 2003- | Let's be clear, the CCK has never been seen as a useful or interesting project. Maintained only by the incredible will and talent of a single external contributor, many corporations rely on it to release Firefox to their users. Mozilla had no interest in corporate users. Don't we spend only 60% of our daily time at work? |
E4X | 2005-2012 | Everyone had high expectations about E4X and many were ready to switch to E4X to replace painful DOM manipulations. Unfortunately, it never allowed manipulating DOM elements (BMO bug 270553), making it totally useless. E4X support was deprecated in 2012 and removed after Firefox 17. |
Prism (WebRunner) | 2007-2009 | Prism was a webrunner, i.e. a desktop platform to run standalone self-contained web-based apps. Call them widgets if you wish. Prism was abandoned in 2009 and replaced by Mozilla Chromeless that is itself inactive too. |
Marketplace | 2009 | Several people called for an improved marketplace where authors could sell add-ons and standalone apps. That required a licensing mechanism and the possibility to blackbox scripting. It was never implemented that way. |
Browser Ballot | 2010 | The BrowserChoice.eu thing was a useless battle. If it brought some users to Firefox on the Desktop, the real issue was clearly the lack of browser choice on iOS, world-wide. That issue still stands as of today. |
Panorama (aka Tab Groups) | 2010 | When Panorama reached light, some in the mozillian community (including yours truly) said it was bloated, not extensible, not localizable, based on painful code, hard to maintain on the long run and heterogeneous with the rest of Firefox, and it was trying to change the center of gravity of the browser. Mozilla's answer came rather sharply and Panorama was retained. In late 2015, it was announced that Panorama will be retired because it's painful to maintain, is heterogeneous with the rest of Firefox and nobody uses it... |
Jetpack | 2010 | Jetpack was a good step on the path towards HTML-based UI but a jQuery-like framework was not seen by the community as what authors needed and it missed a lot of critical things. It never really gained traction despite being the "official" add-on way. In 2015, Mozilla announced it will implement the WebExtensions global object promoted by Google Chrome and WebExtensions is just a more modern and better integrated JetPack on steroids. It also means being Google's assistant to reach the two implementations' standardization constraint again... |
Firefox OS | 2011 | The idea of a linux+Gecko Operating System finally touched ground. 4 years later, the project is dead for mobile. |
Versioning System | 2011 | When Mozilla moved to faster releases for Firefox, large corporations having slower deployment processes reacted quite vocally. Mozilla replied it did not care about dinosaurs of the past. More complaints led to ESR releases. |
Add-ons | 2015 | XUL-based add-ons have been one of the largest attractors to Firefox. AdBlock+ alone deserves kudos, but more globally, the power of XUL-based add-ons that could interact with the whole Gecko platform and all of Firefox's UI has been a huge market opener. In 2015/2016, Mozilla plans to ditch XUL-based add-ons without having a real replacement for them, feature-per-feature. |
Evangelism | 2015 | While Google and Microsoft have built first-class tech-evangelism teams, Mozilla made its entire team flee in less than 18 months. I don't know (I really don't) the reason behind that intense bleeding but I read it as a very strong warning signal. |
Servo | 2016 | Servo is the new cool kid on the block. With parallel layout and a brand new architecture, it should allow new frontiers in the mobile world, finally unleashing the power of multicores. But instead of officially increasing the focus on Servo and decreasing the focus on Gecko, Gecko is going to benefit from Servo's rust-based components to extend its life. This is the old sustaining/disruptive paradigm from Clayton Christensen. |
(I hope I did not make too many mistakes in the table above. At least, that's my personal recollection of the events. If you think I made a mistake, please let me know and I'll update the article.)
Let's be clear then: Mozilla really succeeded only three times. First, with Firefox on the desktop. Second, enabling the Add-ons ecosystem for Firefox. Third, with its deals with large search engine providers. Most of the other projects and products were eventually ditched for lack of interest, misunderstanding, time-to-market and many other reasons. Mozilla is desperately looking for a fourth major opportunity, and that opportunity can only extend the success of the first one or be entirely different.
The market constraints I see are the following:
Given the assets and the skills, I see then only two strategic axes for Moz:
I won't discuss item 1. I'm not a US lawyer and I'm not even a lawyer. But for item 2, here's my idea:
That plan addresses:
There are no real competitors here. All other players in that field use a runtime that does not completely compile script to native, or are not based on Web Standards, or they're not really ubiquitous.
I wish the next-generation native source editor, the next-gen native Skype app, the next-gen native text processor, the next-gen native online and offline twitter client, the next native Facebook app, the next native video or 3D scene editor, etc. could be written in html+CSS+ECMAScript and compiled to native and, if they embed a browser, let it be a Mozilla browser if that's allowed by the platform.
As I wrote at the top of this post, you may find the above unfeasible, dead stupid, crazy, arrogant, expensive, whatever. Fine by me. Yes, as a strategy document, that's rather light w/o figures, market studies, cost studies, and so on. Absolutely, totally agreed. Only allow me to think out loud, and please do the same. I do because I care.
Updates:
Clarification: I'm not proposing to do semi-"compilation" of html à la Apache Cordova. I suggest turning a well chosen subset of ES2015 into a really native app and that's entirely different.
http://www.glazman.org/weblog/dotclear/index.php?post/2016/02/08/Strategy
|
This Week In Rust: This Week in Rust 117 |
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.
This week's edition was edited by: Vikrant and Andre.
121 pull requests were merged in the last week.
Cow::from for Vec and slices.
#[repr(i32)] for univariant enum.
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
CommandExt::{exec, before_exec}.
Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:
std::os::*::raw.
retain_mut to Vec and VecDeque.
SharedSender to std::sync::mpsc that implements Sync.
impl blocks to apply to the same type/trait.
IndexAssign trait that allows overloading "indexed assignment" expressions like a[b] = c.
kind=better_static that is used to link static libraries by passing them to the linker.
.. pattern fragment in more contexts.
If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.
Tweet us at @ThisWeekInRust to get your job offers listed here!
This week's Crate of the Week is roaring, the Rust version of Prof. D. Lemire's compressed bitmap data structure. I can personally attest that both the Rust and Java versions compare very favorably in both speed and size to other bit sets and are easy to use.
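Usage looks roughly like this (a sketch assuming the roaring crate is added as a Cargo dependency; method names may have shifted between versions):

use roaring::RoaringBitmap;

fn main() {
    let mut bitmap = RoaringBitmap::new();
    // Insert a few (possibly widely spread) u32 values; storage stays compact.
    bitmap.insert(1);
    bitmap.insert(2);
    bitmap.insert(100_000);
    println!("contains 2: {}", bitmap.contains(2));
    println!("cardinality: {}", bitmap.len());
}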
Thanks to polyfractal for the suggestion.
Submit your suggestions for next week!
http://this-week-in-rust.org/blog/2016/02/08/this-week-in-rust-117/
|
Mike Hommey: SSH through jump hosts, revisited |
Close to 7 years ago, I wrote about SSH through jump hosts. Twice. While the method used back then still works, OpenSSH has grown a new option in version 5.3 that allows it to be simplified a bit, by not using nc.
So here is an updated rule, version 2016:
Host *+*
ProxyCommand ssh -W $(echo %h | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')
The syntax you can use to connect through jump hosts hasn’t changed compared to previous blog posts:
$ ssh login1%host1:port1+host2:port2 -l login2
$ ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3
$ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4
Logins and ports can be omitted.
Update: Add missing port to -W flag when one is not given.
|
Jennifer Boriss: Onto New Challenges |
|