Nick Cameron: Rust for C++ programmers - part 9: destructuring pt2 - match and borrowing |
fn foo() {
    let x = 7i;
    let y = x; // x is copied
    println!("x is {}", x); // OK
    let x = box 7i;
    let y = x; // x is moved
    //println!("x is {}", x); // error: use of moved value: `x`
}
enum Enum1 {
    Var1,
    Var2,
    Var3
}
fn foo(x: &Enum1) {
    match *x { // Option 1: deref here.
        Var1 => {}
        Var2 => {}
        Var3 => {}
    }

    match x { // Option 2: 'deref' in every arm.
        &Var1 => {}
        &Var2 => {}
        &Var3 => {}
    }
}
enum Enum2 {
    // Box has a destructor, so Enum2 has move semantics.
    Var1(Box<int>),
    Var2,
    Var3
}
fn foo(x: &Enum2) {
    match *x {
        // We're ignoring nested data, so this is OK.
        Var1(..) => {}
        // No change to the other arms.
        Var2 => {}
        Var3 => {}
    }

    match x {
        // We're ignoring nested data, so this is OK.
        &Var1(..) => {}
        // No change to the other arms.
        &Var2 => {}
        &Var3 => {}
    }
}
// Binding the nested data by value moves it, which is an error here
// because we only have a borrowed Enum2:
match *x {
    Var1(y) => {}
    _ => {}
}

match x {
    &Var1(y) => {}
    _ => {}
}
fn bar(x: &Enum2, y: &Enum2) {
    // Error: x and y are being moved.
    // match (*x, *y) {
    //     (Var2, _) => {}
    //     _ => {}
    // }

    // OK.
    match (x, y) {
        (&Var2, _) => {}
        _ => {}
    }
}
fn baz(x: Enum2) {
    // x is owned (passed by value), so moving the nested data is OK.
    match x {
        Var1(y) => {}
        _ => {}
    }
}
http://featherweightmusings.blogspot.com/2014/07/rust-for-c-programmers-part-9.html
|
Marco Zehe: Quick tip: Add someone to circles on Google Plus using a screen reader |
In my “WAI-ARIA for screen reader users” post in early May, I was asked by Donna to talk a bit about Google Plus. Especially, she asked how to add someone to circles. Google Plus has learned a thing or two about screen reader accessibility recently, but the fact that there is no official documentation on the Google Accessibility entry page yet suggests that people inside Google are not satisfied with the quality of Google Plus accessibility yet, or not placing a high enough priority on it. That quality, however, has improved, so adding someone to one or more circles using a screen reader is not that difficult any more.
Note that I tested the below steps with Firefox 31 Beta (out July 22) and NVDA 2014.2. Other screen reader/browser combos may vary in the way they output stuff or switch between their virtual cursor and focus/forms modes.
Here are the steps:
The above steps also work on the menu button that you find when you open the profile page of an individual person, not just in the list of suggested people or any other list of people. The interaction is exactly the same.
Hope this helps you get around in Google Plus a bit more efficiently!
|
Kent James: Following Wikipedia, Thunderbird Could Raise $1,600,000 in annual donations |
What will it take to keep Thunderbird stable and vibrant? Although there is a dedicated, hard-working team of volunteers trying hard to keep Thunderbird alive, there has been very little progress on improvements since Mozilla drastically reduced their funding. I’ve been an advocate for some time that Thunderbird needs income to fulfill its potential, and that the best way to generate that income would be to appeal directly to its users for donations.
One internet organization that has done this successfully has been Wikipedia. How much income could Thunderbird generate if they received the same income per user as Wikipedia? Surely our users, who rely on Thunderbird for critical daily communications, are at least as willing to donate as Wikipedia users.
Estimates of income from Wikipedia’s annual fundraising drive to users are around $20,000,000 per year. Wikipedia recently reported 11,824 million pageviews per month and 5 pageviews per user per day; at roughly 394 million pageviews per day, that works out to about 78 million daily users. Thunderbird, by contrast, has about 6 million daily users (using update-check hits per day), or about 8% of the daily users of Wikipedia.
If Thunderbird were willing to directly engage users asking for donations, at the same rate per user as Wikipedia, there is a potential to raise $1,600,000 per year. That would certainly be enough income to maintain a serious team to move forward.
Wikipedia’s donation requests were fairly intrusive, with large banners at the top of all Wikipedia pages. When Firefox did a direct appeal to users early this year, the appeal was very subtle (did you even notice it?). I tried to scale the Firefox results to Thunderbird, and estimated that a similar subtle appeal might raise $50,000 – $100,000 per year for Thunderbird. That is not sufficient to make a significant impact. We would have to be willing to be a little intrusive, like Wikipedia, if we are going to be successful. This will generate pushback, as Wikipedia’s campaign has, so we would have to be willing to live with it.
But is it really in the best interest of our users to spare them an annual, slightly intrusive appeal for donations, while letting the product that they depend on each day slowly wither away? I believe that if we truly care about our users, we will take the necessary steps to ensure that we give them the best product possible, including undertaking fundraising to keep the product stable and vibrant.
|
Mark Surman: How do we get depth *and* scale? |
We want millions of people learning about the web everyday with Mozilla. The ‘why’ is simple: web literacy is quickly becoming just as important as reading, writing and math. By 2024, there will be more than 5 billion people on the web. And, by then, the web will shape our everyday lives even more than it does today. Understanding how it works, how to build it and how to make it your own will be essential for nearly everyone.
The tougher question is ‘how’ — how do we teach the web with both the depth *and* scale that’s needed? Most people who tackle a big learning challenge pick one path or the other. For example, the educators in our Hive Learning Networks are focused on depth of learning. Everything they do is high touch, hands-on and focused on innovating so learning happens in a deep way. On the flip side, MOOCs have quickly shown what scale looks like, but they almost universally have high drop-out rates and limited learning impact for all but the most motivated learners. We rarely see depth and scale go together. Yet, as the web grows, we need both. Urgently.
I’m actually quite hopeful. I’m hopeful because the Mozilla community is deeply focused on tackling this challenge head on, with people rolling up their sleeves to help people learn by making and organizing themselves in new ways that could massively grow the number of people teaching the web. We’re seeing the seeds of both depth and scale emerge.
This snapped into focus for me at MozFest East Africa in Kampala a few days ago. Borrowing from the MozFest London model, the event showcased a variety of open tech efforts by Mozilla and others: FirefoxOS app development; open data tools from a local org called Mountabatten; Mozilla localization; Firefox Desktop engineering; the work of the Ugandan National Information Technology Agency. It also included a huge Maker Party, with 200 young Ugandans showing up to learn and hack with Webmaker tools.
The Maker Party itself was impressive — pulled off well despite rain and limited connectivity. But what was more impressive was seeing how the Mozilla community is stepping up to plant the seeds of teaching the web at depth and scale, which I’d call out as:
Mentors: IMHO, a key to depth is humans connecting face to face to learn. We’ve set up a Webmaker Mentors program in the last year to encourage this kind of learning. The question has been: will people step up to do this kind of teaching and mentoring, and do it well? MozFest EA was a promising start: 30 motivated mentors showed up prepared, enthusiastic and ready to help the 200 young people at the event learn the web.
Curriculum: one of the hard parts of scaling a volunteer-based mentor program is getting people to focus their teaching on the most important web literacy skills. We released a new collection of open source web literacy curriculum over the past couple of months designed to solve this problem. We weren’t sure how things would work out, but I’d say MozFestEA is early evidence that curriculum can do a good job of helping people quickly understand what and how to teach. Here, each of the mentors was confidently and articulately teaching a piece of the web literacy framework using Webmaker tools.
Making as learning: another challenge is getting people to teach / learn deeply based on written curriculum. Mozilla focuses on ‘making by learning’ as a way past this — putting hands-on, project based learning at the heart of most of our Webmaker teaching kits. For example, the basic remix teaching kit gets learners quickly hacking and personalizing their favourite big brand web site, which almost always gets people excited and curious. More importantly: this ‘making as learning’ approach lets mentors adapt the experience to a learner’s interests and local context in real time. It was exciting to see the Ugandan mentors having students work on web pages focused on local school tasks and local music stars, which worked well in making the standard teaching kits come to life.
Clubs: mentors + curriculum + making can likely get us to our 2014 goal of 10,000 people around the world teaching web literacy with Mozilla. But the bigger question is: how do we keep the depth while scaling to a much bigger level? One answer is to create more ’nodes’ in the Webmaker network and get them teaching all year round. At MozFest EA, there was a session on Webmaker Clubs — after school web literacy clubs run by students and teachers. This is an idea that floated up from the Mozilla community in Uganda and Canada. In Uganda, the clubs are starting to form. For me, this is exciting. Right now we have 30 contributors working on Webmaker in Uganda. If we opened up clubs in schools, we could imagine 100s or even 1000s. I think clubs like this are a key next step towards scale.
Community leadership: the thing that most impressed me at MozFestEA was the leadership from the community. San Emmanuel James and Lawrence Kisuuki have grown the Mozilla community in Uganda in a major way over the last couple of years. More importantly, they have invested in building more community leaders. As one example, they organized a Webmaker train-the-trainer event a few weeks before MozFestEA. The result was what I described above: confident mentors showing up ready to teach, including people other than San and Lawrence taking leadership within the Maker Party side of the event. I was impressed. This is key to both depth and scale: building more and better Mozilla community leaders around the world.
Of course, MozFestEA was just one event for one weekend. But, as I said, it gave me hope: it made me feel that the Mozilla community is taking the core building blocks of Webmaker and shaping them into something that could have a big impact.
With Maker Party kicking off this week, I suspect we’ll see more of this in coming months. We’ll see more people rolling up their sleeves to help people learn by making. And more people organizing themselves in new ways that could massively grow the number of people teaching the web. If we can make this happen this summer, much bigger things lie on the path ahead.
|
Gregory Szorc: Updates to firefoxtree Mercurial extension |
My Please Stop Using MQ post has been generating a lot of interest in bookmark-based workflows at Mozilla. To make adoption easier, I quickly authored an extension to add remote refs of Firefox repositories to Mercurial.
There was still a bit of confusion, and some gripes about workflows, so I thought it would be best to update the extension to make things more pleasant.
People wanted the ability to easily pull and aggregate the various Firefox trees without adding extra configuration to an hgrc file.
With firefoxtree, you can now hg pull central or hg pull inbound or hg pull aurora and it just works.
Pushing with aliases doesn't yet work. It is slightly harder to do in the Mercurial API. I have a solution, but I'm validating some code paths to ensure it is safe. This feature will likely appear soon.
Once people adopted unified repositories with heads from multiple repositories, they asked how they could quickly identify the heads of the pulled Firefox repositories.
firefoxtree now provides a hg fxheads command that prints a concise output of the commits constituting the heads of the Firefox repos. e.g.
$ hg fxheads
224969:0ec0b9ac39f0 aurora (sort of) bug 898554 - raise expected hazard count for b2g to 4 until they are fixed, a=bustage+hazbuild-only
224290:6befadcaa685 beta Tagging /src/mdauto/build/mozilla-beta 1772e55568e4 with FIREFOX_RELEASE_31_BASE a=release CLOSED TREE
224848:8e8f3ba64655 central Merge inbound to m-c a=merge
225035:ec7f2245280c fx-team fx-team/default Merge m-c to fx-team
224877:63c52b7ddc28 inbound Bug 1039197 - Always build js engine with zlib. r=luke
225044:1560f67f4f93 release release/default tip Automated checkin: version bump for firefox 31.0 release. DONTBUILD CLOSED TREE a=release
Please note that the output is based upon local-only knowledge.
People were complaining that bookmark-based workflows resulted in Mercurial trying to push multiple heads to a remote. This complaint stems from the fact that Mercurial's default push behavior is to find all commits missing from the remote and push them. This behavior is extremely frustrating for Firefox development because the Firefox repos only have a single head and pushing multiple heads will only result in a server hook rejecting the push (after wasting a lot of time transferring that commit data).
firefoxtree now will refuse to push multiple heads to a known Firefox repo before any commit data is sent. In other words, we fail fast so your time is saved.
firefoxtree also changes the default behavior of hg push when pushing to a Firefox repo. If no -r argument is specified, hg push to a Firefox repo will automatically remap to hg push -r . (that is, push only the working copy's commit). This change establishes a sensible default and likely working behavior when typing just hg push.
Within the next 48 hours, mach mercurial-setup should prompt to install firefoxtree. Until then, clone https://hg.mozilla.org/hgcustom/version-control-tools and ensure your ~/hgrc file has the following:
[extensions]
firefoxtree = /path/to/version-control-tools/hgext/firefoxtree
You likely already have a copy of version-control-tools in ~/.mozbuild/version-control-tools.
It is completely safe to install firefoxtree globally: the extension will only modify behavior of repositories that are clones of Firefox repositories.
http://gregoryszorc.com/blog/2014/07/16/updates-to-firefoxtree-mercurial-extension
|
Pete Moore: Weekly review 2014-07-16 |
Highlights
Last week build duty, therefore much less to report this week. I think we’ll have plenty to talk about though (wink wink).
l10n vcs sync was done by aki, and i posted my responses, and am writing up a patch which I hope to land in the next 24 hours. That will be l10n done.
I’ve been busy triaging queues too, inviting people to meetings that I don’t attend myself, and cleaning up a lot of bugs (not just the triaging, but in general).
Today’s major incident was fallout from the panda train 3 move - finally resolved now (yay). Basically, devices.json was out-of-date on the foopies. Disappointingly, I thought to check devices.json, but did not consider that it would be out-of-date on the foopies, since I knew we’d been having lots of reconfigs every day. But for other reasons the foopy updates were not performed (hanging ssh sessions when updating them) - so it took a while until this was discovered (by dustin!). In the meantime I had to disable and re-enable more than 250 pandas.
Other than that, working ferociously on finishing off vcs sync.
I think I probably updated 200 bugs this week! Was quite a clean up.
|
Frédéric Harper: Community Evangelist: Firefox OS developer outreach at MozCamp India |
At the end of June, I was in India to do a train-the-trainer session at MozCamp India. The purpose of the session that Janet Swisher and I delivered (the first time we worked together, and I think we made a winning combo) was to help Mozillians become Community Evangelists. Our goal was to help them become part of our Technical Evangelist team: helping us inspire and enable developers in India to be successful with Firefox OS (we are starting with this technology because of the upcoming launch).
We could have filled a full day or more on developer outreach, but we only had three hours, in which we showed the attendees how they can contribute, ran a fun speaker idol, and worked on their project plans. Contributions can happen at many levels: public speaking, helping developers build Firefox OS applications, answering questions on StackOverflow, and more.
Since we had parallel tracks during our session, we gave it twice to give attendees the chance to attend more than one track. For those who were at the Saturday session, the following slides are the ones we used:
I also recorded the session for those of you that would like to refresh your memory:
For the session on Sunday, we fixed some slides and adapted our session to give us more time for the speaker idol as well as the project plan. Here are the slides:
If you were not there, I suggest you follow the slides and the video from the second day, as it’s an improved version of the first (not that the first one was bad, but it was the first time we gave this session).
From the feedback we got, it was a pretty good session, and we were happy to see the excitement of the Indian community about this community evangelist role. I can’t wait to see what the Mozilla community in India will do! If you too, Mozillian or not, have any interest in evangelizing the open web, you should join the Mozilla Evangelism mailing list.
--
Community Evangelist: Firefox OS developer outreach at MozCamp India is a post on Out of Comfort Zone from Frédéric Harper
Related posts:
|
Luis Villa: Designers and Creative Commons: Learning Through Wikipedia Redesigns |
tl;dr: Wikipedia redesigns mostly ignore attribution of Wikipedia authors, and none approach the problem creatively. This probably says as much or more about Creative Commons as it does about the designers.
disclaimer-y thing: so far, this is for fun, not work; haven’t discussed it at the office and have no particular plans to. Yes, I have a weird idea of fun.
It is no longer surprising when a new day brings a new redesign of Wikipedia. After seeing one this weekend with no licensing information, I started going back through seventeen of them (most of the ones listed on-wiki) to see how (if at all) they dealt with licensing, attribution, and history. Here’s a summary of what I found.
Perhaps not surprisingly, many designers completely remove attribution (i.e., history) and licensing information in their designs. Seven of the seventeen redesigns I surveyed were in this camp. Some of them were in response to a particular, non-licensing-related challenge, so it may not be fair to lump them into this camp, but good designers still deal with real design constraints, and licensing is one of them.
The history link is important, because it is how we honor the people who wrote the article, and comply with our attribution obligations. Five of the seventeen redesigns lacked any licensing information, but at least kept a history link.
Several of this group included some legal information, such as links to the privacy policy or, in one case, to the Wikimedia Foundation trademark page. This suggests that our current licensing information may be presented in a worse way than some of our other legal information, since it seems to be getting cut out even by designers who are tolerant of some of our other legalese.
Four of the seventeen designs keep the same old legalese, though one fails to comply by making it impossible to get to the attribution (history) page. Nothing wrong with keeping the existing language, but it could reflect a sad conclusion that licensing information isn’t worth the attention of designers; or (more generously) that they don’t understand the meaning/utility of the language, so it just gets cargo-culted around. (Credit to Hamza Erdoglu, who was the only mockup designer who specifically went out of his way to show the page footer in one of his mockups.)
Of the seventeen sites I looked at, exactly one did something different: Wikiwand. It is pretty minimal, but it is something. The one thing: as part of the redesign, it adds a big header/splash image to the page, and then adds a new credit specifically for the author of the header/splash image down at the bottom of the page with the standard licensing information. Arguably it isn’t that creative, just complying with their obligations from adding a new image, but it’s at least a sign that not everyone is asleep at the wheel.
This is surely not a large or representative sample, so all my observations from this exercise should be taken with a grain of salt. (They’re also speculative since I haven’t talked to the designers.) That said, some thoughts besides the ones above:
Postscript, added next morning:
I think it’s important to stress that I didn’t link to the individual sites here, because I don’t want to call out particular designers or focus on their failures/oversights. The important (and as I said, sad) thing to me is that designers are, historically, a culture concerned with licensing and attribution. If we can’t interest them in applying their design talents to our problem, in the context of the world’s most famously collaborative project, we (lawyers and other Commoners) need to look hard at what we’re doing, and how we can educate and engage designers to be on our side.
I should also add that the WMF design team has been a real pleasure to work with on this problem, and I look forward to doing more of it. Some stuff still hasn’t made it off the drawing board, but they’re engaged and interested in this challenge. Here is one example.
http://lu.is/blog/2014/07/15/designers-and-creative-commons-learning-through-wikipedia-redesigns/
|
Byron Jones: happy bmo push day! |
switching the default monospace font on bmo yesterday highlighted a few issues with the fira-mono typeface that we’d like to see addressed before we use it. as a result comments are now displayed using their old font.
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
http://globau.wordpress.com/2014/07/16/happy-bmo-push-day-104/
|
Jess Klein: The first 6 weeks of Hive Labs |
http://jessicaklein.blogspot.com/2014/07/the-first-6-weeks-of-hive-labs.html
|
Armen Zambrano Gasparnian: Developing with GitHub and remote branches |
git clone git@github.com:mozilla-b2g/fxos-certsuite.git
# Time passes
# To develop something on master
# Pull in all the new commits from master
git fetch origin
# Create a new branch (this will track master from origin,
# which we don't really want, but that will be fixed later)
git checkout -b my_new_thing origin/master
# Edit some stuff
# Stage it and then commit the work
git add -p
git commit -m "New awesomeness"
# Push the work to a remote branch
git push --set-upstream origin HEAD:jgraham/my_new_thing
# Go to the GH UI and start a pull request
# Fix some review issues
git add -p
git commit -m "Fix review issues" # or use --fixup
# Push the new commits
git push
# Finally, the review is accepted
# We could rebase at this point, however,
# we tend to use the Merge button in the GH UI
# Working off a different branch is basically the same,
# but you replace "master" with the name of the branch you are working off.
http://armenzg.blogspot.com/2014/07/developing-with-github-and-remote.html
|
Alon Zakai: Massive, a new work-in-progress asm.js benchmark - feedback is welcome! |
http://mozakai.blogspot.com/2014/07/massive-new-work-in-progress-asmjs.html
|
Gregory Szorc: Python Packaging Do's and Don'ts |
Are you someone who casually interacts with Python but doesn't know its inner workings? Then this post is for you. Read on to learn why some things are the way they are and how to avoid making some common mistakes.
It is an easy trap to view virtualenvs as an obstacle, a distraction from accomplishing something. People see me adding virtualenvs to build instructions and they say, "I don't use virtualenvs; they aren't necessary. Why are you doing that?"
A virtualenv is effectively an overlay on top of your system Python install. Creating a virtualenv can be thought of as copying your system Python environment into a local location. When you modify virtualenvs, you are modifying an isolated container. Modifying virtualenvs has no impact on your system Python.
A goal of a virtualenv is to isolate your system/global Python install from unwanted changes. When you accidentally make a change to a virtualenv, you can just delete the virtualenv and start over from scratch. When you accidentally make a change to your system Python, it can be much, much harder to recover from that.
Another goal of virtualenvs is to allow different versions of packages to exist. Say you are working on two different projects and each requires a specific version of Django. With virtualenvs, you install one version in one virtualenv and a different version in another virtualenv. Things happily coexist because the virtualenvs are independent. Contrast with trying to manage both versions of Django in your system Python installation. Trust me, it's not fun.
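As a sketch of that setup (the environment names and Django versions are illustrative; the stdlib venv module stands in for the virtualenv tool, and the pip lines are shown as comments because they need network access):

```shell
# Two isolated environments, one per project (names are made up).
python3 -m venv project-a-env
python3 -m venv project-b-env

# Each environment has its own pip and its own site-packages, so
# these installs would not conflict (commented out: needs network):
#   project-a-env/bin/pip install "Django==1.6"
#   project-b-env/bin/pip install "Django==1.7"

# Each interpreter reports its own prefix, confirming the isolation.
project-a-env/bin/python -c "import sys; print(sys.prefix)"
project-b-env/bin/python -c "import sys; print(sys.prefix)"
```

Deleting either directory removes that environment completely, without touching the other one or the system Python.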
Casual Python users may not encounter scenarios where virtualenvs make their lives better... until they do, at which point they realize their system Python install is beyond saving. People who eat, breathe, and die Python run into these scenarios all the time. We've learned how bad life without virtualenvs can be and so we use them everywhere.
Use of virtualenvs is a best practice. Not using virtualenvs will result in something unexpected happening. It's only a matter of time.
Please use virtualenvs.
Do you use sudo to install a Python package? You are doing it wrong.
If you need to use sudo to install a Python package, that almost certainly means you are installing a Python package to your system/global Python install. And this means you are modifying your system Python instead of isolating it and keeping it pristine.
Instead of using sudo to install packages, create a virtualenv and install things into the virtualenv. There should never be permissions issues with virtualenvs - the user that creates a virtualenv has full realm over it.
On some systems, such as OS X with Homebrew, you don't need sudo to install Python packages because the user has write access to the Python directory (/usr/local in Homebrew).
For the reasons given above, don't muck around with the system Python environment. Instead, use a virtualenv.
Your system's package manager (apt, yum, etc) is likely using root and/or installing Python packages into the system Python.
For the reasons given above, this is bad. Try to use a virtualenv, if possible. Try to not use the system package manager for installing Python packages.
Python packaging has historically been a mess. There are a handful of tools and APIs for installing Python packages. As a casual Python user, you only need to know of one of them: pip.
If someone says install a package, you should be thinking: create a virtualenv, activate the virtualenv, pip install the package. You should never run pip install outside of a virtualenv. (The exception is installing virtualenv and pip themselves, which you almost certainly want in your system/global Python.)
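That create/activate/install workflow looks like this in a shell (a sketch; myenv is a placeholder name, and the stdlib venv module stands in for the virtualenv tool):

```shell
# 1. Create a virtualenv.
python3 -m venv myenv

# 2. Activate it: `python` and `pip` now resolve inside the env.
. myenv/bin/activate

# 3. Install packages with pip; they land in myenv only, e.g.:
#    pip install requests   (commented out here: needs network access)

command -v python  # resolves to a path inside myenv/bin
deactivate         # restores the previous shell environment
```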
Running pip install will, by default, install packages from PyPI, the Python Package Index. It's Python's official package repository.
There are a lot of old and outdated tutorials online about Python packaging. Beware of bad content. For example, if you see documentation that says use easy_install, you should be thinking, easy_install is a legacy package installer that has largely been replaced by pip, I should use pip instead. When in doubt, consult the Python packaging user guide and do what it recommends.
The more Python programming you do, the more you learn to not trust the Python package provided by your system / package manager.
Linux distributions such as Ubuntu that sit on the forward edge of versions are better than others. But I've run into enough problems with the OS or package manager maintained Python (especially on OS X), that I've learned to distrust them.
I use pyenv for installing and managing Python distributions from source. pyenv also installs virtualenv and pip for me, packages that I believe should be in all Python installs by default. As a more experienced Python programmer, I find pyenv just works.
If you are just a beginner with Python, it is probably safe to ignore this section. Just know that as soon as something weird happens, start suspecting your default Python install, especially if you are on OS X. If you suspect trouble, use something like pyenv to enforce a buffer so the system can have its Python and you can have yours.
Now that you know the preferred way to interact with Python, you are probably thinking oh crap, I've been wrong all these years - how do I fix it?
The goal is to get a Python install somewhere that is as pristine as possible. You have two approaches here: cleaning your existing Python or creating a new Python install.
To clean your existing Python, you'll want to purge it of pretty much all packages not installed by the core Python distribution. The exception is virtualenv, pip, and setuptools - you almost certainly want those installed globally. On Homebrew, you can uninstall everything related to Python and blow away your Python directory, typically /usr/local/lib/python*. Then, brew install python. On Linux distros, this is a bit harder, especially since most Linux distros rely on Python for OS features and thus they may have installed extra packages. You could try a similar approach on Linux, but I don't think it's worth it.
Cleaning your system Python and attempting to keep it pure are ongoing tasks that are very difficult to keep up with. All it takes is one dependency to get pulled in that trashes your system Python. Therefore, I shy away from this approach.
Instead, I install and run Python from my user directory. I use pyenv. I've also heard great things about Miniconda. With either solution, you get a Python in your home directory that starts clean and pure. Even better, it is completely independent from your system Python. So if your package manager does something funky, there is a buffer. And, if things go wrong with your userland Python install, you can always nuke it without fear of breaking something in system land. This seems to be the best of both worlds.
Please note that installing packages in the system Python shouldn't be evil. When you create virtualenvs, you can - and should - tell virtualenv to not use the system site-packages (i.e. don't use non-core packages from the system installation). This is the default behavior in virtualenv. It should provide an adequate buffer. But from my experience, things still manage to bleed through. My userland Python install is extra safety. If something wrong happens, I can only blame myself.
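One way to see that buffer in action: a freshly created environment records that system site-packages are excluded (shown here with the stdlib venv module; virtualenv's default behavior is equivalent):

```shell
# Create a fresh environment (the name is a placeholder).
python3 -m venv isolated-env

# The environment's config file records the isolation:
grep include-system-site-packages isolated-env/pyvenv.cfg
# prints: include-system-site-packages = false
```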
Python's long and complicated history of package management makes it very easy for you to shoot yourself in the foot. The long list of outdated tutorials on The Internet make this a near certainty for casual Python users. Using the guidelines in this post, you can adhere to best practices that will cut down on surprises and rage and keep your Python running smoothly.
http://gregoryszorc.com/blog/2014/07/15/python-packaging-do's-and-don'ts
|
Darrin Henein: Side Tabs: Prototyping An Unexpected Productivity Hack |
A few months ago, I came across an interesting Github repo authored by my (highly esteemed!) colleague Vlad Vukicevic called VerticalTabs. This is a Firefox add-on which moves your tabs, normally organized horizontally along the top of your browser, to a vertical list docked to either the left or right side of the window. Vlad’s add-on worked great, but I saw a few areas where a small amount of UX and visual design love could make this something I’d be willing to trial. So, I forked.
After cloning the repo, I spent a couple of days modifying some of the layout, adding a new dark theme to the CSS, and replacing a handful of the images and icons with my own. Ultimately, it was probably a single-digit number of hours spent to get my code to a place that I was happy with. Certainly, there are some issues on certain operating systems, and things like Firefox's pinned tabs don't get the treatment I would love them to have, but that was not the point. The point of my experiment was to learn.
Learn? Learn what?
Let’s step back for a moment. Here at Mozilla, we like to experiment. Hack, Play, Make… whatever you’d like to call it. But we don’t like to waste time: we do things with purpose. If we build something, we try to make sure it’s worth the time and effort involved. As a Design Engineer on the UX team, I (along with others) work hard to make the value of prototyping clear to my colleagues. What is the minimal thing we can make to test our assumptions? The reality is that when designing digital products, how it works is equally important as, and arguably more important than, how it looks. Steve Jobs said it best:
Design is how it works.
Let’s bring it back to Side Tabs now (I’ll be using Side Tabs and VerticalTabs interchangeably). The hypothesis I was hoping to validate was that there was a subset of the Firefox user base that would find value in the layout that Side Tabs enabled. I wanted to bring this add-on to a level where users would find it delightful and usable enough to at least give it a fair shot.
It’s critically important that before you unleash your experiment and start learning from it, you mitigate (as much as possible) any sources of bias or false negatives. Make (or fake) it to a point where the data you collect is not influenced by poor design, conflated features, or poor performance/usability. It turned out that this delta, from Vlad’s work to my own version, was a small amount of work. I went for it, pushed it a few steps in the right direction, and shared it with as many people as I could.
I want to restate: the majority of the credit for my particular version of VerticalTabs goes to those who published the work on top of which I built, namely Vlad and Philipp von Weitershausen. Furthermore, the incredibly talented Stephen Horlander has explored the idea of Side Tabs as well, and you will notice his work helped inspire the visual language I used in my implementation. This is how great things are built; rarely from scratch, but more commonly on the shoulders and brilliance of those who came before you.
My Github repo (at time of writing) has 13 stars and is part of a graph with 19 forks. Similarly, I’ve had colleagues fork my repo and take it further, adding improvements and refinements as they go (see my colleague Hayden’s work for one promising effort). I’ve had great response on Twitter from developers and users who love the add-on and who can’t wait to share their ideas and thoughts. It’s awesome to see ideas take shape and grow so organically like this. This is collaboration.
I’ve been using Side Tabs full-time in my default browser (Firefox Nightly) for 5 or 6 months now, and I’ve learned a ton. Aside from now preferring a horizontal layout (made possible by stacking tabs vertically) on a screen pressed for vertical space, I’ve discovered a use case that I never would have imagined had I simply mocked this idea up in Photoshop.
I use productivity tools heavily, from calendars to to-do lists and beyond. One common scenario is this: I click on a link, and it’s something I find interesting or valuable, but I don’t want to address it right now. I’ve experimented with Pocket (I still use this for longer form writing I wish to read later) but find that most of my Read Later lists are Should-but-Never-Actually-Read-Later lists. Out of sight, out of mind, right?
The Firefox UX Team has actually done some great research on Save for Later behaviour. There is a great blog post here as well as a more detailed research summary here.
By quite a pleasant surprise, the vertical layout of Side Tabs surfaced a solution to me. I found myself appropriating my tab list into a priority stack, always giving my focus to the tab at the bottom of the list. When I open something I want to keep around, I simply drag it in the list to a spot based on its relative importance; right to the top for ‘Someday’ items, 2nd or 3rd from the bottom if I want to take a peek once I’m done with my task at hand (which is always the bottom tab). I’ve even moved to having two Firefox windows open, which essentially gives me two separate task lists: one for personal tabs/to-dos and one for work.
So where does this leave us? Quite clearly, it’s shown the immediate value of investing in an interactive prototype versus using only static mockups: people can use the design, see how it works, and in this case, expose a usage pattern I hadn’t seen before. The most common argument against prototyping is the cost involved (time, chiefly), and in my experience the value of building your designs (to varying levels of fidelity, based on the project/hypothesis) always outweighs the cost. Building the design sheds light on design problems and solutions that traditional, static mockups often fail to illuminate.
With regards to Side Tabs itself, I learned that in some cases, users treat their tabs as tasks to accomplish, and when a task is completed, its tab is closed. Increasingly, our work and personal tasks exist online (email, banking, shopping, etc.), and are accessed through the browser. Some tasks (tabs) have higher priority or urgency than others, and whether visible or not, there is an implicit order by which a user will attend to their tabs. Helping users better organize their tabs made using the browser a more productive, delightful experience. And anything that can make an experience more delightful or useful is not only of great value and importance to the product or team I work with, but also to me as a designer.
Get the Add-on · Side Tabs on Github
Side Tabs was built in JavaScript, CSS and HTML. Knowing how to code, prototype and build the designs I create has been the advantage that has allowed me to excel as a UX designer.
|
Christian Heilmann: Maker Party 2014 – go and show off the web to the next makers |
Today is the start of this year’s Maker Party campaign. What’s that? Check out this video.
Maker Party is Mozilla’s annual campaign to teach the culture, mechanics and citizenship of the web through thousands of community-run events around the world from July 15-September 15, 2014. This week, Maker Party events in places like Uganda, Taiwan, San Francisco and Mauritius mark the start of tens of thousands of educators, organizations and enthusiastic web users just like you getting together to teach and learn the web.
You can join Maker Party by finding an event in your area and learning more about how the web works in a fun, hands-on way with others. Events are open to everyone regardless of skill level, and almost all are free! Oh, and there will be kickoff events in all the Mozspaces this Thursday—join in!
No events in your area? Why not host one of your own? Maker Party Resources provides all the information you need to successfully throw an event of any size, from 50+ participants in a library or hackerspace to just you and your little sister sitting on the living room sofa.
Go teach the web, believe me, it is fun!
http://christianheilmann.com/2014/07/15/maker-party-2014-go-and-show-off-the-web-to-the-next-makers/
|
Frédéric Harper: One year at Mozilla |
On July 15th last year, I started a new job at Mozilla: it was the beginning of a new journey. Today marks one year as a Mozillian, and I’m proud.
One year later, I’m still there. That means I like what I’m doing, my team, my manager, and the company. It has been an interesting but amazing year. I always say that my job is to give love to developers, and it’s true. I’m fortunate enough to have a job where I can share my passion with others, and be paid to help them. During the last year, I spoke at 26 events (conferences, user groups…), sharing about technology and educating developers about open web apps and Firefox OS. I’ve helped many developers fix their bugs, create their applications, provide a better experience to their users, solve the issues they had, and, even more important, be successful on the platform.
I’ve always been energized by the fact that, for me, the line between working and having fun is really thin, but the volunteers I meet stoke me even more. The passion, the energy, and the time they give to Mozilla, or should I say, to a better Web, an open one, and to helping people take ownership of that web, is astonishing. I will always remember the events I’ve done with them! There is no way you can’t be pumped up for your work when you see those people giving their time and dedicating themselves 100% to the mission like that. To all Mozillians, I salute you: thanks for being part of my life!
I can’t write a post about my first year at Mozilla without talking about the travels: I’ve been on the road for 104 days in 15 cities (Toronto, Krakow, San Jose, Brussels, Guadalajara, Budapest, Athens, San Francisco, Mountain View, Barcelona, Paris, Prague, London, Bangalore, and Mumbai) in 12 countries (Canada, Poland, USA, Belgium, Mexico, Hungary, Greece, Spain, France, Czech Republic, UK, and India). For someone who likes to discover new cities, cultures, foods, and more, travelling for work is an amazing bonus.
I’ve been a Technical Evangelist for three and a half years now. I’ve not been in this role for a decade, but it’s not something new for me; I have some experience. Still, I learned a lot in the last year, and that’s perfect, as I’m one of those who think we should never stop learning and improving ourselves. For now, I would not like to be in any other position…
The biggest learning curve for me was about the organization, or should I say, the company. Mozilla is a particular beast, a strange one. As far as I know, no other company can be compared to Mozilla: it’s unique. No one can be against the mission of Mozilla, and all Mozillians move forward to make the web even more open. We are working on amazing projects that changed, and will continue to change, the world. We are a bunch of passionate people who believe in what we do, and for any enterprise, that’s a definite asset. We can go and do what others are afraid to do, as we are not there to make money (even if we need money to survive). It’s crazy what all of Mozilla together can accomplish.
On the other side, Mozilla is cannibalizing itself. We are getting bigger and bigger, but we are not always well organized. Because of the nature of Mozilla, everybody has, and wants to give, their opinion, and some people tend to forget that it’s their job. The industry has higher expectations for us. We are pro open source and open web, but we are not always pragmatic. We need volunteers to be successful, but we tend to accept everybody when we should aim for quality instead of quantity. At the same time, we have so many projects we are working on: it’s not just about Firefox or Firefox OS, my friends. Don’t get me wrong, I’m not complaining, as I love Mozilla. I guess that it’s part of my reflection on the last year of my professional life. We are getting better at organizing ourselves, and I hope it will continue that way, as I want Mozilla to be the protector of the web for many more years to come!
Today is the first day of my next year at Mozilla, and I’m looking forward to many more!
--
One year at Mozilla is a post on Out of Comfort Zone from Frédéric Harper
Related posts:
|
Andreas Gal: Improving JPEG image encoding |
Images are a big proportion of the data that browsers load when displaying a website, so better image compression goes a long way towards displaying content faster. Over the last few years there has been debate on whether a new image format is needed over the ubiquitous JPEG to provide better image data compression.
We published a study last year which compares JPEG with a number of more recent image formats, including WebP. Since then, we have expanded and updated that study. We did not find that WebP or any other royalty-free format we tested offers sufficient improvements over JPEG to justify the high maintenance cost of adding a new image format to the Web.
As an alternative, we recently started an effort to improve the state of the art of JPEG encoders. Our research team today released version 2.0 of this enhanced JPEG encoder, mozjpeg. mozjpeg reduces the size of both baseline and progressive JPEGs by 5% on average, with many images showing significantly larger reductions.
Facebook announced today that it is testing mozjpeg 2.0 to improve the compression of images on facebook.com. It has also donated $60,000 to contribute to the ongoing development of the technology, including the next iteration, mozjpeg 3.0.
“Facebook supports the work Mozilla has done in building a JPEG encoder that can create smaller JPEGs without compromising the visual quality of photos,” said Stacy Kerkela, software engineering manager at Facebook. “We look forward to seeing the potential benefits mozjpeg 2.0 might bring in optimizing images and creating an improved experience for people to share and connect on Facebook.”
mozjpeg improves image encoding while maintaining full backwards compatibility with existing JPEG decoders. This is very significant because any browser can immediately benefit from these improvements without having to adopt new image formats, such as WebP.
The JPEG format continues to evolve along with the Web, and mozjpeg 2.0 will make it easier than ever for users to enjoy those images. Check out the Mozilla Research blog post for all the details.
http://andreasgal.com/2014/07/15/improving-jpeg-image-encoding/
|
Mozilla Open Policy & Advocacy Blog: Mozilla Submits Comments on FCC Net Neutrality Proposal |
Today, Mozilla is filing comments in response to the first of two major deadlines set out by the U.S. Federal Communications Commission (FCC) for its latest net neutrality proposal. The FCC describes these rules as a means of “protecting and promoting the open Internet,” and we are encouraging the FCC to stay true to that ideal.
The FCC’s initial proposal offers weak rules, based on fragile “Title I” authority. The proposal represents a significant departure from current law and precedent in this space by expanding on a new area of authority without establishing clear limits. This approach makes it likely that it will be overturned on appeal.
Our comments, like our earlier Petition, urge the FCC to change course from its proposed path, and instead use its “Title II” authority as a basis for real net neutrality protections. We recommended that the FCC modernize the agency’s approach to how Internet Service Providers (ISPs) provide Internet access service. Specifically, we asked the agency to define ISPs’ offerings to edge providers – companies like Dropbox and Netflix that offer valuable services to Internet users – as a separate service. We explained why such a service would need to fall under “Title II” authority, and how in using that basis, the FCC can adopt effective and enforceable rules prohibiting blocking, discrimination, and paid prioritization online, to protect all users, both wired and wireless.
In addition to reiterating support for Title II remote delivery classification, today’s comments address some questions that arose about our initial proposal over the past two months, such as:
• How the Mozilla petition addresses interconnection,
• How forbearance would work,
• How the services we describe can be “services” without direct payment, and
• How the FCC can prohibit paid prioritization under Title II.
Our comments also articulate our views on net neutrality rules:
• A clean rule prohibiting blocking is the most workable and sustainable approach, rather than complex level of service standards;
• Prohibiting unreasonable discrimination is more effective than weaker alternatives such as “commercially unreasonable practices”;
• Paid prioritization inherently degrades the open Internet; and
• Mobile access services should have the same protections as fixed.
Mozilla will continue engaging closely with policymakers and stakeholders on this issue, and we encourage you to make your voice heard as well, before the next deadline for reply comments on September 10th. Here are some easy ways to contact the FCC and members of Congress and tell them to take the necessary steps to protect net neutrality and all Internet users and developers.
|
Rizky Ariestiyansyah: Webmaker Party Starts today! Hai Indonesia |
http://oonlab.com/webmaker-party-starts-today-hai-indonesia.onto
|
Byron Jones: happy bmo push day! |
the following changes have been pushed to bugzilla.mozilla.org:
discuss these changes on mozilla.tools.bmo.
http://globau.wordpress.com/2014/07/15/happy-bmo-push-day-103/
|