Mike Hoye: Switching Sides |
I’ve been holding off on a laptop refresh at work for a while, but it’s time. The recent Apple events have been less than compelling; I’ve been saying for a long time that Mozilla needs more people in-house living day to day on Windows machines, and talk is cheaper than ever these days, so.
I’m taking notes here of my general impressions as I migrate from a Macbook Pro to a Surface Book and Windows 10.
I’ll add to them as things progress, but for now let’s get started.
The thing I can’t figure out here is the organizational metaphor. Apple has managed to make four-fingered swiping around multiple desktops feel like I’m pushing stuff around a physical space, but Windows feels like I’m using a set of memorized gestures to navigate a phone tree. This is a preliminary impression, but it feels like I’m going to need to just memorize this stuff.
Anyway, this is where I am so far. More notes as I think of them.
Update:
|
Eitan Isaacson: Pain Management |
Close your eyes, take a deep breath and fast forward four years when Trump’s administration will be roundly rejected. Feel in your body the hope that will overwhelm you. Conjure up your future restored faith in humanity when people from all walks of life stand together against hate.
I have never felt as pessimistic and defeated as I have in the last week. The press will normalize him, the Democratic minority will indulge him, and voters will grow apathetic and disengaged. No matter the scandals, past and future, that will embroil Trump and his goons, they will continue to consolidate power. He can’t be “exposed” for who he really is; it has been in plain sight all along.
Trump and his kind will not go away, and the establishment is not coming back to rescue us. Cory Booker, Joe Biden, Michelle Obama, or any other Democrat star will not pull us out of this tailspin. They will try to convince us, as Hillary tried, that the electorate will embrace a competent centrist. They won’t, and the right will only grow in influence. This nightmare can endure for decades.
The only thing that will save us from a populist racist oligarch demagogue is a populist anti-racist anti-neoliberal progressive with a mobilized movement behind them and serious contenders up and down the ticket.
Before Trump got elected, we already had our work cut out for us: Black Lives Matter, equitable health care, reproductive rights, equal pay, housing justice, clean water, criminal justice, equitable education, prison abolition, gender justice, free Palestine, voting rights, indigenous rights, refugee rights, migrant rights, campaign finance reform, friggin’ climate change and climate justice.
We don’t get to put those issues aside until the next Obama is elected and we clear up our heads. We *amplify* those struggles and use their leverage to restore democracy and propel us forward to a revolution. We don’t have four years, we need to have our ducks in a row for the midterms in two.
Am I being too preachy? I’m sorry. This is my catharsis. It’s the take-charge method Penny Simkin recently taught us in class.
There are countless people whose security and future are called into question with this turn of events. If you, like me, are shrouded in privilege – don’t let it paralyze you. Don’t be an ally, be an actor. Own this struggle. I would bring up that famous Murri quote, but you already know it.
I know many smart, strategic, and dedicated people who work on this stuff every day, and I am so humbled and thankful they do this work and the sacrifices they make.
We are anticipating a wonderful life transition soon (more on that later?). As the dust settles from this election and as we find our stride as a family, I look forward to working on our liberation with you all.
Close your eyes, take a deep breath, and borrow just a little bit of the hope and restored faith you will feel in four years.
|
Air Mozilla: Mozilla Weekly Project Meeting, 14 Nov 2016 |
The Monday Project Meeting
https://air.mozilla.org/mozilla-weekly-project-meeting-20161114/
|
Daniel Stenberg: I have toyota corola |
Modern cars have fancy infotainment setups, big screens and all sorts of computers with networked functionality built-in. Part of that fanciness is increasingly often a curl install. curl is a part of the standard GenIVI and Tizen offers for cars and is used in lots of other independent software installs too.
This usually affects my day-to-day life very little. Sure, I’m thrilled about the hundreds of millions more curl installations in the world, but the companies that ship them don’t normally contact me, and curl is a really stable product by now, so not a lot of them speak up on the issue trackers or mailing lists either (or if they do, they don’t tell us where they come from or what they’re working on).
The main effect is that normal end users find my email address via the curl license text in products in cars to a higher degree. They usually find it in the about window or an open source license listing or similar. Often I suspect my email address is just about the only address listed.
This occasionally leads desperate users who have tried everything else to reach out to me. They can’t fix their problem, but since my email address exists in their car, surely I can!
Here are three of my favorite samples that I saved.
Hello sir
I have Avalon 2016
Regarding the audio player, why there delay between audio and video when connect throw Bluetooth and how to fix it.
Hello,I am using in a new Ford Mondeo the navigation system with SD Card FM5T-19H449-FC Europe F4.I can read the card but not write on it. I want to add to the card some POI's. Can you help me to do it?
Hello
I have toyota corola with multimedya system that you have its copyright.
I need a advice to know how to use the gps .
Now i cant use or see maps.
And i want to know how to add hebrew leng.
I’m sad to say that I rarely respond at all. I can’t help them and I’ve learned over the years that just trying to explain how I have nothing to do with the product they’re using is often just too time consuming and energy draining to be worth it. I hope these people found the answers to the problems via other means.
The Hacker News discussion on this post took off. I just want to emphasize that this post is not a complaint; I’m not whining over this. I’m just showing some interesting side effects of having my email address in the license text. I actually find these emails interesting, sometimes charming, and they help me connect to the reality many people experience out there.
https://daniel.haxx.se/blog/2016/11/14/i-have-toyota-corola/
|
Myk Melez: Why Embedding Matters |
Lately, I’ve been thinking about what a new embedding strategy for Mozilla might look like. Mozilla has a great deal of history with embedding, and Gecko has long been (and still is) used in a variety of products besides Firefox. But lately the organization hasn’t prioritized embedding, and the options for it have dwindled.
Nevertheless, embedding still matters for Mozilla’s primary rendering engine, including the recently-announced Quantum, because it provides the “web compatibility defense” of expanded and diverse market share.
The more the engine is used in the world, and the more familiar web developers are with it, the more they’ll consider it (and web compatibility generally) when designing and building web applications.
Embedding also matters because users of web software (like a web browser) benefit from a fast and secure rendering engine with a user-friendly feature set, whether or not that software is provided by Mozilla.
Mozilla can mediate the Web most directly with Firefox itself, but it’ll never be the only provider of web software, and it can extend its influence (albeit with less control over the experience) by enabling other developers to reuse its engine in their products.
Finally, embedding matters because open source software components benefit from reuse, which increases project participation (bug reports and fixes, ports, market research data, etc.) and improves those components for all their consumers, including their primary/original ones.
“This technology could fall into the right hands.”
Over the next few weeks, I’ll blog more about the kinds of use cases an embedding strategy might address, and the kinds of projects that might satisfy those use cases.
Note that this is my personal blog (although I’m a Mozilla employee), and nothing here should be construed to represent official Mozilla plans and priorities.
|
Myk Melez: Syndicating to Medium |
I’ve been experimenting with syndicating my blog posts to Medium. While I appreciate the syndicated, webby nature of the blogosphere, Medium has an appealing sense of place. It reminds me of the old Open Salon. And I’m curious how my posts will play there.
If you’re curious too, this post should link to its Medium equivalent—at least if you’re reading it on my blog, rather than Planet or another syndicator. Otherwise, you can find my posts and follow me on my Medium profile.
|
Robert O'Callahan: Handling Hardware Lock Elision In rr |
Intel's Hardware Lock Elision feature lets you annotate instructions with prefixes to indicate that they perform lock/unlock operations. The CPU then turns those into hardware memory transactions so that the instructions in the locked region are performed speculatively and only committed at the unlock. The difference between HLE and the more capable RTM transactional memory support is that HLE is supposed to be fully transparent. The prefixes are ignored on non-HLE-supporting CPUs so you can just add them to your code and things will hopefully get faster --- no CPUID checks are necessary. Unfortunately, by default, Intel's hardware performance counters count events in aborted transactions, even though they didn't really happen in terms of user-space effects. Thus when rr records a program that uses HLE, our conditional branch counter may report a value higher than the number of conditional branches that "really" executed, and this breaks rr. (FWIW we discovered this problem when Emilio was using rr to debug intermittent failures in Servo using the latest version of the Rust parking_lot crate.)
For RTM we have some short-term hacks to disable RTM usage in glibc, and the medium-term solution is to use "CPUID faulting" to trap CPUID and modify the feature bits to pretend RTM is not supported. This approach doesn't work for HLE because there is no need to check CPUID before using it.
Fortunately Intel provides an IN_TXCP flag that you can set on a hardware performance counter to indicate that it should not count events in aborted transactions. This is exactly what we need. However, for replay we need to be able to program the PMU to send an interrupt after a certain number of events have occurred, and the Linux kernel prevents us from doing that for IN_TXCP counters. Apparently that's because if you request an interrupt after a small number of events and then execute an HLE transaction that generates more than that number of events, the CPU will detect the overflow, abort the transaction, roll the counter back to its pre-transaction value, then the kernel notices there wasn't really an overflow, restarts the transaction, and you're in an infinite loop.
The solution to our dilemma is to use two counters to count conditional branches. One counter is used to generate interrupts, and it is allowed to count events in aborted transactions. Another counter uses IN_TXCP to avoid counting events in aborted transactions, and we use this counter only for measurement, never for generating interrupts. This setup works well. It means that during replay our interrupt might fire early, because the interrupt counter counted events in aborted transactions, but that's OK because we already have a mechanism to carefully step forward to the correct stopping point.
There is one more wrinkle. While testing this new approach I noticed that there are some cases where the IN_TXCP counter reports spurious events. This is obviously a nasty little bug in the hardware, or possibly the kernel. On my system you can reproduce it just by running perf stat -e r5101c4 -e r2005101c4 ls --- the second event is just the IN_TXCP version of the first event (retired conditional branches), so it should always report counts less than or equal to the first event, but I get results like

Performance counter stats for 'ls':

    1,994,374      r5101c4
    1,994,382      r2005101c4

I have a much simpler testcase than ls which I'll try to get someone at Intel to look at. For now, we're working around it in rr by using the results of the regular counter when the IN_TXCP counter's value is larger. This should work as long as an IN_TXCP overcount doesn't occur in an execution sequence that also uses HLE, and both of those are hopefully rare.
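The workaround logic itself is simple enough to sketch (this is not rr's actual code, just the fallback rule described above expressed as a function):

```rust
// Fallback rule: the IN_TXCP count should never exceed the regular
// count, so if it does (the hardware overcount bug), trust the regular
// counter instead.
fn branch_count(regular: u64, in_txcp: u64) -> u64 {
    if in_txcp > regular { regular } else { in_txcp }
}

fn main() {
    // The buggy reading from the perf stat output above:
    assert_eq!(branch_count(1_994_374, 1_994_382), 1_994_374);
    // A normal reading, where aborted-transaction events were excluded:
    assert_eq!(branch_count(2_000_000, 1_999_990), 1_999_990);
    println!("ok");
}
```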
http://robert.ocallahan.org/2016/11/handling-hardware-lock-elision-in-rr.html
|
Niko Matsakis: Parallel iterators, part 3: Consumers |
This post is the (long awaited, or at least long promised) third post in my series on Rayon’s parallel iterators. The previous two posts were some time ago, but I’ve been feeling inspired to push more on Rayon lately, and I remembered that I had never finished this blog post series.
Here is a list of the other posts in the series. If you haven’t read them, or don’t remember them, you will want to do so before reading this one:
This third post will introduce parallel consumers. Parallel consumers are the dual to a parallel producer: they abstract out the parallel algorithm. We’ll use this to extend beyond the sum() action and cover how we can implement a collect() operation that efficiently builds up a big vector of data.

(Note: originally, I had intended this third post to cover how combinators like filter() and flat_map() work. These combinators are special because they produce a variable number of elements. However, in writing this post, it became clear that it would be better to first introduce consumers, and then cover how to extend them to support filter() and flat_map().)
In this post, we’ll cover two examples. The first will be the running example from the previous two posts, a dot-product iterator chain:
vec1.par_iter()
    .zip(vec2.par_iter())
    .map(|(i, j)| i * j)
    .sum()
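For reference, here is the sequential standard-iterator equivalent of that chain, with some made-up input values; the parallel version computes the same dot product:

```rust
fn main() {
    // Hypothetical inputs, just to make the computation concrete:
    let vec1 = vec![1, 2, 3];
    let vec2 = vec![4, 5, 6];
    let dot: i32 = vec1.iter()
        .zip(vec2.iter())
        .map(|(i, j)| i * j)
        .sum();
    assert_eq!(dot, 32); // 1*4 + 2*5 + 3*6
    println!("ok");
}
```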
After that, we’ll look at a slight variation, where instead of summing up the partial products, we collect them into a vector:
let c: Vec<_> =
    vec1.par_iter()
        .zip(vec2.par_iter())
        .map(|(i, j)| i * j)
        .collect(); // <-- only thing different
In the second post, I introduced the basics of how parallel iterators work. The key idea was the Producer trait, which is a variant on iterators that is amenable to “divide-and-conquer” parallelization:
trait Producer: IntoIterator {
    // Divide into two producers, one of which produces data
    // with indices `0..index` and the other with indices `index..`.
    fn split_at(self, index: usize) -> (Self, Self);
}
Unlike normal iterators, which only support extracting one element at a time, a parallel producer can be split into two – and this can happen again and again. At some point, when you think you’ve got small enough pieces, you can convert it into an iterator (you see it extends IntoIterator) and work sequentially.
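To make splitting concrete, here is a toy stand-in (not Rayon's real type) for a producer backed by a slice; splitting the producer just splits the underlying slice:

```rust
// A minimal slice-backed "producer": splitting it is just slice::split_at.
struct SliceProducer<'a> {
    data: &'a [i32],
}

impl<'a> SliceProducer<'a> {
    // Split into producers for `0..index` and `index..`.
    fn split_at(self, index: usize) -> (Self, Self) {
        let (left, right) = self.data.split_at(index);
        (SliceProducer { data: left }, SliceProducer { data: right })
    }
}

fn main() {
    let v = [1, 2, 3, 4, 5, 6];
    let (left, right) = SliceProducer { data: &v }.split_at(2);
    assert_eq!(left.data, &[1, 2][..]);
    assert_eq!(right.data, &[3, 4, 5, 6][..]);
    println!("ok");
}
```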
To see this in action, let’s revisit the sum_producer() function that I covered in my previous blog post; sum_producer() basically executes the sum() operation, but extracting data from a producer. Later on in the post, we’re going to see how consumers abstract out the sum part of this code, leaving us with a generic function that can be used to execute all sorts of parallel iterator chains.
fn sum_producer<P>(mut producer: P, len: usize) -> i32
    where P: Producer<Item=i32>
{
    if len > THRESHOLD {
        // Input too large: divide it up
        let mid = len / 2;
        let (left_producer, right_producer) = producer.split_at(mid);
        let (left_sum, right_sum) = rayon::join(
            || sum_producer(left_producer, mid),
            || sum_producer(right_producer, len - mid));
        left_sum + right_sum
    } else {
        // Input too small: sum sequentially
        let mut sum = 0;
        for value in producer {
            sum += value;
        }
        sum
    }
}
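The recursion shape can be tried out without Rayon at all: below, the same divide-and-conquer structure runs over a plain slice, with two sequential recursive calls standing in for rayon::join.

```rust
const THRESHOLD: usize = 4;

fn sum_slice(data: &[i32]) -> i32 {
    if data.len() > THRESHOLD {
        // Too large: split and recurse (rayon::join would run these
        // two calls in parallel).
        let mid = data.len() / 2;
        let (left, right) = data.split_at(mid);
        sum_slice(left) + sum_slice(right)
    } else {
        // Base case: sum sequentially, as in sum_producer().
        data.iter().sum()
    }
}

fn main() {
    assert_eq!(sum_slice(&[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]), 55);
    println!("ok");
}
```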
What we would like to do in this post is to try and make an abstract version of this sum_producer() function, one that can do all kinds of parallel operations, rather than just summing up a list of numbers.

The way we do this is by introducing the notion of a parallel consumer. Consumers represent the “action” at the end of the iterator; they define what to do with each item that gets produced:
vec1.par_iter()           // defines initial producer...
    .zip(vec2.par_iter()) // ...wraps to make a new producer...
    .map(|(i, j)| i * j)  // ...wraps again...
    .sum()                // ...defines the consumer
The Consumer trait looks like this. You can see it has a few more moving parts than producers.
// `Item` is the type of value that the producer will feed us.
pub trait Consumer<Item>: Send + Sized {
    // Type of value that consumer produces at the end.
    type Result: Send;

    // Splits the consumer into two consumers at `index`.
    // Also returns a *reducer* for combining their results afterwards.
    type Reducer: Reducer<Self::Result>;
    fn split_at(self, index: usize) -> (Self, Self, Self::Reducer);

    // Convert the consumer into a *folder*, which can sequentially
    // process items one by one and produce a result.
    type Folder: Folder<Item, Result=Self::Result>;
    fn into_folder(self) -> Self::Folder;
}
The basic workflow for driving a producer/consumer pair is as follows:

- While the input is large, both the producer and the consumer support split_at(): the pair can be split into two pairs, and those pairs can then be processed in parallel. Splitting a consumer also returns something called a reducer; we’ll get to its role in a bit.
- When the input gets small enough, you convert the producer into an iterator using into_iter() and convert the consumer into a folder using into_folder(). You then draw items from the producer and feed them to the folder. At the end, the folder produces a result (of type C::Result, where C is the consumer type) and this is returned.
- The results from the two halves of a split are combined using the reducer (the one returned by split_at()).

Let’s take a closer look at the folder and reducer. Folders are defined by the Folder trait, a simplified version of which is shown below. They can be fed items one by one and, at the end, produce some kind of result:
pub trait Folder<Item> {
    type Result;

    /// Consume next item and return new sequential state.
    fn consume(self, item: Item) -> Self;

    /// Finish consuming items, produce final result.
    fn complete(self) -> Self::Result;
}
Of course, when we split, we will have two halves, both of which will produce a result. Thus when a consumer splits, it also returns a reducer that knows how to combine those results back again. The Reducer trait is shown below. It just consists of a single method reduce():
pub trait Reducer<Result> {
    /// Reduce two final results into one; this is executed after a
    /// split.
    fn reduce(self, left: Result, right: Result) -> Result;
}
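The division of labor between folders and reducers can be seen in miniature with plain closures standing in for the traits: each "folder" consumes one half of the input item by item, and the "reducer" combines the two results.

```rust
fn main() {
    let items = [1, 2, 3, 4];
    let (left, right) = items.split_at(2);
    // Each "folder" consumes its half item by item...
    let left_sum: i32 = left.iter().fold(0, |acc, &x| acc + x);
    let right_sum: i32 = right.iter().fold(0, |acc, &x| acc + x);
    // ...and the "reducer" combines the two results.
    let total = left_sum + right_sum;
    assert_eq!(total, 10);
    println!("ok");
}
```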
sum_producer()
In effect, the consumer abstracts out the “parallel operation” that the iterator is going to perform. Armed with this consumer trait, we can now revisit the sum_producer() method we saw before. That function was specific to adding up a series of values, but we’d like to produce an abstract version that works for any consumer. In the Rayon source, this function is called bridge_producer_consumer. Here is a simplified version; it is helpful to compare it to sum_producer() from before, and I’ll include comments to highlight the differences.
// `sum_producer` was specific to summing up a series of `i32`
// values, which produced another `i32` value. This version is generic
// over any producer/consumer. The consumer consumes `P::Item` (whatever
// the producer produces) and then the fn as a whole returns a
// `C::Result`.
fn bridge_producer_consumer<P, C>(len: usize,
                                  mut producer: P,
                                  mut consumer: C)
                                  -> C::Result
    where P: Producer, C: Consumer<P::Item>
{
    if len > THRESHOLD {
        // Input too large: divide it up
        let mid = len / 2;

        // As before, split the producer into two halves at the mid-point.
        let (left_producer, right_producer) = producer.split_at(mid);

        // Also divide the consumer into two consumers.
        // This also gives us a *reducer* for later.
        let (left_consumer, right_consumer, reducer) = consumer.split_at(mid);

        // Parallelize the processing of the left/right halves,
        // producing two results.
        let (left_result, right_result) =
            rayon::join(
                || bridge_producer_consumer(mid, left_producer, left_consumer),
                || bridge_producer_consumer(len - mid, right_producer, right_consumer));

        // Finally, reduce the two intermediate results.
        // In `sum_producer`, this was `left_result + right_result`,
        // but here we use the reducer.
        reducer.reduce(left_result, right_result)
    } else {
        // Input too small: process sequentially.

        // Get a *folder* from the consumer.
        // In `sum_producer`, this was `let mut sum = 0`.
        let mut folder = consumer.into_folder();

        // Convert producer into sequential iterator.
        // Feed each item to the folder in turn.
        // In `sum_producer`, this was `sum += item`.
        for item in producer {
            folder = folder.consume(item);
        }

        // Convert the folder into a result.
        // In `sum_producer`, this was just `sum`.
        folder.complete()
    }
}
sum()
Next, let’s look at how one might implement the sum consumer, so that we can use it with bridge_producer_consumer(). As before, we’ll just focus on a sum that works on i32 values, to keep things relatively simple. We’ll start out by declaring a trio of types (consumer, folder, and reducer).
struct I32SumConsumer {
    // This type requires no state. This will be important
    // in the next post!
}

struct I32SumFolder {
    // Current sum thus far.
    sum: i32
}

struct I32SumReducer {
    // No state here either.
}
Next, let’s implement the Consumer trait for I32SumConsumer:
impl Consumer<i32> for I32SumConsumer {
    type Folder = I32SumFolder;
    type Reducer = I32SumReducer;
    type Result = i32;

    // Since we have no state, "splitting" just means making some
    // empty structs:
    fn split_at(self, _index: usize) -> (Self, Self, Self::Reducer) {
        (I32SumConsumer { }, I32SumConsumer { }, I32SumReducer { })
    }

    // Folder starts out with a sum of zero.
    fn into_folder(self) -> Self::Folder {
        I32SumFolder { sum: 0 }
    }
}
The folder is also very simple. It takes each value and adds it to the current sum.
impl Folder<i32> for I32SumFolder {
    type Result = i32;

    fn consume(self, item: i32) -> Self {
        // We take ownership of the current folder
        // at each step, and produce a new one
        // as the result:
        I32SumFolder { sum: self.sum + item }
    }

    fn complete(self) -> i32 {
        self.sum
    }
}
And, finally, the reducer just sums up two sums. The self goes unused since our reducer doesn’t have any state of its own.

impl Reducer<i32> for I32SumReducer {
    fn reduce(self, left: i32, right: i32) -> i32 {
        left + right
    }
}
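To convince ourselves that the pieces fit together, here is a condensed, self-contained sketch of the whole machinery that actually runs. It makes a few simplifying assumptions relative to the real Rayon code: the Send bounds are dropped, the two recursive calls run sequentially instead of via rayon::join, and bridge_producer_consumer() is abbreviated to bridge.

```rust
// Simplified traits from the post (Send bounds dropped):
trait Producer: IntoIterator + Sized {
    fn split_at(self, index: usize) -> (Self, Self);
}
trait Folder<Item> {
    type Result;
    fn consume(self, item: Item) -> Self;
    fn complete(self) -> Self::Result;
}
trait Reducer<Result> {
    fn reduce(self, left: Result, right: Result) -> Result;
}
trait Consumer<Item>: Sized {
    type Result;
    type Reducer: Reducer<Self::Result>;
    type Folder: Folder<Item, Result = Self::Result>;
    fn split_at(self, index: usize) -> (Self, Self, Self::Reducer);
    fn into_folder(self) -> Self::Folder;
}

const THRESHOLD: usize = 4;

// Same shape as bridge_producer_consumer(), but sequential.
fn bridge<P: Producer, C: Consumer<P::Item>>(len: usize, producer: P, consumer: C) -> C::Result {
    if len > THRESHOLD {
        let mid = len / 2;
        let (lp, rp) = producer.split_at(mid);
        let (lc, rc, reducer) = consumer.split_at(mid);
        // Sequential stand-ins for rayon::join:
        let left = bridge(mid, lp, lc);
        let right = bridge(len - mid, rp, rc);
        reducer.reduce(left, right)
    } else {
        let mut folder = consumer.into_folder();
        for item in producer {
            folder = folder.consume(item);
        }
        folder.complete()
    }
}

// A slice-backed producer:
struct SliceProducer<'a>(&'a [i32]);
impl<'a> IntoIterator for SliceProducer<'a> {
    type Item = i32;
    type IntoIter = std::iter::Copied<std::slice::Iter<'a, i32>>;
    fn into_iter(self) -> Self::IntoIter { self.0.iter().copied() }
}
impl<'a> Producer for SliceProducer<'a> {
    fn split_at(self, index: usize) -> (Self, Self) {
        let (l, r) = self.0.split_at(index);
        (SliceProducer(l), SliceProducer(r))
    }
}

// The sum trio from the post:
struct I32SumConsumer;
struct I32SumFolder { sum: i32 }
struct I32SumReducer;
impl Consumer<i32> for I32SumConsumer {
    type Result = i32;
    type Reducer = I32SumReducer;
    type Folder = I32SumFolder;
    fn split_at(self, _index: usize) -> (Self, Self, Self::Reducer) {
        (I32SumConsumer, I32SumConsumer, I32SumReducer)
    }
    fn into_folder(self) -> Self::Folder { I32SumFolder { sum: 0 } }
}
impl Folder<i32> for I32SumFolder {
    type Result = i32;
    fn consume(self, item: i32) -> Self { I32SumFolder { sum: self.sum + item } }
    fn complete(self) -> i32 { self.sum }
}
impl Reducer<i32> for I32SumReducer {
    fn reduce(self, left: i32, right: i32) -> i32 { left + right }
}

fn main() {
    let v: Vec<i32> = (1..=10).collect();
    assert_eq!(bridge(v.len(), SliceProducer(&v), I32SumConsumer), 55);
    println!("ok");
}
```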
collect()
Now that we’ve built up this generic framework for consumers, let’s put it to use by defining a second consumer. This time I want to define how collect() works; just like in sequential iterators, collect() allows users to accumulate the parallel items into a collection. In this case, we’re going to examine one particular variant of collect(), which writes values into a vector:
let c: Vec<_> =
    vec1.par_iter()
        .zip(vec2.par_iter())
        .map(|(i, j)| i * j)
        .collect(); // <-- only thing different
In fact, internally, Rayon’s collect() for vectors is written in terms of a more efficient primitive, collect_into(). collect_into() takes a mutable reference to a vector and stores the results in there: this allows you to re-use a pre-existing vector and avoid allocation overheads. It’s particularly good for double buffering scenarios. To use collect_into() explicitly, one would write something like:
let mut c: Vec<_> = vec![];
vec1.par_iter()
    .zip(vec2.par_iter())
    .map(|(i, j)| i * j)
    .collect_into(&mut c);
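The buffer-reuse idea can be sketched with plain std code (hypothetical inputs): write each product into a preallocated vector rather than allocating a fresh one. collect_into() does the same thing, except that the writes happen in parallel.

```rust
fn main() {
    let vec1 = vec![1, 2, 3];
    let vec2 = vec![4, 5, 6];
    // Preallocated output buffer; in real code this would be
    // reused across calls instead of rebuilt each time.
    let mut c = vec![0; vec1.len()];
    for (slot, (i, j)) in c.iter_mut().zip(vec1.iter().zip(vec2.iter())) {
        *slot = i * j;
    }
    assert_eq!(c, vec![4, 10, 18]);
    println!("ok");
}
```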
collect_into() first ensures that the vector has enough capacity for the items in the iterator and then creates a particular consumer that, for each item, will store it into the appropriate place in the vector.

We’re going to walk through a simplified version of the collect_into() consumer. This version will be specialized to vectors of i32 values; moreover, it’s going to avoid any use of unsafe code and just assume that the vector is initialized to the right length (perhaps with 0 values). The real version works for arbitrary types and avoids initialization by using a dab of unsafe code (just about the only unsafe code in the parallel iterators part of Rayon, actually).
Let’s start with the type definitions for the consumer, folder, and reducer. They look like this:
struct I32CollectVecConsumer<'c> {
    data: &'c mut [i32],
}

struct I32CollectVecFolder<'c> {
    data: &'c mut [i32],
    index: usize,
}

struct I32CollectVecReducer {
    // No state here.
}
These type definitions kind of suggest an outline of how this is going to work. When the consumer starts, it has a mutable slice of integers that it will eventually store into (the &'c mut [i32]); the lifetime 'c here represents the span of time in which the collection is happening. Remember that in Rust a mutable reference is also a unique reference, which means that we don’t have to worry about other threads reading or messing with our array while we store into it.

When the time comes to switch to the folder, we still have a slice to store into, but now we also have an index. That tracks how many items we have stored thus far.

Finally, the reducer struct is empty, because once the values are stored, there really isn’t any data to reduce. For collect, the reduction step will just be a no-op.
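The enabling trick here is std's split_at_mut(): it yields two non-overlapping mutable slices, so the two halves can be filled independently without any aliasing (and hence, in Rayon, from two different threads):

```rust
fn main() {
    let mut buf = [0i32; 6];
    {
        // Two disjoint mutable views into the same buffer:
        let (left, right) = buf.split_at_mut(3);
        for (i, slot) in left.iter_mut().enumerate() {
            *slot = i as i32;
        }
        for (i, slot) in right.iter_mut().enumerate() {
            *slot = 10 + i as i32;
        }
    }
    assert_eq!(buf, [0, 1, 2, 10, 11, 12]);
    println!("ok");
}
```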
OK, let’s see how the consumer trait is defined. The idea here is simple: each time the consumer is split at some index N, it splits its mutable slice into two halves at N, and returns two consumers, one with each half:
impl<'c> Consumer<i32> for I32CollectVecConsumer<'c> {
    type Folder = I32CollectVecFolder<'c>;
    type Reducer = I32CollectVecReducer;

    // The "result" of a `collect_into()` is just unit.
    // We are executing this for its side effects.
    type Result = ();

    fn split_at(self, index: usize) -> (Self, Self, Self::Reducer) {
        // Divide the slice into two halves at `index`:
        let (left, right) = self.data.split_at_mut(index);

        // Construct the new consumers:
        (I32CollectVecConsumer { data: left },
         I32CollectVecConsumer { data: right },
         I32CollectVecReducer { })
    }

    // When we convert to a folder, give over the slice and start
    // the index at 0.
    fn into_folder(self) -> Self::Folder {
        I32CollectVecFolder { data: self.data, index: 0 }
    }
}
The folder trait is also pretty simple. Each time we consume a new integer, we’ll store it into the slice and increment index:
impl<'c> Folder<i32> for I32CollectVecFolder<'c> {
    type Result = ();

    fn consume(self, item: i32) -> Self {
        self.data[self.index] = item;
        I32CollectVecFolder { data: self.data, index: self.index + 1 }
    }

    fn complete(self) {
    }
}
Finally, since collect_into() has no result, the “reduction” step is just a no-op:

impl Reducer<()> for I32CollectVecReducer {
    fn reduce(self, _left: (), _right: ()) {
    }
}
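As a quick sequential sanity check of the index-writing scheme, here is a runnable fragment with a folder shaped like the simplified collect folder above (the name CollectFolder is just for this sketch), driven by a plain loop:

```rust
struct CollectFolder<'c> {
    data: &'c mut [i32],
    index: usize,
}

impl<'c> CollectFolder<'c> {
    // Store the item at the current index, then advance the index.
    fn consume(self, item: i32) -> Self {
        self.data[self.index] = item;
        CollectFolder { data: self.data, index: self.index + 1 }
    }
}

fn main() {
    let mut out = vec![0; 4]; // pre-initialized, as the post assumes
    {
        let mut folder = CollectFolder { data: &mut out, index: 0 };
        for &item in [7, 8, 9, 10].iter() {
            folder = folder.consume(item);
        }
    }
    assert_eq!(out, vec![7, 8, 9, 10]);
    println!("ok");
}
```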
This post continued our explanation of how Rayon’s parallel iterators work. Whereas the previous post introduced parallel producers, this post showed how we can abstract out parallel consumers as well. Parallel consumers basically represent the “parallel actions” at the end of a parallel iterator, like sum() or collect().

Using parallel consumers allows us to have one common routine, bridge_producer_consumer(), that is used to draw items from a producer and feed them to a consumer. This routine thus defines precisely the parallel logic itself, independent from any particular parallel iterator. In future posts, we’ll discuss a bit how that same routine can also use some adaptive techniques to try and moderate splitting overhead automatically and dynamically.
I want to emphasize something about this post and the previous one: you may have noticed a general lack of unsafe code. One of the very cool things about Rayon is that the vast majority of the unsafety is confined to the join() implementation. For the most part, the parallel iterators just build on this new abstraction.

It is hard to overstate the benefits of confining unsafe code in this way. For one thing, I’ve caught a lot of bugs in the iterator code I was writing. But even better, it means that it is relatively easy to unit test and review parallel iterator PRs. We don’t have to worry about crazy data-race bugs that only crop up if we test for hours and hours. It’s enough to just make sure we use a variant of bridge_producer_consumer() that splits very deeply, so that we test the split/recombine logic.
http://smallcultfollowing.com/babysteps/blog/2016/11/14/parallel-iterators-part-3-consumers/
|
Christopher Arnold |
http://ncubeeight.blogspot.com/2016/11/at-last-years-game-developers.html
|
Andreas Gal: Trump is dangerous but his supporters are not the enemy |
On November 8th, my chosen home has elected a racist, sexist, nativist, know-nothing, don’t care to know anything, narcissistic buffoon for president. During his campaign, Trump has made many outrageous statements and promises that are completely idiotic. I won’t bore you with trying to enumerate them. I am horrified and appalled that this orange circus peanut is our next president. I want to do more than just be upset about it, and I decided I’ll start with talking about it.
First to my fellow liberal citizens: please stop vilifying people who voted for Trump. They are not the problem and they are predominantly not like him. The world is globalizing and changing quickly, causing uncertainty and fear for many. That doesn’t make them bad people. In fact, they are the only people who can save us from Trump whenever the next election comes around. We need to embrace them, engage them, and try to convince them that there is a better way than Trump’s way.
Second, I would like to address my fellow citizens who voted Trump: You want change. I get that. I want change too. I agree with much of your resentment of Washington. I even agree that Hillary was a really uninspiring candidate (though I do think she would have made an ok president). The problem is that the guy you voted for is not going to change things for the better for you. Don’t believe me. Just believe him. Trump has a lifetime history of exploiting the weak and poor to enrich himself. He has bragged in the past how he exploits his influence to bend the law for profit, and how he exploits his fame to assault and degrade women. Stop justifying his behavior and stop pretending he’ll be any better as president than he has been as non-president for 70 years. Best case he’ll be just as bad as he was so far in his life. Worst case, he’ll be worse, and we’ll all pay the price.
I believe in Democracy. Trump is our president-elect. He’ll assume the office of the president on January 20, and all indications so far point towards a pretty disastrous presidency. It won’t be the end of the world as we know it, but it’s clear he meant every vile word he said as a candidate. He just confirmed he wants to “deport 2-3 million illegal immigrants” immediately. That’s almost 1% of the US population. And while he claims we’ll only deport criminals, just pause for a moment and think about the scale of this. He’ll go ahead and deport 3,000,000 individuals. Yes, that’s 6 zeroes. If we pack 30 people into a bus, that’s 100,000 bus trips. And if we want to uphold our constitution and due process, judges will have to rule 3 MILLION TIMES to deport someone. The scale of this operation is absurd, and even if we get it right 99% of the time, we’ll end up deporting tens of thousands of U.S. citizens who don’t speak English well, or didn’t hire the right lawyer to defend them, or didn’t have the right paperwork, just as Operation Wetback did in the dark past.
Of course if you ask Donald Trump, he’ll tell you none of this will be the case, because he knows how to do all of this and it’ll be terrific and great. And this is the biggest problem with Donald Trump. He just isn’t that bright, apparently, and pretty much believes in magic. Narcissists often do. Trump believes he is infallible, he believes he knows everything better, and he habitually ignores reality and facts. Unfortunately, that’s not how the real world works, and if you let someone like that steer the country, the consequences will be very real and very painful for a lot of people.
There is a very high chance that we’ll have to resist Donald Trump. And I don’t mean in a violent sense. We are Americans. We cherish our democracy. So let’s stop talking about revolution. Donald Trump will have to be opposed peacefully and forcefully and legally, by convincing the majority of this country that Donald Trump’s way is not the American way. And, quite frankly, it’ll likely come down to all of us individually. I have very little faith in the GOP being able to stand up to Donald Trump’s authoritarian impulses. The GOP is Trump’s party now. Many in the GOP who seem like reasonable human beings have embraced Trump because they simply don’t have the backbone to oppose someone like Donald Trump. Paul Ryan is the best example of this. He has folded to Trump’s language and agenda time and time again. So don’t get your hopes up if Ryan says there won’t be a deportation force. Trump will ratchet up his aggressive language, and Ryan will fall in line. This has happened too many times before to hope it’ll change.
So it’s on us now as Americans to stand up for who we are. We are not Trump, even though he’ll be our president for some time. We may be flawed sometimes, but at our core we are a patriotic, civil, and brave people who believe in freedom and opportunity for everyone. I wasn’t born here, but that’s why I decided to live here. I am proud to be an American and I am proud of my fellow Americans. We are all in this together, whether you voted for him or her. As long as we don’t forget that, no harm will come to our country.
PS: Trump named a white nationalist as his senior advisor a couple hours after I posted this. Please wake up if you still think this isn’t going to be as bad as it seems.
https://andreasgal.com/2016/11/13/trump-is-dangerous-but-his-supporters-are-not-the-enemy/
|
Daniel Pocock: Are all victims of French terrorism equal? |
Some personal observations about the terrorist atrocities around the world based on evidence from Wikipedia and other sources
The year 2015 saw a series of distressing terrorist attacks in France. 2015 was also the 30th anniversary of the French Government's bombing of a civilian ship at port in New Zealand, murdering a photographer who was on board at the time. This horrendous crime has been chronicled in various movies including The Rainbow Warrior Conspiracy (1989) and The Rainbow Warrior (1993).
The Paris attacks are a source of great anxiety for the people of France but they are also an attack on Europe and all civilized humanity as well. Rather than using them to channel more anger towards Muslims and Arabs with another extended (yet ineffective) state of emergency, isn't it about time that France moved on from the evils of its colonial past and "drains the swamp" where unrepentant villains are thriving in its security services?
François Hollande and Ségolène Royal. Royal's brother Gérard Royal allegedly planted the bomb in the terrorist mission to New Zealand. It is ironic that Royal is now Minister for Ecology while her brother sank the Greenpeace flagship. If François and Ségolène had married (they have four children together), would Gérard be the president's brother-in-law or terrorist-in-law?
The question has to be asked: if it looks like terrorism, if it smells like terrorism, if the victim of that French Government atrocity is as dead as the victims of Islamic militants littered across the floor of the Bataclan, shouldn't it also be considered an act of terrorism?
If it was not an act of terrorism, then what makes it different? Why do French officials refer to it as nothing more than "a serious error", the term used by Prime Minister Manuel Valls during his 2016 visit to New Zealand? Was it that the French officials felt it was necessary for Liberté, égalité, fraternité? Or is it just a limitation of the English language that we only have one word for terrorism, while French officials have a different word for such acts carried out by those who serve their flag?
If the French government are sincere in their apology, why have they avoided releasing key facts about the atrocity, like who thought up this plot and who gave the orders? Did the soldiers involved volunteer for a mission with the code name Opération Satanique, or did any other members of their unit quit rather than have such a horrendous crime on their conscience? What does that say about the people who carried out the orders?
If somebody apprehended one of these rogue employees of the French Government today, would they be rewarded with France's highest honour, like those tourists who recently apprehended an Islamic terrorist on a high-speed train?
If terrorism is such an absolute evil, why was it so easy for the officials involved to progress with their careers? Would an ex-member of an Islamic terrorist group be able to subsequently obtain US residence and employment as easily as the French terror squad's commander Louis-Pierre Dillais?
When you consider the comments made by Donald Trump recently, the threats of violence and physical aggression against just about anybody he doesn't agree with, is this the type of diplomacy that the US will practice under his rule commencing in 2017? Are people like this motivated by a genuine concern for peace and security, or are these simply criminal acts of vengeance backed by political leaders with the maturity of schoolyard bullies?
https://danielpocock.com/are-all-victims-of-french-terrorism-equal
|
Cameron Kaiser: 45.5.0 final available |
Meanwhile, I still don't have a good understanding of what's wrong with Amazon Music (still works great in 38.10), nor the issue with some users being unable to make changes to their default search engine stick. This is the problem with a single developer, folks: what I can't replicate I can't repair. I have a couple other theories in that thread for people to respond to.
Next up will be actually ripping some code out for a change. I'm planning to completely eviscerate telemetry support since we have no infrastructure to manage it and it's wasted code, as well as retina Mac support, since no retina Mac can run 10.6. I don't anticipate these being major speed boosts but they'll help and they'll make the browser smaller. Since we don't have to maintain good compatibility with Mozilla source code anymore I have some additional freedom to do bigger surgeries like these. I'll also make a first cut at the non-volatile portion of IonPower-NVLE by making floating point registers in play non-volatile (except for the volatiles like f1 that the ABI requires to be live also); again, not a big boost, but it will definitely reduce stack pressure and should improve the performance of ABI-compliant calls. User agent switching and possibly some more AltiVec VP9 work are also on the table, but may not make 45.6.
The other thing that needs to be done is restoring our ability to do performance analysis because Shark and Sample on 10.4 freak out trying to resolve symbols from these much more recent gcc builds. The solution would seem to be a way to get program counter samples without resolving them, and then give that to a tool like addr2line or even gdb7 itself to do the symbol resolution instead, but I can't find a way to make either Shark or Sample not resolve symbols. Right now I'm disassembling /usr/bin/sample (since Apple apparently doesn't offer the source code for it) to see how it gets those samples and it seems to reference a mysterious NSSampler in the CHUD VM tools private framework. Magic Hat can dump the class but the trick is how to work with it and which selectors it will allow. More on that later.
http://tenfourfox.blogspot.com/2016/11/4550-final-available.html
|
Emma Irwin: Diversity & Inclusion for Participation – “A Plan for Strategy” |
In the most recent Heartbeat, I consulted with Mozilla’s Diversity & Inclusion lead Larissa Shapiro, and others championing the discussion, about a strategy for D&I in Participation. I’m really excited and passionate about this work, and even though this is very, very early (this is only a plan for a strategy), I wanted to share now for the opportunity of gathering the most feedback.
Note: I’m using screenshots from a presentation, but have included the actual text in image alt-tags for accessibility.
Right now the proposed ‘Plan for a Strategy’ has three phases:
Designing a strategy for D&I will have some unique challenges. We know this. To get started we need to understand where we are now, who we are, why we are as we are — and what attitudes and practices exist that enhance, or restrict our ability to effectively bring in, and sustain the participation of diverse groups.
The first phase is all about gaining insights into these and other important questions through focus groups, interviews, and existing data.
Insight gathering and research will be focused in these key areas:
By Phase 2 we’ll have formed a number of important hypotheses for influencing D&I in investment areas aligned with Mozilla’s overall D&I strategy. Investment areas are currently proposed to be:
Experimentation is critical to developing a D&I Strategy for Participation. And although it’s identified here as a single ‘phase’, I envision experimentation, learning and iterating on what we learn – to be THE process of building a diverse and inclusive Participation at Mozilla.
Here’s the current timeline:
I would love to hear your ideas, concerns, feedback on this ‘proposal’ which WILL itself evolve.
http://tiptoes.ca/diversity-inclusion-for-participation-a-plan-for-strategy/
|
Daniel Stenberg: curl and TLS 1.3 |
Draft 18 of the TLS version 1.3 spec was published at the end of October 2016.
Both Firefox and Chrome already have test versions out with TLS 1.3 enabled. Firefox 52 will have it by default, and while Chrome will ship it, I couldn’t figure out exactly when we can expect it to be there by default.
Over the last few days we’ve merged TLS 1.3 support to curl, primarily in this commit by Kamil Dudka. Both the command line tool and libcurl will negotiate TLS 1.3 in the next version (7.52.0 – planned release date at the end of December 2016) if built with a TLS library that supports it and told to do it by the user.
The two TLS libraries that will speak TLS 1.3 when built with curl right now are NSS and BoringSSL. The plan is to gradually adjust curl over time as the other libraries start to support 1.3 as well. As always, we will appreciate your help in making this happen!
Right now, there’s also the minor flux in that servers and clients may end up running implementations of different draft versions of the TLS spec which contributes to a layer of extra fun!
Three current TLS 1.3 test servers to play with: https://enabled.tls13.com/ , https://www.allizom.org/ and https://tls13.crypto.mozilla.org/. I doubt any of these will give you any guarantees of functionality.
TLS 1.3 offers a few new features that allow clients such as curl to do subsequent TLS connections much faster, with only 1 or even 0 RTTs, but curl has no code for any of those features yet.
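For the command line tool, opting in is expected to look like the sketch below. This is my assumption based on curl's existing `--tlsv1.1`/`--tlsv1.2` flag naming; check the 7.52.0 documentation once it ships:

```shell
# Assumption: curl >= 7.52.0 built against a TLS library with 1.3 support
# (NSS or BoringSSL right now), and a --tlsv1.3 flag following the existing
# --tlsv1.2 naming convention. The || branch keeps this safe to run with an
# older curl build or without network access.
curl --verbose --tlsv1.3 https://tls13.crypto.mozilla.org/ -o /dev/null \
  || echo "this curl build (or network) cannot negotiate TLS 1.3"
```

With draft-version flux on both ends, a verbose run against the test servers above is also a handy way to see which draft the two sides actually agree on.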
|
Air Mozilla: Foundation Demos November 11 2016 |
Foundation Demos November 11 2016
|
Robert Kaiser: My Thoughts on Next-Generation Themes |
http://home.kairo.at/blog/2016-11/my_thoughts_on_next_generation_themes
|
About:Community: Firefox 50 new contributors |
With the release of Firefox 50, we are pleased to welcome the 43 developers who contributed their first code change to Firefox in this release, 32 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:
https://blog.mozilla.org/community/2016/11/11/firefox-50-new-contributors/
|
Christian Heilmann: Hacking Oredev’s after hours: Sharing our Coder Privilege (video, slides, talking points) |
The original plan at the first evening of this year’s Oredev was for me to interview Peter Sunde about the history of Pirate Bay as covered in his SmashingConf Barcelona “Technology Is Neither Good Nor Bad — You Are” talk.
As Peter couldn’t come, and with the massive news of the US (or its voting system) choosing Donald Trump as president, I quickly changed my plans. Instead, I wrote a talk explaining the very random way I got to become a professional developer, and that it is now our duty as privileged people to share our knowledge with those not as lucky.
After the talk I invited a very distraught Rob Conery, author of The Imposter’s Handbook, to help share some cheerful and amusing anecdotes from his history. We ended up with some actionable ideas on how to learn more and not listen to the inner voice that keeps telling us we’re not good enough.
Here’s the video of the hour of information on Vimeo:
The slides of the talk are on Slideshare.
Here are some of the points of the slides:
Things I learned
Hello View Source
Here is where we come in.
Getting started has never been easier…
You’re building on existing solutions…
One main thing I learned in my whole career…
Use your frustration, your anger and your deviousness for good…
The web is the most versatile and non-elite platform. Go and make your mark!
|
Mark Banner: WebExtensions: An Example Add-on Repository with Test Harnesses |
I’ve created an example repository for how you might set up tools to help development of a WebExtension. Whilst there are others around, I’ve not heard of one that includes examples of tools for testing and auditing your extension.
It is based on various ideas from projects I’ve been working alongside recently.
The repository is intended to either be used as a starting point for constructing a new WebExtension, or you can take the various components and integrate them into your own repository.
It is based around node/npm and the web-ext command line tool to keep it as simple as possible. In addition it contains setup for:
All of these are also run automatically on landing or pull request via Travis CI, with Coveralls providing code coverage reports.
Finally, there’s a tool enabled on the repository for helping to keep modules up to date.
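For orientation, the day-to-day commands that the web-ext tool itself provides look roughly like this. This is a sketch, not the repository's actual npm scripts, and the `src` source directory is an assumption:

```shell
# Assumes node/npm; web-ext is added as a dev dependency of the repository.
npm install --save-dev web-ext

# Static analysis of the extension against the WebExtension schemas.
./node_modules/.bin/web-ext lint --source-dir=src

# Launch Firefox with the extension temporarily installed for manual testing.
./node_modules/.bin/web-ext run --source-dir=src

# Package the extension into a .zip ready for submission.
./node_modules/.bin/web-ext build --source-dir=src
```

Wrapping these in npm scripts (alongside the test and audit tooling) is what lets CI run the whole suite on every push.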
If you find it helpful, let me know in the comment section. Please raise any issues that you find, or submit pull requests; I welcome either.
|
Aki Sasaki: dephash 0.3.0 |
|