Planet Mozilla - https://planet.mozilla.org/



Original source: http://planet.mozilla.org/.
This feed is generated from the public RSS source at http://planet.mozilla.org/rss20.xml and is updated whenever that source is updated. It may not match the content of the original page. The syndication was set up automatically at the request of readers of this RSS feed.

Mike Hoye: Switching Sides

Tuesday, November 15, 2016, 00:48

[Photo: Toronto skyline]

I’ve been holding off on a laptop refresh at work for a while, but it’s time. The recent Apple events have been less than compelling; I’ve been saying for a long time that Mozilla needs more people in-house living day to day on Windows machines and talk is cheaper than ever these days, so.

I’m taking notes here of my general impressions as I migrate from a Macbook Pro to a Surface Book and Windows 10.

I’ll add to them as things progress, but for now let’s get started.

  • I don’t think highly of unboxing fetishism, but it’s hard to argue against the basic idea that your very tactile first contact with a product should be a good one. The Surface Book unboxing is a bit rough, but not hugely so; there’s the rare odd mis-step like boxes that are harder than necessary to open or tape that tears the paper off the box.
  • I’ve got the Performance Base on the Surface Book here; the very slight elevation of the keyboard makes a surprisingly pleasant difference, and the first-run experience is pretty good too. You can tell Microsoft really, really wants you to accept the defaults, particularly around data being sent back to Microsoft, but it looks like you can reasonably navigate that to your comfort level. Hard to say, obvs.
  • I’m trying to figure out what is a fair assessment of this platform vs. what is me fighting muscle memory. Maybe there’s not a useful distinction to be made there but considering my notable idiosyncrasies I figure I should make the effort. If I’m going to pretend this is going to be useful for anyone but some alternate-universe me, I might as well. This came up in the context of multiple desktops – I use the hell out of OSX multiple desktops, and getting Windows set up to do something similar requires a bit of config twiddling and some relearning.

    The thing I can’t figure out here is the organizational metaphor. Apple has managed to make four-fingered swiping around multiple desktops feel like I’m pushing stuff around a physical space, but Windows feels like I’m using a set of memorized gestures to navigate a phone tree. This is a preliminary impression, but it feels like I’m going to need to just memorize this stuff.

  • In a multiple desktops setting, the taskbar will only show you the things running in your current desktop, not all of them? So crazymaking. [UPDATE: Josh Turnath points out in the comments that you can set this right in the “multitasking” settings menu, where you can also turn off the “When I move one window, move other windows” setting, which is also crazymaking. Thanks, Josh!]
  • If you’re coming off a Mac trackpad and used to tap-to-click, be sure to set the delay setting to “Short delay” or it feels weird and laggy. Long delay is tap, beat, beat, response; if you move the cursor the action vanishes. That, combined with the fact that it’s not super-great at rejecting unintentional input, makes it mostly tolerable but occasionally infuriating, particularly if you’ve got significant muscle memory built up around “put cursor here then move it aside so you can see where you’re typing”, which makes it start selecting text seemingly at random. It’s way better than any other trackpad I’ve ever used on a PC for sure, so I’ll take it, but still occasionally: aaaaaaargh. You’re probably better off just turning tap-to-click off. UPDATE: I had to turn off tap to click, because omgwtf.
  • In this year of our lord two thousand and sixteen you still need to merge in quasi-magic registry keys to remap capslock. If you want mousewheel scrolling to work in the same directions as two-finger scrolling, you need to fire up RegEdit.exe and know the magic incantations. What the hell.
  • It’s surprising how seemingly shallow the Win10 redesign is. The moment you go into the “advanced options” you’re looking at the same dialogs you’ve known and loved since WinXP. It’s weird how unfinished it feels in places. Taskbar icons fire off on a single click, but you need to flip a checkbox five layers deep in one of those antiquated menus to make desktop icons do the same. The smorgasbords you get for right-clicking things look like a room full of mismanaged PMs screaming at each other.
  • You also have to do a bunch of antiquated checkbox clickery to install the Unix subsystem too, but complaining about a dated UI when you’re standing up an ersatz Linux box seems like the chocolate-and-peanut-butter of neckbearded hypocrisy, so let’s just agree to not go there. You can get a Linux subsystem on Windows now, which basically means you can have Linux and modern hardware with working power management and graphics drivers at the same time, which is pretty nice.
  • Pairing Apple’s multitouch trackpads with Windows only gets you one- and two-fingered gestures. C’mon. Really?
  • This is a common consensus here, after asking around a bit: perplexity that Microsoft would put an enormous (and ultimately successful) effort into re-pinning and hardening the foundations underneath the house, recladding it and putting in an amazing kitchen, but on the 2nd floor the hinges are on the wrong side of the doors and there’s a stair missing on the way to the basement.
  • I’m not surprised the Windows Store isn’t the go-to installer mechanism yet – that’s true on Macs, too – but my goodness pickings there are pretty slim. Somehow I have to go visit all these dodgy-looking websites to get the basic-utilities stuff sorted out, and it feels like an outreach failure of some kind. This is vaguely related to my next point, that:
  • The selection of what does vs. doesn’t come preinstalled is… strange. I feel like Microsoft has space to do something really interesting here that they’re not capitalizing on for some reason. Antitrust fears? I dunno. I just feel like they could have shipped this with, say, Notepad++ and a few other common utilities preinstalled and made a lot of friends.
  • The breakaway power cables are fantastic. A power brick with fast-charge USB built in and freeing up slots on the machine proper is extremely civilized. You can be sitting with your legs crossed and have the power plugged in, which I sincerely miss being able to do with underpowered 1st-gen Macbook Air chargers back in the mists of prehistory.
  • The Surface Dock is basically perfect. Power, Ethernet, two DisplayPorts and four USB ports over that same breakaway cable is excellent. If you’ve ever used a vintage IBM Thinkpad docking station, this is something you’ve been wishing Apple would make for the better part of a decade.
  • I assumed “Skype Preview” was a preview version of Skype. I wanted (and pay for) the whole thing, so I immediately uninstalled that and installed normal Skype, which it turns out is really outdated-looking and ugly on Win10. I was bewildered about why a premiere Microsoft-owned thing like Skype would look ugly on their flagship OS, so I did some research and discovered that “Skype Preview” isn’t a preview version of Skype. It’s the prettified modern Win10 version. So I reinstalled it and uninstalled Skype. I’m sure this is somehow my fault for not understanding this but in my defense: words mean things.
  • This hardware is really nice. The hinge works great, eject to tablet is crisp and works well, reversing it to the easel setup is both surprisingly good and for-real useful.

Anyway, this is where I am so far. More notes as I think of them.

Update:

  • Definitely turn off the two-finger-tap-to-right-click option – if you don’t and you’ve got fat hands like mine, sometimes it will get into a state where everything is a right-click, which is inexplicable and upsetting.
  • I saw my first tripped-over USB-C cable send a Macbook crashing to the floor today. I suspect it will not be the last.

http://exple.tive.org/blarg/2016/11/14/switching-sides/


Eitan Isaacson: Pain Management

Monday, November 14, 2016, 23:45

Close your eyes, take a deep breath and fast forward four years when Trump’s administration will be roundly rejected. Feel in your body the hope that will overwhelm you. Conjure up your future restored faith in humanity when people from all walks of life stand together against hate.

I have never felt as pessimistic and defeated as in the last week. The press will normalize him, the Democratic minority will indulge him, and voters will grow apathetic and disengaged. No matter the scandals, past and future, that will embroil Trump and his goons, they will continue to consolidate power. He can’t be “exposed” for who he really is; it has been in plain sight all along.

Trump and his kind will not go away, and the establishment is not coming back to rescue us. Cory Booker, Joe Biden, Michelle Obama, or any other Democrat star will not pull us out of this tailspin. They will try to convince us, as Hillary tried, that the electorate will embrace a competent centrist. They won’t, and the right will only grow in influence. This nightmare can endure for decades.

The only thing that will save us from a populist racist oligarch demagogue is a populist anti-racist anti-neoliberal progressive with a mobilized movement behind them and serious contenders up and down the ticket.

Before Trump got elected, we already had our work cut out for us: Black Lives Matter, equitable health care, reproductive rights, equal pay, housing justice, clean water, criminal justice, equitable education, prison abolition, gender justice, free Palestine, voting rights, indigenous rights, refugee rights, migrant rights, campaign finance reform, friggin’ climate change and climate justice.

We don’t get to put those issues aside until the next Obama is elected and we clear up our heads. We *amplify* those struggles and use their leverage to restore democracy and propel us forward to a revolution. We don’t have four years, we need to have our ducks in a row for the midterms in two.

Am I being too preachy? I’m sorry. This is my catharsis. It’s the take-charge method Penny Simkin recently taught us in class.

There are countless people whose security and future are called into question with this turn of events. If you, like me, are shrouded in privilege – don’t let it paralyze you. Don’t be an ally, be an actor. Own this struggle. I would bring up that famous Murri quote, but you already know it.

I know many smart, strategic, and dedicated people who work on this stuff every day, and I am so humbled and thankful they do this work and the sacrifices they make.

We are anticipating a wonderful life transition soon (more on that later?), as the dust settles from this election and as we find our stride as a family I look forward to working on our liberation with you all.

Close your eyes, take a deep breath, and borrow just a little bit of the hope and restored faith you will feel in four years.


https://blog.monotonous.org/2016/11/14/pain-management/


Air Mozilla: Mozilla Weekly Project Meeting, 14 Nov 2016

Monday, November 14, 2016, 22:00

Daniel Stenberg: I have toyota corola

Monday, November 14, 2016, 13:54

Modern cars have fancy infotainment setups, big screens and all sorts of computers with networked functionality built-in. Part of that fanciness is increasingly often a curl install. curl is a part of the standard GenIVI and Tizen offers for cars and is used in lots of other independent software installs too.

This usually affects my every day very little. Sure I’m thrilled over hundreds of millions of more curl installations in the world but the companies that ship them don’t normally contact me and curl is a really stable product by now so not a lot of them speak up on the issue trackers or mailing lists either (or if they do, they don’t tell us where they come from or what they’re working on).

[Photo: Toyota Corolla]

The main effect is that normal end users find my email address via the curl license text in products in cars to a higher degree. They usually find it in the about window or an open source license listing or similar. Often I suspect my email address is just about the only address listed.

This occasionally leads desperate users who have tried everything to eventually reach out to me. They can’t fix their problem, but since my email exists in their car, surely I can!

Here are three of my favorite samples that I saved.

November 13, 2016

Hello sir
I have Avalon 2016
Regarding the audio player, why there delay between audio and video when connect throw Bluetooth and how to fix it.

November 5, 2015

Hello,
I am using in a new Ford Mondeo the navigation system with SD Card FM5T-19H449-FC Europe F4.
I can read the card but  not write on it. I want to add to the card some POI's. Can you help me to do it?

June 8, 2015

Hello

I have toyota corola with multimedya system that you have its copyright.
I need a advice to know how to use the gps .
Now i cant use or see maps.
And i want to know how to add hebrew leng.

How do I respond?

I’m sad to say that I rarely respond at all. I can’t help them and I’ve learned over the years that just trying to explain how I have nothing to do with the product they’re using is often just too time consuming and energy draining to be worth it. I hope these people found the answers to the problems via other means.

The hacker news discussions on this post took off. I just want to emphasize that this post is not a complaint. I’m not whining over this. I’m just showing some interesting side-effects of my email in the license text. I actually find these emails interesting, sometimes charming and they help me connect to the reality many people experience out there.

Related: The Instagram and Spotify Hacking Ring

https://daniel.haxx.se/blog/2016/11/14/i-have-toyota-corola/


Myk Melez: Why Embedding Matters

Monday, November 14, 2016, 11:52

Lately, I’ve been thinking about what a new embedding strategy for Mozilla might look like. Mozilla has a great deal of history with embedding, and Gecko has long been (and still is) used in a variety of products besides Firefox. But lately the organization hasn’t prioritized embedding, and the options for it have dwindled.

Nevertheless, embedding still matters for Mozilla’s primary rendering engine, including the recently-announced Quantum, because it provides the “web compatibility defense” of expanded and diverse market share.

The more the engine is used in the world, and the more familiar web developers are with it, the more they’ll consider it (and web compatibility generally) when designing and building web applications.

Embedding also matters because users of web software (like a web browser) benefit from a fast and secure rendering engine with a user-friendly feature set, whether or not that software is provided by Mozilla.

Mozilla can mediate the Web most directly with Firefox itself, but it’ll never be the only provider of web software, and it can extend its influence (albeit with less control over the experience) by enabling other developers to reuse its engine in their products.

Finally, embedding matters because open source software components benefit from reuse, which increases project participation (bug reports and fixes, ports, market research data, etc.) and improves those components for all their consumers, including their primary/original ones.

“This technology could fall into the right hands.”

Over the next few weeks, I’ll blog more about the kinds of use cases an embedding strategy might address, and the kinds of projects that might satisfy those use cases.

Note that this is my personal blog (although I’m a Mozilla employee), and nothing here should be construed to represent official Mozilla plans and priorities.

https://mykzilla.org/2016/11/14/why-embedding-matters/


Myk Melez: Syndicating to Medium

Monday, November 14, 2016, 11:06

I’ve been experimenting with syndicating my blog posts to Medium. While I appreciate the syndicated, webby nature of the blogosphere, Medium has an appealing sense of place. It reminds me of the old Open Salon. And I’m curious how my posts will play there.

If you’re curious too, this post should link to its Medium equivalent—at least if you’re reading it on my blog, rather than Planet or another syndicator. Otherwise, you can find my posts and follow me on my Medium profile.

https://mykzilla.org/2016/11/14/syndicating-to-medium/


Robert O'Callahan: Handling Hardware Lock Elision In rr

Monday, November 14, 2016, 09:41

Intel's Hardware Lock Elision feature lets you annotate instructions with prefixes to indicate that they perform lock/unlock operations. The CPU then turns those into hardware memory transactions so that the instructions in the locked region are performed speculatively and only committed at the unlock. The difference between HLE and the more capable RTM transactional memory support is that HLE is supposed to be fully transparent. The prefixes are ignored on non-HLE-supporting CPUs so you can just add them to your code and things will hopefully get faster --- no CPUID checks are necessary. Unfortunately, by default, Intel's hardware performance counters count events in aborted transactions, even though they didn't really happen in terms of user-space effects. Thus when rr records a program that uses HLE, our conditional branch counter may report a value higher than the number of conditional branches that "really" executed, and this breaks rr. (FWIW we discovered this problem when Emilio was using rr to debug intermittent failures in Servo using the latest version of the Rust parking_lot crate.)

For RTM we have some short-term hacks to disable RTM usage in glibc, and the medium-term solution is to use "CPUID faulting" to trap CPUID and modify the feature bits to pretend RTM is not supported. This approach doesn't work for HLE because there is no need to check CPUID before using it.

Fortunately Intel provides an IN_TXCP flag that you can set on a hardware performance counter to indicate that it should not count events in aborted transactions. This is exactly what we need. However, for replay we need to be able to program the PMU to send an interrupt after a certain number of events have occurred, and the Linux kernel prevents us from doing that for IN_TXCP counters. Apparently that's because if you request an interrupt after a small number of events and then execute an HLE transaction that generates more than that number of events, the CPU will detect the overflow, abort the transaction, roll the counter back to its pre-transaction value, then the kernel notices there wasn't really an overflow, restarts the transaction, and you're in an infinite loop.

The solution to our dilemma is to use two counters to count conditional branches. One counter is used to generate interrupts, and it is allowed to count events in aborted transactions. Another counter uses IN_TXCP to avoid counting events in aborted transactions, and we use this counter only for measurement, never for generating interrupts. This setup works well. It means that during replay our interrupt might fire early, because the interrupt counter counted events in aborted transactions, but that's OK because we already have a mechanism to carefully step forward to the correct stopping point.

There is one more wrinkle. While testing this new approach I noticed that there are some cases where the IN_TXCP counter reports spurious events. This is obviously a nasty little bug in the hardware, or possibly the kernel. On my system you can reproduce it just by running perf stat -e r5101c4 -e r2005101c4 ls --- the second event is just the IN_TXCP version of the first event (retired conditional branches), so should always report counts less than or equal to the first event, but I get results like

 Performance counter stats for 'ls':
1,994,374 r5101c4
1,994,382 r2005101c4

I have a much simpler testcase than ls which I'll try to get someone at Intel to look at. For now, we're working around it in rr by using the results of the regular counter when the IN_TXCP counter's value is larger. This should work as long as an IN_TXCP overcount doesn't occur in an execution sequence that also uses HLE, and both of those are hopefully rare.
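
The selection logic for that workaround is tiny. As a sketch (in Rust rather than rr's actual C++, with names of my own invention), it boils down to something like this:

fn retired_conditional_branches(interrupt_counter: u64, in_txcp_counter: u64) -> u64 {
  // The IN_TXCP counter excludes events from aborted transactions and is
  // normally the one we want, but it can spuriously overcount. The regular
  // counter can overcount too when HLE aborts happen, but it doesn't show
  // the spurious bug, so fall back to it whenever IN_TXCP reads higher.
  if in_txcp_counter > interrupt_counter {
    interrupt_counter
  } else {
    in_txcp_counter
  }
}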

http://robert.ocallahan.org/2016/11/handling-hardware-lock-elision-in-rr.html


Niko Matsakis: Parallel iterators, part 3: Consumers

Monday, November 14, 2016, 08:00

This post is the (long awaited, or at least long promised) third post in my series on Rayon’s parallel iterators. The previous two posts were some time ago, but I’ve been feeling inspired to push more on Rayon lately, and I remembered that I had never finished this blog post series.

Here is a list of the other posts in the series. If you haven’t read them, or don’t remember them, you will want to do so before reading this one:

  1. The first post, “Foundations”, explains how sequential iterators work. It is also a nice introduction to some of the key techniques for zero-cost abstraction.
  2. The second post, “Producers”, then shows how we can adapt the sequential iterator approach to permit parallel iteration. It focuses on the concept of parallel producers: these are basically splittable iterators. They give you the ability to say “break this producer into two producers, one of which produces the left half, and one the right half”. You can then process those two halves in parallel. When the number of work items gets small enough, you can convert a producer into a sequential iterator and consume it sequentially.

This third post will introduce parallel consumers. Parallel consumers are the dual to a parallel producer: they abstract out the parallel algorithm. We’ll use this to extend beyond the sum() action and cover how we can implement a collect() operation that efficiently builds up a big vector of data.

(Note: originally, I had intended this third post to cover how combinators like filter() and flat_map() work. These combinators are special because they produce a variable number of elements. However, in writing this post, it became clear that it would be better to first introduce consumers, and then cover how to extend them to support filter() and flat_map().)

Motivating example

In this post, we’ll cover two examples. The first will be the running example from the previous two posts, a dot-product iterator chain:

vec1.par_iter()
    .zip(vec2.par_iter())
    .map(|(i, j)| i * j)
    .sum()

After that, we’ll look at a slight variation, where instead of summing up the partial products, we collect them into a vector:

let c: Vec<_> =
  vec1.par_iter()
      .zip(vec2.par_iter())
      .map(|(i, j)| i * j)
      .collect(); // <-- only thing different

Review: parallel producers

In the second post, I introduced the basics of how parallel iterators work. The key idea was the Producer trait, which is a variant on iterators that is amenable to “divide-and-conquer” parallelization:

trait Producer: IntoIterator {
  // Divide into two producers, one of which produces data
  // with indices `0..index` and the other with indices `index..`.
  fn split_at(self, index: usize) -> (Self, Self);
}

Unlike normal iterators, which only support extracting one element at a time, a parallel producer can be split into two – and this can happen again and again. At some point, when you think you’ve got small enough pieces, you can convert it into an iterator (you see it extends IntoIterator) and work sequentially.
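
To make this concrete, here is a minimal slice-backed producer. This is my own sketch written against the simplified Producer trait above, not Rayon's actual implementation; splitting the producer is just splitting the slice.

struct SliceProducer<'a> {
  slice: &'a [i32],
}

// Converting to a sequential iterator just walks the slice.
impl<'a> IntoIterator for SliceProducer<'a> {
  type Item = i32;
  type IntoIter = std::iter::Cloned<std::slice::Iter<'a, i32>>;
  fn into_iter(self) -> Self::IntoIter {
    self.slice.iter().cloned()
  }
}

impl<'a> Producer for SliceProducer<'a> {
  fn split_at(self, index: usize) -> (Self, Self) {
    // Slices already know how to split into `0..index` and `index..`,
    // which is exactly the contract of the trait.
    let (left, right) = self.slice.split_at(index);
    (SliceProducer { slice: left }, SliceProducer { slice: right })
  }
}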

To see this in action, let’s revisit the sum_producer() function that I covered in my previous blog post; sum_producer() basically executes the sum() operation, but extracting data from a producer. Later on in the post, we’re going to see how consumers abstract out the sum part of this code, leaving us with a generic function that can be used to execute all sorts of parallel iterator chains.

fn sum_producer<P>(mut producer: P, len: usize) -> i32
    where P: Producer<Item=i32>
{
  if len > THRESHOLD {
    // Input too large: divide it up
    let mid = len / 2;
    let (left_producer, right_producer) = producer.split_at(mid);
    let (left_sum, right_sum) = rayon::join(
      || sum_producer(left_producer, mid),
      || sum_producer(right_producer, len - mid));
    left_sum + right_sum
  } else {
    // Input too small: sum sequentially
    let mut sum = 0;
    for value in producer {
      sum += value;
    }
    sum
  }
}

Enter parallel consumers

What we would like to do in this post is to try and make an abstract version of this sum_producer() function, one that can do all kinds of parallel operations, rather than just summing up a list of numbers. The way we do this is by introducing the notion of a parallel consumer. Consumers represent the “action” at the end of the iterator; they define what to do with each item that gets produced:

vec1.par_iter()           // defines initial producer...
    .zip(vec2.par_iter()) // ...wraps to make a new producer...
    .map(|(i, j)| i * j)  // ...wraps again...
    .sum()                // ...defines the consumer

The Consumer trait looks like this. You can see it has a few more moving parts than producers.

// `Item` is the type of value that the producer will feed us.
pub trait Consumer<Item>: Send + Sized {
  // Type of value that consumer produces at the end.
  type Result: Send;

  // Splits the consumer into two consumers at `index`.
  // Also returns a *reducer* for combining their results afterwards.
  type Reducer: Reducer<Self::Result>;
  fn split_at(self, index: usize) -> (Self, Self, Self::Reducer);

  // Convert the consumer into a *folder*, which can sequentially
  // process items one by one and produce a result.
  type Folder: Folder<Item, Result=Self::Result>;
  fn into_folder(self) -> Self::Folder;
}

The basic workflow for driving a producer/consumer pair is as follows:

  1. You start out with one producer/consumer pair; using split_at(), these can be split into two pairs and then those pairs can be processed in parallel. Splitting a consumer also returns something called a reducer; we’ll get to its role in a bit.
  2. At some point, to process sequentially, you convert the producer into an iterator using into_iter() and convert the consumer into a folder using into_folder(). You then draw items from the producer and feed them to the folder. At the end, the folder produces a result (of type C::Result, where C is the consumer type) and this is returned.
  3. As we walk back up the stack, at each point where we had split the consumer into two, we now have two results, which must be combined using the reducer (also returned by split_at()).

Let’s take a closer look at the folder and reducer. Folders are defined by the Folder trait, a simplified version of which is shown below. They can be fed items one by one and, at the end, produce some kind of result:

pub trait Folder<Item> {
  type Result;
  
  /// Consume next item and return new sequential state.
  fn consume(self, item: Item) -> Self;
  
  /// Finish consuming items, produce final result.
  fn complete(self) -> Self::Result;
}

Of course, when we split, we will have two halves, both of which will produce a result. Thus when a consumer splits, it also returns a reducer that knows how to combine those results back again. The Reducer trait is shown below. It just consists of a single method reduce():

pub trait Reducer<Result> {
  /// Reduce two final results into one; this is executed after a
  /// split.
  fn reduce(self, left: Result, right: Result) -> Result;
}

Generalizing sum_producer()

In effect, the consumer abstracts out the “parallel operation” that the iterator is going to perform. Armed with this consumer trait, we can now revisit the sum_producer() method we saw before. That function was specific to adding up a series of values, but we’d like to produce an abstract version that works for any consumer. In the Rayon source, this function is called bridge_producer_consumer. Here is a simplified version. It is helpful to compare it to sum_producer() from before; I’ve included comments in the code to highlight those differences.

// `sum_producer` was specific to summing up a series of `i32`
// values, which produced another `i32` value. This version is generic
// over any producer/consumer. The consumer consumes `P::Item` (whatever
// the producer produces) and then the fn as a whole returns a
// `C::Result`.
fn bridge_producer_consumer<P, C>(len: usize,
                                  mut producer: P,
                                  mut consumer: C)
                                  -> C::Result
    where P: Producer, C: Consumer<P::Item>
{
  if len > THRESHOLD {
    // Input too large: divide it up
    let mid = len / 2;
    
    // As before, split the producer into two halves at the mid-point.
    let (left_producer, right_producer) = producer.split_at(mid);

    // Also divide the consumer into two consumers.
    // This also gives us a *reducer* for later.
    let (left_consumer, right_consumer, reducer) = consumer.split_at(mid);
        
    // Parallelize the processing of the left/right halves,
    // producing two results.
    let (left_result, right_result) =
      rayon::join(
        || bridge_producer_consumer(mid, left_producer, left_consumer),
        || bridge_producer_consumer(len - mid, right_producer, right_consumer));
        
    // Finally, reduce the two intermediate results.
    // In `sum_producer`, this was `left_result + right_result`,
    // but here we use the reducer.
    reducer.reduce(left_result, right_result)
  } else {
    // Input too small: process sequentially.
    
    // Get a *folder* from the consumer.
    // In `sum_producer`, this was `let mut sum = 0`.
    let mut folder = consumer.into_folder();
    
    // Convert producer into sequential iterator.
    // Feed each item to the folder in turn.
    // In `sum_producer`, this was `sum += item`.
    for item in producer {
      folder = folder.consume(item);
    }
    
    // Convert the folder into a result.
    // In `sum_producer`, this was just `sum`.
    folder.complete()
  }
}

Implementing the consumer for sum()

Next, let’s look at how one might implement the sum consumer, so that we can use it with bridge_producer_consumer(). As before, we’ll just focus on a sum that works on i32 values, to keep things relatively simple. We’ll start out by declaring a trio of types (consumer, folder, and reducer).

struct I32SumConsumer {
  // This type requires no state. This will be important
  // in the next post!
}
struct I32SumFolder {
  // Current sum thus far.
  sum: i32
}
struct I32SumReducer {
  // No state here either.
}

Next, let’s implement the Consumer trait for I32SumConsumer:

impl Consumer<i32> for I32SumConsumer {
  type Folder = I32SumFolder;
  type Reducer = I32SumReducer;
  type Result = i32;
  
  // Since we have no state, "splitting" just means making some
  // empty structs:
  fn split_at(self, _index: usize) -> (Self, Self, Self::Reducer) {
    (I32SumConsumer { }, I32SumConsumer { }, I32SumReducer { })
  }

  // Folder starts out with a sum of zero.
  fn into_folder(self) -> Self::Folder {
    I32SumFolder { sum: 0 }
  }
}

The folder is also very simple. It takes each value and adds it to the current sum.

impl Folder<i32> for I32SumFolder {
  type Result = i32;
  
  fn consume(self, item: i32) -> Self {
    // we take ownership of the current folder
    // at each step, and produce a new one
    // as the result:
    I32SumFolder { sum: self.sum + item }
  }
    
  fn complete(self) -> i32 {
    self.sum
  }
}

And, finally, the reducer just sums up two sums. The self goes unused since our reducer doesn’t have any state of its own.

impl Reducer<i32> for I32SumReducer {
  fn reduce(self, left: i32, right: i32) -> i32 {
    left + right
  }
}
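
To see how these pieces snap together, here is a small driver of my own (using the simplified types from this post, plus the SliceProducer sketch from earlier) that sums a slice by handing a producer/consumer pair to bridge_producer_consumer():

fn par_sum(data: &[i32]) -> i32 {
  // The bridge takes care of splitting, joining and reducing;
  // we just supply the producer and the consumer.
  bridge_producer_consumer(data.len(),
                           SliceProducer { slice: data },
                           I32SumConsumer { })
}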

Implementing the consumer for collect()

Now that we’ve built up this generic framework for consumers, let’s put it to use by defining a second consumer. This time I want to define how collect() works; just like in sequential iterators, collect() allows users to accumulate the parallel items into a collection. In this case, we’re going to examine one particular variant of collect(), which writes values into a vector:

let c: Vec<_> =
  vec1.par_iter()
      .zip(vec2.par_iter())
      .map(|(i, j)| i * j)
      .collect(); // <-- only thing different

In fact, internally, Rayon’s collect() for vectors is written in terms of a more efficient primitive, collect_into(). collect_into() takes a mutable reference to a vector and stores the results in there: this allows you to re-use a pre-existing vector and avoid allocation overheads. It’s particularly good for double buffering scenarios. To use collect_into() explicitly, one would write something like:

  let mut c: Vec<_> = vec![];
  vec1.par_iter()
      .zip(vec2.par_iter())
      .map(|(i, j)| i * j)
      .collect_into(&mut c);

collect_into() first ensures that the vector has enough capacity for the items in the iterator and then creates a particular consumer that, for each item, will store it into the appropriate place in the vector.

We’re going to walk through a simplified version of the collect_into() consumer. This version will be specialized to vectors of i32 values; moreover, it’s going to avoid any use of unsafe code and just assume that the vector is initialized to the right length (perhaps with 0 values). The real version works for arbitrary types and avoids initialization by using a dab of unsafe code (just about the only unsafe code in the parallel iterators part of Rayon, actually).

Let’s start with the type definitions for the consumer, folder, and reducer. They look like this:

struct I32CollectVecConsumer<'c> {
  data: &'c mut [i32],
}
struct I32CollectVecFolder<'c> {
  data: &'c mut [i32],
  index: usize,
}
struct I32CollectVecReducer {
}

These type definitions kind of suggest to you an outline for how this is going to work. When the consumer starts, it has a mutable slice of integers that it will eventually store into (the &'c mut [i32]); the lifetime 'c here represents the span of time in which the collection is happening. Remember that in Rust a mutable reference is also a unique reference, which means that we don’t have to worry about other threads reading or messing with our array while we store into it.

When the time comes to switch to the folder, we still have a slice to store into, but now we also have an index. That tracks how many items we have stored thus far.

Finally, the reducer struct is empty, because once the values are stored, there really isn’t any data to reduce. For collect, the reduction step will just be a no-op.

OK, let’s see how the consumer trait is defined. The idea here is simple: each time the consumer is split at some index N, it splits its mutable slice into two halves at N, and returns two consumers, one with each half:

impl<'c> Consumer<i32> for I32CollectVecConsumer<'c> {
  type Folder = I32CollectVecFolder<'c>;
  type Reducer = I32CollectVecReducer;
  
  // The "result" of a `collect_into()` is just unit.
  // We are executing this for its side effects.
  type Result = ();
  
  fn split_at(self, index: usize) -> (Self, Self, Self::Reducer) {
    // Divide the slice into two halves at `index`:
    let (left, right) = self.data.split_at_mut(index);
    
    // Construct the new consumers:
    (I32CollectVecConsumer { data: left },
     I32CollectVecConsumer { data: right },
     I32CollectVecReducer { })
  }

  // When we convert to a folder, give over the slice and start
  // the index at 0.
  fn into_folder(self) -> Self::Folder {
    I32CollectVecFolder { data: self.data, index: 0 }
  }
}

The folder trait is also pretty simple. Each time we consume a new integer, we’ll store it into the slice and increment index:

impl<'c> Folder<i32> for I32CollectVecFolder<'c> {
  type Result = ();
  
  fn consume(mut self, item: i32) -> Self {
    self.data[self.index] = item;
    I32CollectVecFolder { data: self.data, index: self.index + 1 }
  }
    
  fn complete(self) {
  }
}

Finally, since collect_into() has no result, the “reduction” step is just a no-op:

impl Reducer<()> for I32CollectVecReducer {
  fn reduce(self, _left: (), _right: ()) {
  }
}
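
The wiring for the collect consumer is identical. As a sketch (again using this post's simplified types and the SliceProducer from earlier, so it amounts to a parallel copy), one could drive it like this:

fn par_collect_into(data: &[i32], out: &mut [i32]) {
  // The simplified consumer assumes the output slice is already
  // allocated and initialized to the right length.
  assert_eq!(data.len(), out.len());
  bridge_producer_consumer(data.len(),
                           SliceProducer { slice: data },
                           I32CollectVecConsumer { data: out });
}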

Conclusion

This post continued our explanation of how Rayon’s parallel iterators work. Whereas the previous post introduced parallel producers, this post showed how we can abstract out parallel consumers as well. Parallel consumers basically represent the “parallel actions” at the end of a parallel iterator, like sum() or collect().

Using parallel consumers allows us to have one common routine, bridge_producer_consumer(), that is used to draw items from a producer and feed them to a consumer. This routine thus defines precisely the parallel logic itself, independent from any particular parallel iterator. In future posts, we’ll discuss a bit how that same routine can also use some adaptive techniques to try and moderate splitting overhead automatically and dynamically.

I want to emphasize something about this post and the previous one: you may have noticed a general lack of unsafe code. One of the very cool things about Rayon is that the vast majority of the unsafety is confined to the join() implementation. For the most part, the parallel iterators just build on this new abstraction.

It is hard to overstate the benefits of confining unsafe code in this way. For one thing, I’ve caught a lot of bugs in the iterator code I was writing. But even better, it means that it is relatively easy to unit test and review parallel iterator PRs. We don’t have to worry about crazy data-race bugs that only crop up if we test for hours and hours. It’s enough to just make sure we use a variant of bridge_producer_consumer() that splits very deeply, so that we test the split/recombine logic.

http://smallcultfollowing.com/babysteps/blog/2016/11/14/parallel-iterators-part-3-consumers/


Christopher Arnold

Monday, November 14, 2016, 05:31


At last year’s Game Developers Conference I had the chance to experience new immersive video environments that are being created by game developers releasing titles for the new Oculus and HTC Vive and Google Daydream platforms.  One developer at the conference, Opaque Multimedia, demonstrated "Earthlight", which gave the participant an opportunity to crawl on the outside of the International Space Station as the earth rotated below.  In the simulation, a Microsoft Kinect sensor was following the position of my hands.  But what I saw in the visor was that my hands were enclosed in an astronaut’s suit.  The visual experience was so compelling that when my hands missed the rungs of the ladder I felt a palpable sense of urgency because the environment was so realistically depicted.  (The space station was rendered as a scale model of the actual space station using the "Unreal" game physics engine.)  The experience was so far beyond what I’d experienced a decade ago with the crowd-sourced simulated environments like Second Life, where artists create 3D worlds in a server-hosted environment that other people could visit as avatars.

Since that time I’ve seen some fascinating demonstrations at Mozilla’s Virtual Reality developer events.  I’ve had the chance to witness a 360 degree video of a skydive, used the WoofbertVR application to visit real art gallery collections displayed in a simulated art gallery, spectated a simulated launch and lunar landing of Apollo 11, and browsed 360 photography depicting dozens of fascinating destinations around the globe.  This is quite a compelling and satisfying way to experience visual splendor depicted spatially.  With the New York Times and IMAX now entering the industry, we can anticipate an incredible surfeit of media content to take us to places in the world we might never have a chance to go.

Still, the experiences of these simulated spaces seem very ethereal.  Which brings me to another emerging field.  At Mozilla Festival in London a few years ago, I had a chance to meet Yasuaki Kakehi of Keio University in Japan, who was demonstrating a haptic feedback device called Techtile.  The Techtile was akin to a microphone for physical feedback that could then be transmitted over the web to another mirror device.  When he put marbles in one cup, another person holding an empty cup could feel the rattle of the marbles as if the same marble impacts were happening on the sides of the empty cup held by the observer.  The sense was so realistic, it was hard to believe that it was entirely synthesized and transmitted over the Internet.  Subsequently, at the Consumer Electronics Show, I witnessed another of these haptic speakers.  But this one conveyed the sense not by mirroring precise physical impacts, but by giving precisely timed pulses, which the holder could feel as an implied sense of force direction without the device actually moving the user's hand at all.  It was a haptic illusion instead of a precise physical sensation.

As haptics work advances it has potential to impact common everyday experiences beyond the theoretical and experimental demonstrations I experienced.  Haptic devices are available in the new Honda cars on sale this year as Road Departure Mitigation, whereby steering wheels can simulate rumble strips on the sides of a lane just by sensing the painted lines on the pavement with cameras.
I am also very excited to see this field expand to include music.  At Ryerson University's SMART lab, Dr. Maria Karam, Dr. Deborah Fels and Dr. Frank Russo applied the concepts of haptics and somatosensory depiction of music to people who didn't have the capability of appreciating music aurally.  Their first product, called the Emoti-chair, breaks the frequency range of music to depict different audio qualities spatially to the listener's back.  This is based on the concept that the human cochlea is essentially a large coiled surface upon which sounds of different frequencies resonate and are felt at different locations.  While I don't have perfect pitch, I think having a spatial-perception of tonal scale would allow me to develop a cognitive sense of pitch correctness to compensate using a listening aid like this.  Fortunately, Dr. Karam is advancing this work to introduce new form factors to the commercial market in coming years.

Over many years I have had the chance to study various forms of folk percussion.  One of the most interesting drumming experiences I have had was a visit to Lombok, Indonesia where I had the chance to see a Gamelan performance in a small village along with the large Gendang Belek drums accompanying.  The Gendang Belek is a large barrel drum worn with a strap that goes over the shoulders.  When the drum is struck the reverberation is so fierce and powerful that it shakes the entire body, by resonating through the spine.  I had an opportunity to study Japanese Taiko while living in Japan.  The taiko, resonates in the listener by resonating in the chest.  But the experience of bone-conduction through the spine is altogether a more intense way to experience rhythm.

Because I am such an avid fan of physical experiences of music, I frequently gravitate toward bass-heavy music.  I tend to play it in a sub-woofer-heavy car stereo, or seek out experiences to hear this music in nightclub or festival performances where large speakers animate the lower frequencies of music.  I can imagine that if more people had the physical experience of drumming that I've had, instead of just the auditory experience of it, more people would enjoy making music themselves.

As more innovators like TADs Inc. (an offshoot of the Ryerson University project) bring physical experiences of music to the general consumer, I look forward to experiencing my music in greater depth.


http://ncubeeight.blogspot.com/2016/11/at-last-years-game-developers.html


Andreas Gal: Trump is dangerous but his supporters are not the enemy

Monday, November 14, 2016, 01:12

On November 8th, my chosen home has elected a racist, sexist, nativist, know-nothing, don’t care to know anything, narcissistic buffoon for president. During his campaign, Trump has made many outrageous statements and promises that are completely idiotic. I won’t bore you with trying to enumerate them. I am horrified and appalled that this orange circus peanut is our next president. I want to do more than just be upset about it, and I decided I’ll start with talking about it.

First to my fellow liberal citizens: please stop vilifying people who voted for Trump. They are not the problem and they are predominantly not like him. The world is globalizing and changing quickly, causing uncertainty and fear for many. That doesn’t make them bad people. In fact, they are the only people who can save us from Trump whenever the next election comes around. We need to embrace them, engage them, and try to convince them that there is a better way than Trump’s way.

Second, I would like to address my fellow citizens who voted Trump: You want change. I get that. I want change too. I agree with much of your resentment of Washington. I even agree that Hillary was a really uninspiring candidate (though I do think she would have made an ok president). The problem is that the guy you voted for is not going to change things for the better for you. Don’t believe me. Just believe him. Trump has a lifetime history of exploiting the weak and poor to enrich himself. He has bragged in the past how he exploits his influence to bend the law for profit, and how he exploits his fame to assault and degrade women. Stop justifying his behavior and stop pretending he’ll be any better as president than he has been as non-president for 70 years. Best case he’ll be just as bad as he was so far in his life. Worst case, he’ll be worse, and we’ll all pay the price.

I believe in Democracy. Trump is our president-elect. He’ll assume the office of the president on January 20, and all indications so far point towards a pretty disastrous presidency. It won’t be the end of the world as we know it, but it’s clear he meant every vile word he said as a candidate. He just confirmed he wants to “deport 2-3 million illegal immigrants” immediately. That’s almost 1% of the US population. And while he claims we’ll only deport criminals, just pause for a moment and think about the scale of this. He’ll go ahead and deport 3,000,000 individuals. Yes, that’s 6 zeroes. If we pack 30 people into a bus, that’s 100,000 bus trips. And if we want to uphold our constitution and due process, judges will have to order deportations 3 MILLION TIMES. The scale of this operation is absurd, and even if we get it right 99% of the time, we’ll end up deporting tens of thousands of U.S. citizens who don’t speak English well, or didn’t hire the right lawyer to defend them, or didn’t have the right paperwork, just as Operation Wetback did in the dark past.

Of course if you ask Donald Trump, he’ll tell you none of this will be the case, because he knows how to do all of this and it’ll be terrific and great. And this is the biggest problem with Donald Trump. He just isn’t that bright apparently, and pretty much believes in magic. Narcissists often do. Trump believes he is infallible, he believes he knows everything better, and he habitually ignores reality and facts. Unfortunately thats not how the real world works, and if you let someone like that steer the country, the consequences will be very real and very painful for a lot of people.

There is a very high chance that we’ll have to resist Donald Trump. And I don’t mean in a violent sense. We are Americans. We cherish our democracy. So let’s stop talking about revolution. Donald Trump will have to be opposed peacefully and forcefully and legally, by convincing the majority of this country that Donald Trump’s way is not the American way. And, quite frankly, it’ll likely come down to all of us individually. I have very little faith in the GOP being able to stand up to Donald Trump’s authoritarian impulses. The GOP is Trump’s party now. Many in the GOP who seem like reasonable human beings have embraced Trump because they simply don’t have the backbone to oppose someone like Donald Trump. Paul Ryan is the best example of this. He has folded to Trump’s language and agenda time and time again. So don’t get your hopes up if Ryan says there won’t be a deportation force. Trump will ratchet up his aggressive language, and Ryan will fall in line. This has happened too many times before to hope it’ll change.

So it’s on us now as Americans to stand up for who we are. We are not Trump, even though he’ll be our president for some time. We may be flawed sometimes, but at our core we are a patriotic, civil, and brave people who believe in freedom and opportunity for everyone. I wasn’t born here but that’s why I decided to live here. I am proud to be an American and I am proud of my fellow Americans. We are all in this together, whether you voted for him or her. As long as we don’t forget that, no harm will come to our country.

PS: Trump named a white nationalist as his senior advisor a couple hours after I posted this. Please wake up if you still think this isn’t going to be as bad as it seems.


Filed under: Politics

https://andreasgal.com/2016/11/13/trump-is-dangerous-but-his-supporters-are-not-the-enemy/


Daniel Pocock: Are all victims of French terrorism equal?

Sunday, November 13, 2016, 13:50

Some personal observations about the terrorist atrocities around the world based on evidence from Wikipedia and other sources

The year 2015 saw a series of distressing terrorist attacks in France. 2015 was also the 30th anniversary of the French Government's bombing of a civilian ship at port in New Zealand, murdering a photographer who was on board at the time. This horrendous crime has been chronicled in various movies including The Rainbow Warrior Conspiracy (1989) and The Rainbow Warrior (1993).

The Paris attacks are a source of great anxiety for the people of France but they are also an attack on Europe and all civilized humanity as well. Rather than using them to channel more anger towards Muslims and Arabs with another extended (yet ineffective) state of emergency, isn't it about time that France moved on from the evils of its colonial past and "drains the swamp" where unrepentant villains are thriving in its security services?

François Hollande and Ségolène Royal: Royal's brother Gérard Royal allegedly planted the bomb in the terrorist mission to New Zealand. It is ironic that Royal is now Minister for Ecology while her brother sank the Greenpeace flagship. If François and Ségolène had married (they have four children together), would Gérard be the president's brother-in-law or terrorist-in-law?

The question has to be asked: if it looks like terrorism, if it smells like terrorism, if the victim of that French Government atrocity is as dead as the victims of Islamic militants littered across the floor of the Bataclan, shouldn't it also be considered an act of terrorism?

If it was not an act of terrorism, then what is it that makes it differ? Why do French officials refer to it as nothing more than "a serious error", the term used by Prime Minister Manuel Valls during a recent visit to New Zealand in 2016? Was it that the French officials felt it was necessary for Liberté, égalité, fraternité? Or is it just a limitation of the English language that we only have one word for terrorism, while French officials have a different word for such acts carried out by those who serve their flag?

If the French government are sincere in their apology, why have they avoided releasing key facts about the atrocity, like who thought up this plot and who gave the orders? Did the soldiers involved volunteer for a mission with the code name Opération Satanique, or did any other members of their unit quit rather than have such a horrendous crime on their conscience? What does that say about the people who carried out the orders?

If somebody apprehended one of these rogue employees of the French Government today, would they be rewarded with France's highest honour, like those tourists who recently apprehended an Islamic terrorist on a high-speed train?

If terrorism is such an absolute evil, why was it so easy for the officials involved to progress with their careers? Would an ex-member of an Islamic terrorist group be able to subsequently obtain US residence and employment as easily as the French terror squad's commander Louis-Pierre Dillais?

When you consider the comments made by Donald Trump recently, the threats of violence and physical aggression against just about anybody he doesn't agree with, is this the type of diplomacy that the US will practice under his rule commencing in 2017? Are people like this motivated by a genuine concern for peace and security, or are these simply criminal acts of vengeance backed by political leaders with the maturity of schoolyard bullies?

https://danielpocock.com/are-all-victims-of-french-terrorism-equal


Cameron Kaiser: 45.5.0 final available

Sunday, November 13, 2016, 02:13

The final release of TenFourFox 45.5.0 (downloads, hashish, er, hashes, release notes) is available. Pretty much everything made it, including the hybrid-endian JavaScript engine (the LE portion of IonPower-NVLE), the AltiVec VP9 IDCT/IADST/IHT transformations, the MP3 refactoring and the new custom in-browser prefpane. There is also a fix for PostScript-based font blocking, which apparently glitched in 45. Assuming all goes well and there are no major regressions, this will go live either late Sunday or early Monday due to a planned power outage which will affect Floodgap on Tuesday.

Meanwhile, I still don't have a good understanding of what's wrong with Amazon Music (still works great in 38.10), nor the issue with some users being unable to make changes to their default search engine stick. This is the problem with a single developer, folks: what I can't replicate I can't repair. I have a couple other theories in that thread for people to respond to.

Next up will be actually ripping some code out for a change. I'm planning to completely eviscerate telemetry support since we have no infrastructure to manage it and it's wasted code, as well as retina Mac support, since no retina Mac can run 10.6. I don't anticipate these being major speed boosts but they'll help and they'll make the browser smaller. Since we don't have to maintain good compatibility with Mozilla source code anymore I have some additional freedom to do bigger surgeries like these. I'll also make a first cut at the non-volatile portion of IonPower-NVLE by making floating point registers in play non-volatile (except for the volatiles like f1 that the ABI requires to be live also); again, not a big boost, but it will definitely reduce stack pressure and should improve the performance of ABI-compliant calls. User agent switching and possibly some more AltiVec VP9 work are also on the table, but may not make 45.6.

The other thing that needs to be done is restoring our ability to do performance analysis because Shark and Sample on 10.4 freak out trying to resolve symbols from these much more recent gcc builds. The solution would seem to be a way to get program counter samples without resolving them, and then give that to a tool like addr2line or even gdb7 itself to do the symbol resolution instead, but I can't find a way to make either Shark or Sample not resolve symbols. Right now I'm disassembling /usr/bin/sample (since Apple apparently doesn't offer the source code for it) to see how it gets those samples and it seems to reference a mysterious NSSampler in the CHUD VM tools private framework. Magic Hat can dump the class but the trick is how to work with it and which selectors it will allow. More on that later.

http://tenfourfox.blogspot.com/2016/11/4550-final-available.html


Emma Irwin: Diversity & Inclusion for Participation – “A Plan for Strategy”

Saturday, November 12, 2016, 04:17


In the most recent Heartbeat, I consulted with Mozilla’s Diversity & Inclusion lead Larissa Shapiro, and others championing the discussion, about a strategy for D&I in Participation. I’m really excited and passionate about this work, and even though this is very, very early (this is only a plan for a strategy), I wanted to share it now to gather as much feedback as possible.

Note: I’m using screenshots from a presentation, but have included the actual text in image alt-tags for accessibility.

Right now the proposed ‘Plan for a Strategy’ has three phases:

[Presentation slide: Diversity & Inclusion for Participation]

Designing a strategy for D&I will have some unique challenges. We know this. To get started we need to understand where we are now, who we are, why we are as we are — and what attitudes and practices exist that enhance, or restrict our ability to effectively bring in, and sustain the participation of diverse groups.

The first phase is all about gaining insights into these and other important questions through focus groups, interviews, and existing data.

[Presentation slide: Diversity & Inclusion for Participation]

Insight gathering and research will be focused in these key areas:

[Presentation slide: Diversity & Inclusion for Participation]

By Phase 2 we’ll have formed a number of important hypotheses for influencing D&I in investment areas aligned with Mozilla’s overall D&I strategy. Investment areas are currently proposed to be:

[Presentation slide: Diversity & Inclusion for Participation]

Experimentation is critical to developing a D&I strategy for Participation. And although it’s identified here as a single ‘phase’, I envision experimentation, learning, and iterating on what we learn to be THE process of building a diverse and inclusive Participation at Mozilla.

[Presentation slide: Diversity & Inclusion for Participation]

Here’s the current timeline:

  1. Feedback on this plan  – Ongoing, but especially useful leading up to December 5th
  2. Phase 1 – Gaining Insights. Begins the week of November 14th leading into the Mozilla All Hands meeting in December.
  3. Phase 2 – Early Experiment Design – Mozilla All Hands meeting in December.
  4. Phase 2 – Experiment Design & Implementation – Remainder of 2016 into 2017.
  5. Phase 3 – Strategy Development – 2017.

 

I would love to hear your ideas, concerns, and feedback on this ‘proposal’, which WILL itself evolve.

 


http://tiptoes.ca/diversity-inclusion-for-participation-a-plan-for-strategy/


Daniel Stenberg: curl and TLS 1.3

Saturday, November 12, 2016, 02:31

Draft 18 of the TLS version 1.3 spec was published at the end of October 2016.

Both Firefox and Chrome already have test versions out with TLS 1.3 enabled. Firefox 52 will have it enabled by default, and while Chrome will ship it, I couldn’t figure out exactly when we can expect it to be on by default.

Over the last few days we’ve merged TLS 1.3 support into curl, primarily in this commit by Kamil Dudka. Both the command line tool and libcurl will negotiate TLS 1.3 in the next version (7.52.0 – planned release date at the end of December 2016) if built with a TLS library that supports it and told to do so by the user.

The two TLS libraries that will speak TLS 1.3 when built with curl right now are NSS and BoringSSL. The plan is to gradually adjust curl over time as the other libraries start to support 1.3 as well. As always, we will appreciate your help in making this happen!

Right now, there’s also the minor flux in that servers and clients may end up running implementations of different draft versions of the TLS spec which contributes to a layer of extra fun!

Three current TLS 1.3 test servers to play with: https://enabled.tls13.com/, https://www.allizom.org/ and https://tls13.crypto.mozilla.org/. I doubt any of these will give you any guarantees of functionality.
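
If you just want to see what one of those servers negotiates, here is a minimal sketch using Python's ssl module rather than curl. It assumes a Python 3.7+ / OpenSSL 1.1.1+ stack that already speaks the final TLS 1.3, which no stock build had back in 2016; the host is one of the test servers listed above.

```python
# Minimal sketch: report the TLS protocol version a server negotiates.
import socket
import ssl

host = "tls13.crypto.mozilla.org"  # one of the test servers above

ctx = ssl.create_default_context()
# To insist on 1.3 and fail loudly otherwise (Python 3.7+):
# ctx.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(host, "negotiated", tls.version())  # e.g. "TLSv1.3"
```

With the curl tool itself, 7.52.0 adds a --tlsv1.3 option (if I remember the flag correctly) to explicitly request the new version when the backing TLS library supports it.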

TLS 1.3 offers a few new features that allow clients such as curl to do subsequent TLS connections much faster, with only 1 or even 0 RTTs, but curl has no code for any of those features yet.

https://daniel.haxx.se/blog/2016/11/12/curl-and-tls-1-3/


Air Mozilla: Foundation Demos November 11 2016

Friday, November 11, 2016, 22:34

Robert Kaiser: My Thoughts on Next-Generation Themes

Friday, November 11, 2016, 21:03
One of the very first steps in my Mozilla contribution story was playing around with the CSS files that styled how the early Mozilla suite looked. Because the user interface ingeniously used the same rendering engine that Mozilla needed for websites anyhow, I actually understood some of the underpinnings and could hack them myself - like changing some colors and icons into a look similar to LCARS, which I always found awesome-looking and whose creator, Mike Okuda, I have since met in person. When I later assembled that tinkering into a proper theme, I called it LCARStrek, and it is still around for Firefox and SeaMonkey today. I also did an adaptation of the theme that the Mozilla suite had in the early days, called EarlyBlue, which is only available for SeaMonkey (too much work to adapt and especially maintain it for Firefox as well right now, though that would be a fun one too).


Even LCARStrek, which I use myself on both products, is often late with new versions because it's a whole lot of work to maintain - not just because it changes the look of the browser rather radically and has a lot of details to pay attention to, but also because the current way full themes work requires me to copy a whole lot of CSS from the default theme into my own, and then painstakingly track all changes and adapt to them. With some larger work in Firefox recently and the constant flux of Developer Tools work, this is a lot of work and not a lot of fun (and I already pretty much leave out any support for devtools or devedition themes as well as lightweight/wallpaper themes). I'm not alone in this: only slightly over 30 complete themes on AMO have been updated in the last 3 months - even though you need to adapt to changes in every Firefox release, i.e. every 6-8 weeks at least.

This is something that the Mozilla teams working on theming have noticed as well, and talks have been going on for a long time about changing how themes work, both to make themes easier to maintain and to make Firefox break less badly when a theme is not updated all the time. Also, as with add-ons in general, Mozilla wants less risk of breaking people's customizations with the shift to more HTML UI (instead of XUL), Project Quantum and similar updates to Firefox's technology, and themes need to be modernized in that light as well.

Having been a theme maintainer for more than a decade and a core Mozilla contributor (even on staff for a few years), I naturally have my thoughts on what the new theme architecture should be.
As a general rule, I'd like a future theme architecture to be simple where possible, but if you want to, powerful enough to make radically different designs like LCARStrek possible.
I imagine building upon what we have for "lightweight themes" (or "wallpaper themes", called "Personas" in the past; AMO just calls them "themes" now), and extending this with functionality for changing browser colors in general, potentially for exchanging icons, and, for those that really need it, in-depth CSS-powered styling.

Here's a list of things I'd like to see in the underpinnings of this next-generation theme architecture:
  • Use CSS variables for all colors in Firefox, and expose some simple way for a theme to adjust only those colors. A lot of people will be happy with just a "wallpaper" and a changed color scheme across the whole Firefox UI (all of it, not just the browser window).
  • Make all icons SVGs (if possible), and create some way to apply the above-mentioned colors to those icons. It's so easy for icon colors to clash with theme colors; they should instead just fit into the theme's color scheme nicely.
  • Create some way to easily exchange specific icons - some themes only want to adjust certain icons and not all of them, and we also should not break when Firefox adds icons. Also, some themes only want to apply a different set of icons, e.g. to match an operating system's icon scheme; we should enable that without needing to do everything else as well.
  • For those that want to fiddle with the details, have one theme-defined CSS stylesheet in addition to (not instead of) the default theme CSS - just make sure it's always loaded after Firefox's own styles so that overriding a rule does not necessarily need !important (as the last rule of the same specificity wins). Support @document for those theme designers that want some rules to only apply in one HTML/XUL document of the product. Also, for those that want to define a whole lot of rules, it should be possible to split that one stylesheet and @import the parts into it (but let's hope that's not needed too much).
  • Pretty please make the devedition and devtools theme selections use the actual Firefox theming mechanism and not add even more complexity for theme designers to take care of.

With that structure, we'd have easy mechanisms for those that only want to change colors and/or icons, which are use cases we see a lot from what I remember in past theme discussions.
That said, we'd also have a mechanism to go and adjust all the nasty details that I know I want to have in LCARStrek - with full knowledge that anyone who uses the advanced option of the theme CSS stylesheet makes maintenance harder for themselves - but still easier than now, as loading this in addition and after the default theme CSS eliminates all the tiring porting of the rules you need anyhow and leaves the theme author with the really interesting pieces of what their theme changes in comparison to the default.

If it's possible to get the effects that I want with LCARStrek, I'll stay one of the theme authors that use a lot of the power of what the system can do - and my time spent on maintenance will still be significant, albeit definitely less than it is now. That said, if I can't achieve a look that is near to LCARS, I'll probably just not do themes again in the future. As I love this look, though, I hope the simple but powerful architecture I'd like will be implemented - that could be what I proposed above, but it could potentially be achieved in different ways as well, I guess. I care mostly about the outcome.

Let's have a Firefox that can be distinguished by how powerful its customization options are while still making it fun to maintain add-ons and themes and to develop Firefox into a more modern application for using our beloved Web!

http://home.kairo.at/blog/2016-11/my_thoughts_on_next_generation_themes


About:Community: Firefox 50 new contributors

Friday, November 11, 2016, 21:01

With the release of Firefox 50, we are pleased to welcome the 43 developers who contributed their first code change to Firefox in this release, 32 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

https://blog.mozilla.org/community/2016/11/11/firefox-50-new-contributors/


Christian Heilmann: Hacking Oredev’s after hours: Sharing our Coder Privilege (video, slides, talking points)

Friday, November 11, 2016, 19:04

The original plan at the first evening of this year’s Oredev was for me to interview Peter Sunde about the history of Pirate Bay as covered in his SmashingConf Barcelona “Technology Is Neither Good Nor Bad — You Are” talk.

As Peter couldn’t come and the massive news of the US or the voting system choosing Donald Trump as the president I quickly changed my plans. Instead, I wrote a talk explaining the very random way I got to become a professional developer and that it is our duty as privileged people now to share our knowledge with those not as lucky.

After the talk I invited a very distraught Rob Conery, author of The Imposter’s Handbook, to help share some cheerful and amusing anecdotes from his history. We ended up with some actionable ideas on how to learn more and not listen to the inner voice that keeps telling us we’re not good enough.

Here’s the video of the hour of information on Vimeo:

The slides of the talk are on Slideshare.

Here are some of the points of the slides:

Things I learned

  • Nothing can hold you back when you are good at analysing and repeating
  • Everything you see on screen came from somewhere – it is never set in stone
  • It is much more fun to explore and tweak than to get something handed to you
  • Working in a limited/unknown environment is a wonderful challenge
  • You don’t need to feel limited by the environment you target – you can use whatever you want to create for it
  • The more people do this, the more best practices can be shared.

Hello View Source

  • A big part of my success on the web was using view source and reverse engineering
  • We all did, don’t let people tell you otherwise
  • The lack of distance between creation and consumption was really down my alley…
  • These days, developer tools have replaced view source
  • We have incredible insight into what our code does in the browser
  • Of course, not everybody is ready for this…

Here is where we come in.

  • We are at the forefront of online media
  • We are creators and makers – not consumers
  • We have the privilege of open tools, an open platform and openly available documentation.

Getting started has never been easier…

  • Using GitHub, you can host your code, collaborate, execute your projects, write collaborative documentation and books…
  • Using social media we can promote these products, share knowledge and invite people to learn…

You’re building on existing solutions…

  • You don’t need to start from scratch – you can contribute to thousands of existing projects – many aimed to teach people how to become a web maker.
  • You don’t even need to code. You can help with UX, or document, or herd communities.

One main thing I learned in my whole career…

  • You learn best by teaching
  • Sharing and making people grow with you is the best feeling ever
  • If you feel down and “not good enough”, create something – anything!

Use your frustration, your anger and your deviousness for good…

  • What we need more than ever right now is education
  • Traditional education is encumbered by privilege and costs
  • We’ve been lucky – it is time we give back

The web is the most versatile and non-elite platform. Go and make your mark!

https://www.christianheilmann.com/2016/11/11/hacking-oredevs-after-hours-sharing-our-coder-privilege-video-slides-talking-points/


Mark Banner: WebExtensions: An Example Add-on Repository with Test Harnesses

Friday, November 11, 2016, 12:18

I’ve created an example repository for how you might set up tools to help development of a WebExtension. Whilst there are others around, I’ve not heard of one that includes examples of tools for testing and auditing your extension.

It is based on various ideas from projects I’ve been working alongside recently.

The repository is intended to either be used as a starting point for constructing a new WebExtension, or you can take the various components and integrate them into your own repository.

It is based around node/npm and the web-ext command line tool to keep it as simple as possible. In addition it contains setup for:

All of these are also run automatically on landing or pull request via Travis CI, with Coveralls providing code coverage reports.

Finally, there’s a tool enabled on the repository for helping to keep modules up to date.

If you find it helpful, let me know in the comment section. Please raise any issues that you find, or submit pull requests; I welcome either.

https://blog.mozilla.org/standard8/2016/11/11/webextensions-an-example-add-on-repository-with-test-harnesses/


Aki Sasaki: dephash 0.3.0

Friday, November 11, 2016, 09:02
It's been a while since I released dephash. Long enough that pip 9.0.1 and hashin 0.7.1 were released... I had to land some fixes to support those.

We now have dephash 0.3.0! (github) (changelog) (pypi)

Also, I'm now watching the github repo, so I should see any new issues or pull requests =\


http://escapewindow.dreamwidth.org/249112.html


