Planet Mozilla





Planet Mozilla - https://planet.mozilla.org/



Original source: http://planet.mozilla.org/.
This journal is generated from the public RSS feed at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The rebroadcast was created automatically at the request of readers of this RSS feed.

Ben Kero: Trying out RemixOS

Wednesday, January 13, 2016, 22:39
The RemixOS boot logo

I’ve always been one for trying out new operating systems, so when I heard news about the latest desktop-conversion effort from Jide I wanted to give it a try.

RemixOS is a proprietary offering based on the work of android-x86, a project that aims to bring the stock Android experience to commodity PCs. RemixOS adds interface and convenience changes to make the operating system more usable on PC hardware, including UI changes such as multi-window support and a classic ‘desktop’.

The Alpha for PC was released this morning, and can be downloaded here. There was also a leaked version that landed a couple of days earlier; if you’ve seen reviews online, most of them are based on that leak. What follows are my impressions of the experience.

Installation

In my effort to trial this, I’ve downloaded a copy and flashed it to a USB drive. Jide helpfully includes a Windows application to flash the ISO to a USB device. The process is simpler on Linux:

$ sudo dd if=remixos.iso of=/dev/sdb bs=1M

I normally use EFI booting on my ThinkPad and have legacy booting disabled in my BIOS. Since EFI booting is not supported in this release (although EFI/grub is included on the CD), I had to enable legacy booting.

After enabling legacy booting it was simply a matter of pressing F12 during the boot process and selecting my USB drive.

Booting

Your old pal isolinux is the first to greet you

I was greeted by an old-school isolinux boot menu asking me to choose between ‘Guest mode’ and ‘Resident mode’. Guest mode, LiveCD-style, discards all of my saved information and settings upon reboot. Resident mode saves data, although I’m not sure what mechanism it uses yet. I’m afraid it might partition and format my internal drives. Some more testing with a VM is warranted.

After choosing Guest mode, the system changed to an equally old-school Linux loading boot framebuffer. I know this is an alpha, but I hope the boot experience is something that they eventually get right. I’d love the booting experience for this to be as fast and seamless as other modern operating systems.

Not quite a seamless boot process

The system took about 30 seconds to boot, which is understandable given that it was booting from USB. After installing it on an internal SSD, though, I noticed the boot performance was equally bad. I’d love to get or make a Bootchart to see why.

Software

After watching a pulsing ‘RemixOS logo’ I was greeted with the welcome screen. All was well. My display worked at proper resolution. My keyboard and mouse both functioned as expected, albeit with an inverted scroll direction to the mouse. The important part was that everything worked.

The welcome screen asked me my locale and assisted me with WiFi setup for the first time. After a few clicks of ‘Next’ I was done.

The Desktop, apologies for the green bar (VM artifact)

What greeted me afterwards was the RemixOS desktop. It has many of the common features one comes to expect, such as a taskbar on the bottom, icons on a desktop, and a ‘start button’-esque app drawer.

The leaked pre-release copy contained the Google Play Store, which made installation of apps much easier. For the official release this has been removed, so another store must be used. I chose the open source-centric F-Droid store. Unfortunately, opening it revealed that there were no available apps. I figure this is because the host is x86 instead of the usual ARM architecture.

Overall the interface is very snappy, as it should be while running on a piece of modern hardware. Apps install very quickly, menus appear and disappear surprisingly fast, and switching between apps is instantaneous. During testing I encountered several occasions on which the screen turned black and remained unresponsive for upwards of 5 seconds. This always coincided with closing an app or switching focus to another.

A custom-written file manager is included

Some of the design elements are still geared toward mobile. Some menus and dialog boxes are not as fast to navigate with a mouse or keyboard. Keyboard shortcuts are also lacking; for instance, ctrl+L will not select the URL bar in a web browser. There is no ‘Esc’ action; instead, the key is bound to Android’s ‘Back’ button.

The settings menu

Browsers proved to be very frustrating. The stock browser would not respond to touchpad scrolling, but did respond to arrow keys. Likewise, Firefox did not respond to arrow keys, but did respond to touchpad scrolling. The overall performance of the browsers themselves was very good, but the input interaction did not have the same level of polish. Having used these browsers on mobile devices, I know this problem has to be with RemixOS’s handling of my different input devices.

Viewing Slashdot on mobile Firefox

Viewing web sites has been a crapshoot as well, since many sites will automatically serve a mobile version when they detect the string ‘Android’ in the user agent. This results in some hilarious full-screen-sized ads on a 14'' monitor. AdAway would help, but unfortunately root is not included in this ROM.

Oh god, the ads!

If I have some time later I would like to try rooting the system. I’m sad an obviously developer-oriented alpha doesn’t come with root, but it should be possible to add it myself.

Hardware Support

The test machine for this is a Lenovo Thinkpad T450s, non-touchscreen 1080p model with 12GB RAM and a 120GB SSD. This is a fairly standard piece of Broadwell hardware, so should be representative of a modern laptop’s experience.

Let’s start with the pluses. WiFi, brightness control, and volume control all worked out of the box. Hooray! S3 sleep worked out of the box, but is not triggered by closing the lid. Instead it must be selected by navigating to App Drawer -> Power -> Sleep. This is a minor annoyance. I wish this were open source so I could fix it myself.

I tested Bluetooth by pairing my laptop to my phone, then sent a picture from my phone to it. My laptop successfully received the picture and I was able to open it in a built-in photo viewer.

The battery of my ThinkPad is detected in the kernel logs, but Android is not showing a battery indicator, so I have no way of telling how much capacity is remaining. This is majorly frustrating, and something that I hope they work out in future revisions. The battery in the test laptop has standard ACPI interfaces, and should be easily detected and displayed by Android’s built-in support.

The touchpad has been infinitely frustrating. While it’s a standard Synaptics touchpad, it lacks the configuration options of a regular OS, such as the ability to disable tap-to-click. While typing this article in the environment, my cursor jumped around considerably.

Living With It

Over the few hours that I spent testing this, I get the impression that anybody attempting to live with this system in its current state is going to be frustrated by several problems. The ones that come to mind are the lack of battery status, maddening spurious tap-to-click events, and system lockups while switching apps. Again, this is an alpha, and I’m sure most if not all of these problems will be fixed in a released version.

As for me, I’ve managed to partition my laptop’s second SSD, install the system to that partition, and get it to boot. This will be featured in a future post. I look forward to attempting to root RemixOS and improve it. I only wish it were open source so I would have an easier time improving it.

http://bke.ro/trying-out-remixos/


Air Mozilla: Mozfest volunteer wrap up party

Wednesday, January 13, 2016, 22:15

Mozfest volunteer wrap up party We are holding a MozFest 2015 Volunteer Thank You party on the 13th of January. As part of this event we are running a "...

https://air.mozilla.org/mozfest-volunteer-wrap-up-party/


Air Mozilla: The Joy of Coding - Episode 40

Wednesday, January 13, 2016, 21:00

The Joy of Coding - Episode 40 mconley livehacks on real Firefox bugs while thinking aloud.

https://air.mozilla.org/the-joy-of-coding-episode-40/


Daniel Glazman: CSS Prefixed and unprefixed properties

Wednesday, January 13, 2016, 18:30

I used to be quite a heavy user of Peter Beverloo's list of CSS properties, indicating all prefixed versions, but he stopped maintaining it a while ago, unfortunately. So I wrote my own, because I still need such data for BlueGriffon... In case you also need it, it's available from http://disruptive-innovations.com/zoo/cssproperties/ and is automatically refreshed from Gecko, WebKit and Blink sources on a daily basis.

http://www.glazman.org/weblog/dotclear/index.php?post/2016/01/13/CSS-Prefixed-and-unprefixed-properties


David Burns: Marionette Executable Release v0.6.0

Wednesday, January 13, 2016, 17:48

I have just released a new version of Marionette, or rather of the executable that you need to download.

The main fixes in this release are the ability to speak to Firefox and get meaningful error messages, and making sure that we don't run commands out of sync, which was a slight oversight on our part. We have also added in getPageSource. This "native" call runs in the browser instead of trying to do it in the JavaScript sandbox, which is what a number of the drivers were attempting. This will be added to the specification very soon.

I have also landed the update to interactions to the specification. This reads much better and has prose that makes it implementable. I suspect as the likes of Google and Microsoft start looking to implement it there will be bugs that need fixing.

Since you are awesome early adopters, it would be great if you could raise bugs.

I am not expecting everything to work, but below is a quick list of things that I know don't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • I am sure there are other things we don't remember

Switching of Frames needs to be done with either a WebElement or an index. Windows can only be switched by window handles. This is currently how it has been discussed in the specification.

If in doubt, raise bugs!

Thanks for being an early adopter and thanks for raising bugs as you find them!

http://www.theautomatedtester.co.uk/blog/2016/marionette-executable-release-v0.6.0.html


Daniel Stenberg: Two years of Mozilla

Wednesday, January 13, 2016, 12:05

Today marks my two year anniversary of being employed by one of the greatest companies I’m aware of.

I get to work with open source all day, every day. I get to work for a company that isn’t driven by handing over profits to its owners for some sort of return on investment. I get to work on curl as part of my job. I get to work with internetworking, which is awesomely fun, hard, thrilling and hair-tearing all at once. I get to work with protocol standards like within the IETF and my employer can let me go to meetings. In the struggle for good, against evil and for the users of the world, I think I’m on the right side. For users, for privacy, for openness, for inclusiveness. I feel I’m a mozillian now.

So what did I achieve during my first two years with the dinosaur logo company? Not nearly enough of what I’ve wanted or possibly initially thought I would. I’ve faced a lot of tough bugs and hard challenges and I’ve landed and backed out changes all throughout this period. But I like to think that it is a net gain, and even when running head first into a wall, that can be educational; we can learn from it, and then when we take a few steps back and race forwards again we can use that knowledge and make better decisions for the future.

Future you say? Yeah, I’m heading on in the same style, without raising my focus point very much and continuously looking for my next thing very close in time. I grab issues to work on with as little foresight as possible but I completely assume they will continue to be tough nuts to crack and there will be new networking issues to conquer going forward as well. I’ll keep working on open source, open standards and a better internet for users. I really enjoy working for Mozilla!

Mozilla dinosaur head logo

http://daniel.haxx.se/blog/2016/01/13/two-years-of-mozilla/


Nick Cameron: A type-safe and zero-allocation library for reading and navigating ELF files

Wednesday, January 13, 2016, 11:32

Over the Christmas break I made a start implementing an ELF library in Rust. This was interesting for me since I've not done a lot of library work before. I hope it will be interesting for you to read about it because it shows off some of the important parts of Rust, and exercises a few dark corners you might not be familiar with.

The library itself still needs a lot of work - more at the end of the post. You can find it in the xmas-elf repo.

ELF (Executable and Linkable Format) is a binary format for storing object files - both before and after linking. It is used on Linux, BSD, and a lot of other (mostly Unix-like) platforms. An ELF file starts with a header, then has an optional program header table, then a bunch of sections, which for executables (cf. pre-linking) are grouped into segments, and finally it has a section header table. The xmas-elf library reads the binary data into Rust data structures and provides functions to navigate the sections.

Zero-allocation and type-safe libraries

One of the great advantages of Rust is that you can rely on the compiler to completely track the memory behind pointers and ensure you never have a pointer to freed memory. Whilst this is easy enough to do manually for a small program, in a large program it gets really hard. And if you are passing pointers backwards and forwards across a library boundary? Manual isn't a sane option.

With the Rust borrow checker looking after you though, you can go wild. We can take data from client code, return pointers into it, keep pointers to it hanging around, whatever. We remain safe in the knowledge that the pointers will never outlive the underlying data. That means we can avoid copying data and allocating memory and make some really fast libraries. In xmas-elf, we don't allocate any memory at all [1].

Rust (like C) allows the programmer to define the layout of data structures in memory. Furthermore, Rust data structures can be zero-overhead. That means we can just transmute (cast, in C terms) chunks of memory to these data structures and get a view of the raw data as structured data. Furthermore, Rust has value semantics for its data structures (called interior allocation), so a nested struct will reside inline in its owning struct in memory, there is no indirection via a pointer as in Java or many other high-level languages. So we can even transmute nested data structures.

The techniques we use in the library are not (memory) safe; the compiler cannot verify the code. A language that insisted all code must be safe could not use these techniques. However, Rust allows for unsafe blocks of code where we can use them. Unsafe code is not infectious - safe code can use unsafe code, but the compiler ensures that unsafety is contained, minimising the surface area for memory safety bugs.

When I (and I believe others) talk about a library being type safe, I mean something very different from when a programming language is type safe (although one could argue that if the language isn't, then the library can't be). In the library sense, I mean that one cannot use a piece of data in a way that was not intended. The classic example from traditional C libraries is where they use an integer for everything, and so a function wants a parameter which means one thing, but you pass it data about another thing (because both are just integers).

Rust lets you write type safe libraries by virtue of its rich data types (and because the language itself is strongly and statically typed). By using structs rather than offsets we prevent accessing incorrect fields, by using enums rather than integers we can't get the wrong kind of variant, and we can't get variants which don't exist, nor miss variants which do, due to exhaustive checks. The newtype pattern means that even where we must use an integer, we can make sure we don't confuse integers which mean different things.
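As a hedged illustration of the newtype idea (hypothetical names, not the real xmas-elf API), two values that are "just u32s" on disk can be wrapped in distinct types so they cannot be mixed up at call sites:

pub struct SectionIndex(pub u32);
pub struct StringTableOffset(pub u32);

fn lookup_name(string_table: &[u8], offset: StringTableOffset) -> Option<&[u8]> {
    let start = offset.0 as usize;
    if start >= string_table.len() {
        return None;
    }
    // Stop at the NUL terminator; ELF string tables are NUL-terminated.
    let len = string_table[start..].iter().position(|&b| b == 0)?;
    Some(&string_table[start..start + len])
}

A call site that accidentally passes a SectionIndex where a StringTableOffset is expected now fails to compile, even though both are plain u32s underneath.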

How xmas-elf works

We start with the input data, which is just a bunch of bytes. There is a helper function (open_file) which opens a file and reads it into a Vec, but it's not a core part of the library. We deal with the source data as a slice of bytes, &'a [u8].

Rust ensures that this slice will not outlive the original data. That 'a is an upper bound on how long anything backed by the source data can live. We use this one lifetime throughout the library, threading it around functions and data structures (all of which are generic, but basically only get this one lifetime. This would be a perfect use case for lifetime-parametric modules).

The next key technique is that we can cause Rust data structures to have a precise layout in memory. That means we can take a reference to a piece of memory and transmute it into a reference to a data structure. The data structure is backed by the memory we started with, but we use it like any other data.

Rust helps us in two ways with this: it would be dangerous to mutate such data, since others could also reference it (possibly with a different view on it). By using & references instead of &mut references, Rust ensures we won't mutate the underlying data. Secondly, we can't have these data structures outlive the underlying memory. Once that memory is freed, it would be dangerous to access the data structure. By giving the reference to the data structure the same lifetime as the memory, Rust ensures this won't happen. When we transmute some part of our &'a [u8] data into a Foo, it must have type &'a Foo. Rust then ensures we can't go wrong.

You can see the code for this here.

Of course, transmuting is unsafe and we require unsafe blocks for it. But if we don't read the memory out of bounds, and we set up the lifetimes correctly (as described above), then using the resulting data structures is perfectly safe. This is a good example of how Rust helps isolate unsafe operations into explicit unsafe blocks.
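Putting those pieces together, a minimal sketch of the approach might look like the following (hypothetical types, not the actual xmas-elf code, which covers many more fields and checks):

use std::mem;

// A hypothetical header with a fixed layout; only u8 fields, so there are
// no alignment concerns in this particular sketch.
#[repr(C)]
pub struct FileHeader {
    pub magic: [u8; 4],
    pub class: u8,
    pub data: u8,
    pub version: u8,
}

// View the start of the input as a FileHeader. The returned reference
// borrows `input`, so the borrow checker stops it outliving the buffer.
fn read_header<'a>(input: &'a [u8]) -> Option<&'a FileHeader> {
    if input.len() < mem::size_of::<FileHeader>() {
        return None;
    }
    // Sound only because we checked the length and FileHeader is a
    // #[repr(C)] struct of u8s (alignment 1, no padding, no Drop).
    Some(unsafe { &*(input.as_ptr() as *const FileHeader) })
}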

There is a bit of friction with enums. Many data types in ELF ought to be representable as enums; they use an integer for each variant, but then they reserve a bunch of integers for processor-specific use (or other uses). That means there is a range of values which are specified, but not as individual variants. There is no Rust type that matches these semantics and has the required layout in memory. Therefore we must use a two-step process.

First, we use a newtype wrapping an integer. This is used as a field in a data structure, etc., so that when we take a view on memory it gets mapped to this newtype. That gives us some degree of type safety, but doesn't get us all the way there. Whenever we access these fields, we use a getter method that converts the newtype to an enum. That enum has variants for each defined value, and also variants for each of the undefined ranges; these latter variants have a single field containing the underlying value. Because there is no way to indicate to Rust the ranges of these fields, Rust must use a discriminant in these enums (see below for more discussion), which means the in-memory representation is not the same as the underlying data (thus why we need the newtype). Using these enums should be safe and ergonomic. Unfortunately, converting between the two representations requires big, ugly match expressions (at least they are internal to the library and in no way exposed to the user).
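A hedged sketch of the two-step pattern (the values for the processor-specific range come from the ELF spec, but the definitions here are illustrative rather than the library's exact code):

// The raw newtype keeps the exact on-disk representation; the getter
// expands it into an enum that also models the reserved range.
#[repr(C)]
pub struct ShType_(pub u32);

pub enum ShType {
    Null,
    ProgBits,
    SymTab,
    // ... other defined values ...
    ProcessorSpecific(u32),
    Unknown(u32),
}

impl ShType_ {
    pub fn as_enum(&self) -> ShType {
        match self.0 {
            0 => ShType::Null,
            1 => ShType::ProgBits,
            2 => ShType::SymTab,
            0x7000_0000..=0x7FFF_FFFF => ShType::ProcessorSpecific(self.0),
            v => ShType::Unknown(v),
        }
    }
}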

(There is a language-level solution for this - range types, I don't expect to see them in the near future though, if at all).

repr and other language features

One of the key features that makes xmas-elf work is that we have guarantees about the layout of data structures in memory. This does not come by default. Unless the programmer signals otherwise, the compiler is free to optimise your data structures however it likes.

The way you specify the layout of a data structure is with repr attributes. The most common on structs is #[repr(C)], which tells the compiler to make the struct C-compatible. That is, the struct is laid out in the same way that a C compiler would lay it out. This basically means in the order that the struct is written, with padding so that fields respect their natural alignment. This is what we want for xmas-elf. In some cases, you want that layout without the padding, in which case you should use #[repr(packed)]. Note that you cannot take a reference to a field in a packed struct.

We also specify the representation of enums. For example, we use #[repr(u16)] to cause the compiler to represent the enum as an unsigned 16 bit integer. You can use any integer type for this. It only works with C-like enums. We also sometimes need to give some variants specific values, e.g., enum OsAbi { OpenBSD = 0x0C }.
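For instance, a hedged sketch of a C-like enum for the 16-bit e_machine field (the discriminant values are real ELF spec values, but this is not necessarily the library's exact definition). Viewing raw memory as such an enum is only sound when every possible on-disk value is a declared variant; otherwise the newtype-plus-getter approach above applies:

#[repr(u16)]
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum Machine {
    None = 0,
    Sparc = 2,
    X86 = 3,
    Mips = 8,
    Arm = 40,
    X86_64 = 62,
}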

Finally, we should be aware that if we implement Drop for a type, then the compiler adds a hidden field to that type called the drop flag. That should get removed from the language some day. But for now, if you implement Drop then you must take it into account when you transmute. We can't do that if we are transmuting slices of immutable binary data. Luckily, the compiler will at least warn us if a #[repr(C)] type would get a drop flag.

Slices and fat pointers

The ELF spec defines not just individual objects in the binary data, but sequences of objects. We would like to represent these as slices in Rust, and we can, but it takes a little more work than for individual objects.

We can't use fixed size arrays, because we don't know the length of these sequences statically (actually in one or two cases we do). So we must use slices which are variable length.

We start the same way as for individual objects: we have an index into the slice of binary data, and we transmute that into a pointer to the Rust type, in this case &'a [T] rather than &'a T. However, a reference to a slice is represented as a fat pointer containing the actual pointer to the data and the length of the slice. The result we have so far will have the length of the slice we transmuted (in bytes), which is not the length we want (the number of objects). So we have to set that length field. We do that via another transmute to the std::raw::Slice type, which is a struct that matches the compiler's representation of the slice pointer.

Code: parsing.rs:25:34
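A hedged sketch of the same idea, using slice::from_raw_parts (as suggested in the update at the end of this post) rather than std::raw::Slice; this is illustrative, not the library's actual code:

use std::mem;
use std::slice;

// View `count` records of type T starting at byte `offset` of the input,
// reusing the input's lifetime so the slice cannot outlive the buffer.
unsafe fn read_array<'a, T>(input: &'a [u8], offset: usize, count: usize) -> &'a [T] {
    let byte_len = count * mem::size_of::<T>();
    assert!(offset + byte_len <= input.len());
    // Caller must ensure `input` is suitably aligned for T and that any bit
    // pattern is a valid T (true for plain #[repr(C)] ELF records).
    slice::from_raw_parts(input.as_ptr().add(offset) as *const T, count)
}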

Strings

Reading a string is similar to reading a slice, except that we don't know the length in advance, we have to walk the string until we find a zero (since these are null-terminated strings). The ELF strings are ASCII strings, but since an ASCII string is bitwise-compatible with a UTF-8 (Rust) string, that aspect doesn't matter here.

Then we create the fat pointer to the string. A string is just a slice, so we can use std::raw::Slice again. This time, we make it from scratch using the pointer to the start of the string and the length we found earlier. Then we transmute the Slice to a &'a str (note that lifetime again).

Code: parsing.rs:38:49
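A safe-code sketch of the same idea (the real parsing code builds the &str via a transmuted fat pointer; the function below is only an illustration):

use std::str;

// Walk to the NUL terminator, then view the preceding bytes as a &str
// borrowing the input. ELF strings are ASCII, so the UTF-8 check cannot
// fail on well-formed input.
fn read_str<'a>(input: &'a [u8], offset: usize) -> Option<&'a str> {
    let tail = input.get(offset..)?;
    let len = tail.iter().position(|&b| b == 0)?;
    str::from_utf8(&tail[..len]).ok()
}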

32 and 64 bit versions

ELF comes in both 32 and 64 bit flavours. Most of the time, this just means that the size of some fields changes. Sometimes the ordering changes too. In the latter case we just need 32 and 64 bit versions of the data structure. I use macros to reduce some code duplication. Where just the size changes, I use generic data structures where the actual type parameters are either P32 or P64, which just alias u32 and u64. I then need an enum for the two possibilities. For example, for SectionHeader:

pub enum SectionHeader<'a> {
    Sh32(&'a SectionHeader_<P32>),
    Sh64(&'a SectionHeader_<P64>),
}

pub struct SectionHeader_<P> {
    name: u32,
    type_: ShType_,
    flags: P,
    address: P,
    ...
}

The impl for SectionHeader has getter methods for each field. There are a lot of macros for generating these getters, and for other code de-duplication.

I'm not particularly happy with this solution. There is no type-safety - we must be in either 32 or 64 bit mode, they can't be mixed, but this is not enforced by the types. And the code feels a bit crufty - it's neither elegant nor efficient.

One alternative is using a trait instead of an enum. This would skip some of the matching, but still needs a lot of boilerplate, and it doesn't address the safety question.

Going forward

This was just a toy mini-project, so I probably won't put much more effort into it. However, if anyone would find it useful, please let me know and I'll make some improvements.

Here's an incomplete list of things I'd like to work on:

  • Extract the parsing module into a crate - seems like it might be generally useful (in fact I'm using a copy and pasted version in another project already).
  • Flesh out some of the enums - I didn't add all possibilities, only the most common ones.
  • Documentation and testing - both sorely lacking.
  • Processor and OS extensions - ELF has lots of them, and implementing them is necessary for the library to be useful. I've only implemented the core spec so far.
  • Consistent naming - one of the hard bits of library design is naming. At the moment, naming is a bit inconsistent, it should be tidied up.
  • More newtypes - I use integer types in a lot of places where I should use newtype-wrapped integers.
  • What should be unsafe? - The functions in the parsing module are horribly unsafe. At the moment they are safe functions with unsafe bodies. However, they don't check their preconditions (and sometimes can't), and are vulnerable to causing memory safety errors if passed bad data. So, perhaps they should be unsafe functions. That would be much less ergonomic though. Not sure how to handle this question. (This is a relevant recent blog post).
  • The library is a bit panic-happy at the moment. I would prefer to return Result in more places.
  • See also, issue list.

Update

There were some interesting comments about this post on r/rust. Some of the improvements suggested:

  • Use iterators instead of allocating a Vec - I definitely want to do this!
  • Use slice::from_raw_parts rather than raw::Slice - I should do this too; it boils down to the same thing, but better to abstract what you can.
  • Use CStr rather than handling strings manually - currently, this would walk the string twice vs just once in the manual approach. If that gets fixed, then using CStr would be better (it abstracts away fiddly code and checks for UTF-8 compatibility).

Footnotes

  1. where we allocate

http://www.ncameron.org/blog/a-type-safe-and-zero-allocation-library-for-reading-and-navigating-elf-files/


Air Mozilla: Connected Devices: Connected Home Study

Wednesday, January 13, 2016, 01:28

Connected Devices: Connected Home Study This is part 1 of a market research project to analyze the Connected Home space and surface product opportunities. This presentation offers insight into: *...

https://air.mozilla.org/connected-home-study/


Yunier José Sosa Vázquez: Firefox OS will power the new Panasonic DX900 UHD TV

Tuesday, January 12, 2016, 19:24

Firefox will be the heart of Panasonic's next Ultra HD smart TV, as the Japanese company announced at the 2016 Consumer Electronics Show in Las Vegas. The Panasonic DX900 will feature ultra-high resolution and will be the world's first LED LCD TV with these characteristics.

Firefox OS TV home screen

Panasonic TVs powered by Firefox OS have been available globally since May 2015; with them, consumers can quickly find their favorite channels, apps, videos, web pages, and content, and pin whatever they like to their TV's home screen.

Mozilla and Panasonic have been collaborating since 2014 to offer consumers optimized, intuitive user experiences that let them enjoy the benefits of the open web platform. Firefox OS is the first truly open platform built entirely on web technologies, offering more choice and control to users, developers, and hardware manufacturers.

What's new in Firefox OS for TVs?

The most recent version of Firefox OS (2.5) is available to partners and developers with interesting new features. This update will be available for the Panasonic DX900 UHD later this year.

The update will include new ways to discover web apps and save them to your TV. Many major apps such as Vimeo, iHeartRadio, Atari, AOL, Giphy, and Hubii are excited to collaborate with Mozilla to provide apps optimized for smart TVs.

Web apps available for Firefox OS TV

The update will also bring the Firefox OS-powered Panasonic DX900 features for syncing Firefox across devices, including a "send to TV" option for easily sharing web content from Firefox for Android to a Firefox OS television.

Syncing Firefox OS. Bookmarks synced between Firefox OS TV and other devices. Sending content from the phone to the TV. Sending a web page from the phone to the TV.

Firefox OS on connected devices

Connected devices are rapidly emerging all around us. We often refer to this as the Internet of Things (IoT), and it is indeed creating a network of connected resources, much like the traditional Internet. But now the physical world is connected as well, creating the possibility of enjoying and managing our surroundings in new and interesting ways.

To create a healthy connected-device environment, Mozilla believes it is essential to build an open, independent alternative to proprietary platforms. We are committed to giving people control over their online lives and are exploring new use cases in the world of connected devices that bring greater benefits to users.

We hope to continue our partnership with developers, manufacturers, and a community that shares our open values, because we believe their support and contributions are fundamental to success. Working with Panasonic to deliver TVs that run Firefox OS is an important part of our efforts.

For more information, you can visit these pages:

Video demonstrating the Firefox OS Smart TV in action

Images of the Firefox OS Smart TV

Panasonic DX900 press release

I hope you liked the news and that we will soon have more devices running Firefox OS.

Source: The Mozilla Blog

http://firefoxmania.uci.cu/firefox-os-sera-el-motor-del-nuevo-panasonic-dx900-uhd-tv/


Chris Ilias: Do you need to install Flash anymore?

Tuesday, January 12, 2016, 19:02

If someone asks "Do I need Java?", my answer is a) most people don't need it, and b) to find out if you need it, remove it. I did that many years ago and haven't needed it. I've been hoping to reach the same point with Flash. I'd try disabling it, but there are two sites I regularly visit which sometimes require Flash – YouTube and Facebook (for videos). Last year, YouTube switched to HTML5, and recently I found that Facebook started using HTML5 for videos, so I decided to try disabling Flash again. This time, I was pleasantly surprised at how many websites no longer use Flash.

Using Firefox on a late 2013 Macbook Pro, here is a list of sites I’ve found work well with Flash disabled:

  • YouTube
  • Facebook
  • Thestar.com
  • Instagram
  • CNN
  • CNET
  • Vimeo
  • Dailymotion

There are still some holdouts. In my case, I’m really affected by CTV Toronto News requiring Flash. I also wanted to watch an episode of Comedians In Cars Getting Coffee, and that required Flash. Others:

  • BBC
  • Hulu

I emailed CTV, and here’s the response:
At this time we currently do not have any future plans to support HTML5. Regardless, your comments have been forwarded to our technical team for review.

I’ve decided to switch back to thestar.com for local [Toronto] news, now that they’re over their Rob Ford obsession.

And with that, I can keep Flash disabled. Every now and then I may require it to view some web content, but for the most part, I don’t need it.
Flash has been thought of as a must-have plugin, but after disabling it, that wasn’t the case for me. A lot of the web has already switched to HTML5. Try disabling Flash for yourself, and enjoy so much more battery life!

http://ilias.ca/blog/2016/01/do-you-need-to-install-flash-anymore/


Chris H-C: C++: Today I Learned How to Use dtor Order to Detect Temporaries

Tuesday, January 12, 2016, 00:12

On Friday I wrote the kind of C++ bug you usually write on Fridays: a stupid one. I was trying to create an object that would live exactly until the end of the block using RAII (as you do). This is how I wrote it:

RAII(parameters);

What I should have written was:

RAII someRAII(parameters);

Because, according to 12.2/3 of the ISO C++ standard, the temporary object created by the RAII construction in the first case only lasts until the end of its containing expression. In the second case, someRAII is a named object, so it lives until the end of the enclosing block.

As I had it written, the RAII would last until the semicolon. Which isn’t very long at all for something I was supposed to be using to mark a stack frame’s duration.

There should be a law against this! I thought. Why does the compiler even have that lever?

Or, more seriously, how can I stop this from happening to me again? Or to others?

This being Gecko I’m hacking on, :froydnj informed me that, indeed, there are two different ways of catching this blunder. Both happen to be documented on the same page of the MDN about how to use RAII classes in Mozilla.

The first way is adding custom type annotations to mark the class as non-temporary, then having a clang plugin throw during static analysis if any scope has a “non-temporary”-marked class being allocated as a temporary.

(( #include "mfbt/Annotations.h" and add MOZ_RAII to your class decl to use it. ))

That only works if you’re on a platform that supports clang and have static analysis turned on. This wouldn’t help me, as I’m developing Gecko on Windows (Why? A post for another time).

This brings us to the second, cross-platform way which is unfortunately only a runtime error. As a runtime error it incurs a runtime cost in CPU and memory (so it’s only compiled on debug builds) and it requires that the code actually run for the test to fail (which means you might still miss it).

This second way is a triplet of macros that annotates an RAII class to have the superpower to detect, at runtime, whether or not an instance of that class being destructed was allocated as a temporary or not.

Sound like magic? It all comes down to what order destructors are called. Take two classes, A and B such that A’s ctor takes a temporary B as a default arg:

A(B b = B())

Try allocating an instance of A on the stack and watch what order the ctors/dtors are called:

A a;
B()
A()
~B()
~A()

Allocate an A as a temporary and you get something different:

A();
B()
A()
~A()
~B()

(Here’s a full example at ideone.com so you can run and tweak it yourself)

They both start the same: you need to create a B instance to create an A instance, so b’s ctor goes first, then a’s.

b is a temporary, so what happens next is up to section 12.2 of the standard again. It lasts until the semicolon of the call to the A ctor. So in the stack case, we hit the semicolon first (~B()) then the stack frame holding a is popped (~A()).

When a is also a temporary, it gets interesting. Who goes first when there are two temporaries that need to be destructed at the same semicolon? Back to 12.2, this time to footnote 8 where it says that we go in reverse order of construction. So, since we call B() then A(), when we hit the semicolon we roll it back as ~A() then ~B().

So if ~A() happens before ~B(), then you were a temporary. If not, you weren’t. If you tell A when B is destructed, when A goes away it’ll know if it had been allocated as a temporary or not.

And, of course, this is exactly how those macros grant superpowers:

  1. MOZ_GUARD_OBJECT_PARAM puts the temporary B b = B() in your “A” class’ ctor args,
  2. MOZ_DECL_USE_GUARD_OBJECT_NOTIFIER puts a little storage on your “A” for B to use to notify your “A” when it’s been destructed.
  3. MOZ_GUARD_OBJECT_INIT tells B how to find your “A”

(( It’s all in GuardObjects.h ))

It takes what is a gotcha moment (wait, the destruction order is different when you allocate as a temporary?!) and turns it into a runtime test.

Ah, C++.

:chutten


https://chuttenblog.wordpress.com/2016/01/11/c-today-i-learned-how-to-use-dtor-order-to-detect-temporaries/


Aaron Klotz: Bugs From Hell: Injected Third-party Code + Detours = a Bad Time

Monday, January 11, 2016, 23:00

Happy New Year!

I’m finally getting ‘round to writing about a nasty bug that I had to spend a bunch of time with in Q4 2015. It’s one of the more challenging problems that I’ve had to break and I’ve been asked a lot of questions about it. I’m talking about bug 1218473.

How This All Began

In bug 1213567 I had landed a patch to intercept calls to CreateWindowEx. This was necessary because it was apparent in that bug that window subclassing was occurring while a window was neutered (“neutering” is terminology that is specific to Mozilla’s Win32 IPC code).

While I’ll save a discussion on the specifics of window neutering for another day, for our purposes it is sufficient for me to point out that subclassing a neutered window is bad because it creates an infinite recursion scenario with window procedures that will eventually overflow the stack.

Neutering is triggered during certain types of IPC calls as soon as a message is sent to an unneutered window on the thread making the IPC call. Unfortunately in the case of bug 1213567, the message triggering the neutering was WM_CREATE. Shortly after creating that window, the code responsible would subclass said window. Since WM_CREATE had already triggered neutering, this would result in the pathological case that triggers the stack overflow.

For a fix, what I wanted to do is to prevent messages that were sent immediately during the execution of CreateWindow (such as WM_CREATE) from triggering neutering prematurely. By intercepting calls to CreateWindowEx, I could wrap those calls with a RAII object that temporarily suppresses the neutering. Since the subclassing occurs immediately after window creation, this meant that this subclassing operation was now safe.

Unfortunately, shortly after landing bug 1213567, bug 1218473 was filed.

Where to Start

It wasn’t obvious where to start debugging this. While a crash spike was clearly correlated with the landing of bug 1213567, the crashes were occurring in code that had nothing to do with IPC or Win32. For example, the first stack that I looked at was js::CreateRegExpMatchResult!

When it is just not clear where to begin, I like to start by looking at our correlation data in Socorro – you’d be surprised how often they can bring problems into sharp relief!

In this case, the correlation data didn’t disappoint: there was 100% correlation with a module called _etoured.dll. There was also correlation with the presence of both NVIDIA video drivers and Intel video drivers. Clearly this was a concern only when NVIDIA Optimus technology was enabled.

I also had a pretty strong hypothesis about what _etoured.dll was: For many years, Microsoft Research has shipped a package called Detours. Detours is a library that is used for intercepting Win32 API calls. While the changelog for Detours 3.0 points out that it has “Removed [the] requirement for including detoured.dll in processes,” in previous versions of the package, this library was required to be injected into the target process.

I concluded that _etoured.dll was most likely a renamed version of detoured.dll from Detours 2.x.

Following The Trail

Now that I knew the likely culprit, I needed to know how it was getting there. During a November trip to the Mozilla Toronto office, I spent some time debugging a test laptop that was configured with Optimus.

Knowing that the presence of Detours was somehow interfering with our own API interception, I decided to find out whether it was also trying to intercept CreateWindowExW. I launched windbg, started Firefox with it, and then told it to break as soon as user32.dll was loaded:


sxe ld:user32.dll

Then I pressed F5 to resume execution. When the debugger broke again, this time user32 was now in memory. I wanted the debugger to break as soon as CreateWindowExW was touched:


ba w 4 user32!CreateWindowExW

Once again I resumed execution. Then the debugger broke on the memory access and gave me this call stack:


nvd3d9wrap!setDeviceHandle+0x1c91
nvd3d9wrap!initialise+0x373
nvd3d9wrap!setDeviceHandle+0x467b
nvd3d9wrap!setDeviceHandle+0x4602
ntdll!LdrpCallInitRoutine+0x14
ntdll!LdrpRunInitializeRoutines+0x26f
ntdll!LdrpLoadDll+0x453
ntdll!LdrLoadDll+0xaa
mozglue!`anonymous namespace'::patched_LdrLoadDll+0x1b0
KERNELBASE!LoadLibraryExW+0x1f7
KERNELBASE!LoadLibraryExA+0x26
kernel32!LoadLibraryA+0xba
nvinit+0x11cb
nvinit+0x5477
nvinit!nvCoprocThunk+0x6e94
nvinit!nvCoprocThunk+0x6e1a
ntdll!LdrpCallInitRoutine+0x14
ntdll!LdrpRunInitializeRoutines+0x26f
ntdll!LdrpLoadDll+0x453
ntdll!LdrLoadDll+0xaa
mozglue!`anonymous namespace'::patched_LdrLoadDll+0x1b0
KERNELBASE!LoadLibraryExW+0x1f7
kernel32!BasepLoadAppInitDlls+0x167
kernel32!LoadAppInitDlls+0x82
USER32!ClientThreadSetup+0x1f9
USER32!__ClientThreadSetup+0x5
ntdll!KiUserCallbackDispatcher+0x2e
GDI32!GdiDllInitialize+0x1c
USER32!_UserClientDllInitialize+0x32f
ntdll!LdrpCallInitRoutine+0x14
ntdll!LdrpRunInitializeRoutines+0x26f
ntdll!LdrpLoadDll+0x453
ntdll!LdrLoadDll+0xaa
mozglue!`anonymous namespace'::patched_LdrLoadDll+0x1b0
KERNELBASE!LoadLibraryExW+0x1f7
firefox!XPCOMGlueLoad+0x23c
firefox!XPCOMGlueStartup+0x1d
firefox!InitXPCOMGlue+0xba
firefox!NS_internal_main+0x5c
firefox!wmain+0xbe
firefox!__tmainCRTStartup+0xfe
kernel32!BaseThreadInitThunk+0xe
ntdll!__RtlUserThreadStart+0x70
ntdll!_RtlUserThreadStart+0x1b

This stack is a gold mine of information. In particular, it tells us the following:

  1. The offending DLLs are being injected by AppInit_DLLs (and in fact, Raymond Chen has blogged about this exact case in the past).

  2. nvinit.dll is the name of the DLL that is injected by step 1.

  3. nvinit.dll loads nvd3d9wrap.dll which then uses Detours to patch our copy of CreateWindowExW.

I then became curious as to which other functions they were patching.

Since Detours is patching executable code, we know that at some point it is going to need to call VirtualProtect to make the target code writable. In the worst case, VirtualProtect’s caller is going to pass the address of the page where the target code resides. In the best case, the caller will pass in the address of the target function itself!

I restarted windbg, but this time I set a breakpoint on VirtualProtect:


bp kernel32!VirtualProtect

I then resumed the debugger and examined the call stack every time it broke. While not every single VirtualProtect call would correspond to a detour, it would be obvious when it was, as the NVIDIA DLLs would be on the call stack.

The first time I caught a detour, I examined the address being passed to VirtualProtect: I ended up with the best possible case: the address was pointing to the actual target function! From there I was able to distill a list of other functions being hooked by the injected NVIDIA DLLs.

Putting it all Together

By this point I knew who was hooking our code and knew how it was getting there. I also noticed that CreateWindowEx is the only function that the NVIDIA DLLs and our own code were both trying to intercept. Clearly there was some kind of bad interaction occurring between the two interception mechanisms, but what was it?

I decided to go back and examine a specific crash dump. In particular, I wanted to examine three different memory locations:

  1. The first few instructions of user32!CreateWindowExW;
  2. The first few instructions of xul!CreateWindowExWHook; and
  3. The site of the call to user32!CreateWindowExW that triggered the crash.

Of those three locations, the only one that looked off was location 2:


6b1f6611 56              push    esi
6b1f6612 ff15f033e975    call    dword ptr [combase!CClassCache::CLSvrClassEntry::GetDDEInfo+0x41 (75e933f0)]
6b1f6618 c3              ret
6b1f6619 7106            jno     xul!`anonymous namespace'::CreateWindowExWHook+0x6 (6b1f6621)
xul!`anonymous namespace'::CreateWindowExWHook:
6b1f661b cc              int     3
6b1f661c cc              int     3
6b1f661d cc              int     3
6b1f661e cc              int     3
6b1f661f cc              int     3
6b1f6620 cc              int     3
6b1f6621 ff              ???

Why the hell were the first six bytes filled with breakpoint instructions?

I decided at this point to look at some source code. Fortunately Microsoft publishes the 32-bit source code for Detours, licensed for non-profit use, under the name “Detours Express.”

I found a copy of Detours Express 2.1 and checked out the code. First I wanted to know where all of these 0xcc bytes were coming from. A quick grep turned up what I was looking for:

detours.cpp, lines 93-99:

inline PBYTE detour_gen_brk(PBYTE pbCode, PBYTE pbLimit)
{
    while (pbCode < pbLimit) {
        *pbCode++ = 0xcc;   // brk;
    }
    return pbCode;
}

Now that I knew which function was generating the int 3 instructions, I then wanted to find its callers. Soon I found:

detours.cpp, lines 1247-1251:

#ifdef DETOURS_X86
    pbSrc = detour_gen_jmp_immediate(pTrampoline->rbCode + cbTarget, pTrampoline->pbRemain);
    pbSrc = detour_gen_brk(pbSrc,
                           pTrampoline->rbCode + sizeof(pTrampoline->rbCode));
#endif // DETOURS_X86

Okay, so Detours writes the breakpoints out immediately after it has written a jmp pointing to its trampoline.

Why is our hook function being trampolined?

The reason must be because our hook was installed first! Detours has detected that and has decided that the best place to trampoline to the NVIDIA hook is at the beginning of our hook function.

But Detours is using the wrong address!

We can see that because the int 3 instructions are written out at the beginning of CreateWindowExWHook, even though there should be a jmp instruction first.

Detours is calculating the wrong address to write its jmp!

Finding a Workaround

Once I knew what the problem was, I needed to know more about the why – only then would I be able to come up with a way to work around this problem.

I decided to reconstruct the scenario where both our code and Detours are trying to hook the same function, but our hook was installed first. I would then follow along through the Detours code to determine how it calculated the wrong address to install its jmp.

The first thing to keep in mind is that Mozilla’s function interception code takes advantage of hot-patch points in Windows. If the target function begins with a mov edi, edi prolog, we use a hot-patch style hook instead of a trampoline hook. I am not going to go into detail about hot-patch hooks here – the above Raymond Chen link contains enough details to answer your questions. For the purposes of this blog post, the important point is that Mozilla’s code patches the mov edi, edi, so NVIDIA’s Detours library would need to recognize and follow the jmps that our code patched in, in order to write its own jmp at CreateWindowExWHook.

Tracing through the Detours code, I found the place where it checks for a hot-patch hook and follows the jmp if necessary. While examining a function called detour_skip_jmp, I found the bug:

detours.cpp, line 124:

        pbNew = pbCode + *(INT32 *)&pbCode[1];

This code is supposed to be telling Detours where the target address of a jmp is, so that Detours can follow it. pbNew is supposed to be the target address of the jmp. pbCode is referencing the address of the beginning of the jmp instruction itself. Unfortunately, with this type of jmp instruction, target addresses are always relative to the address of the next instruction, not the current instruction! Since the current jmp instruction is five bytes long, Detours ends up writing its jmp five bytes prior to the intended target address!
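To make the off-by-five concrete, here is a small sketch of the correct target calculation for an E9-style jmp (written in Rust purely for illustration; it is neither Mozilla's nor Detours' code): the 32-bit displacement is relative to the address of the next instruction, and the instruction itself is 5 bytes long (1 opcode byte plus 4 displacement bytes).

fn jmp_rel32_target(jmp_address: u64, code: &[u8]) -> Option<u64> {
    // Only handle the 5-byte E9 rel32 form of jmp.
    if code.len() < 5 || code[0] != 0xE9 {
        return None;
    }
    let disp = i32::from_le_bytes([code[1], code[2], code[3], code[4]]) as i64;
    // Target = address of the next instruction (jmp_address + 5) + displacement.
    Some((jmp_address as i64 + 5 + disp) as u64)
}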

I went and checked the source code for Detours Express 3.0 to see if this had been fixed, and indeed it had:

detours.cpp, line 163:

        PBYTE pbNew = pbCode + 5 + *(INT32 *)&pbCode[1];

That doesn’t do much for me right now, however, since the NVIDIA stuff is still using Detours 2.x.

In the case of Mozilla's code, there is legitimate executable code at that incorrect address that Detours writes to. It corrupts the last few instructions of that function, which explains those mysterious crashes in seemingly unrelated code.

I confirmed this by downloading the binaries from the build that was associated with the crash dump that I was analyzing. [As an aside, I should point out that you need to grab the identical binaries for this exercise; you cannot build from the same source revision and expect this to work due to variability that is introduced into builds by things like PGO.]

The five bytes preceding CreateWindowExWHook in the crash dump diverged from those same bytes in the original binaries. I could also make out that the overwritten bytes consisted of a jmp instruction.

In Summary

Let us now review what we know at this point:

  • Detours 2.x doesn’t correctly follow jmps from hot-patch hooks;
  • If Detours tries to hook something that has already been hot-patched (including legitimate hot patches from Microsoft), it will write bytes at incorrect addresses;
  • NVIDIA Optimus injects this buggy code into everybody’s address spaces via an AppInit_DLLs entry for nvinit.dll.

How can we best distill this into a suitable workaround?

One option could be to block the NVIDIA DLLs outright. In most cases this would probably be the simplest option, but I was hesitant to do so this time. I was concerned about the unintended consequences of blocking what, for better or worse, is a user-mode component of NVIDIA video drivers.

Instead I decided to take advantage of the fact that we now know how this bug is triggered. I have modified our API interception code such that if it detects the presence of NVIDIA Optimus, it disables hot-patch style hooks.

Not only will this take care of the crash spike that started when I landed bug 1213567, I also expect it to take care of other crash signatures whose relationship to this bug was not obvious.

That concludes this episode of Bugs from Hell. Until next time…

http://dblohm7.ca/blog/2016/01/11/bugs-from-hell-injected-third-party-code-plus-detours-equals-a-bad-time/


Air Mozilla: Mozilla Weekly Project Meeting, 11 Jan 2016

Monday, January 11, 2016, 22:00

Mozilla WebDev Community: Extravaganza – January 2016

Monday, January 11, 2016, 21:58

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Verbatim is Dead! Long Live Pontoon!

First up was Osmose (that’s me!), sharing the exciting news that Verbatim, otherwise known as localize.mozilla.org, has been decommissioned and replaced by Pontoon! Verbatim was an extremely out-of-date instance of the Pootle translation software. Mozilla websites that wish to translate their content are now encouraged to contact the Pontoon team when they want to enable the L10n community to translate their site.

Mozilla.org Dual SHA1/SHA256 Certificate Negotiation

Next was jgmize with info about www.mozilla.org‘s recent work around enabling both SHA1 and SHA256 certificates to be used on the site. Firefox is phasing out support for SHA1 certificates along with the other major browsers, but users on older browsers need to be able to download new versions of Firefox from www.mozilla.org. Some of these older versions are from before browsers supported SHA256 certificates. In order to avoid leaving these users stuck without a way to get a modern browser, www.mozilla.org needs to be able to fall back to SHA1 certificates when necessary.

Happily, www.mozilla.org is now correctly using a SHA256 certificate and falling back to a SHA1 certificate for users whose browser does not support SHA256 certificates, thanks to our CloudFlare CDN.

Peep Now Compatible with Pip 7

ErikRose shared the welcome news that Peep, the wrapper for Pip that supports hash verification of downloaded dependencies, now supports Pip 7 correctly! He also reminded us that pip 8 will obsolete Peep as it will have hash verification built-in.

Web App Developer Initiative Sites

Next was bwalker with a list of websites and tools that the Web App Developer Initiative team shipped in the last quarter:

DXR Updates

ErikRose shared a slew of DXR updates:

  • Recognizing overrides of virtual methods
  • Recognizing multiple directly overridden virtual methods
  • The ability to index Cargo packages
  • Not offering sub/superclass search when none exist

Thanks to Tom Klein and Nick Cameron for these updates!

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

Product-Details Supports Python 2 and 3 and Django 1.9

Pmac shared the news that django-mozilla-product-details now supports Python 2 and 3 simultaneously, as well as supporting Django 1.9. He also shared the slightly-older news that the library supports optionally storing the data in the database instead of the filesystem.

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.

  • jpetto is switching from a contractor to a full-time web developer on the Engagement Web Development team!
  • davidwalsh is moving to the Web App Developer Initiative team!
  • Osmose is moving to the Support Engineering team working on Input, SUMO, and other projects!

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Docker Practices for Development

ErikRose closed us out with a few tips for using Docker for developing a website:


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

https://blog.mozilla.org/webdev/2016/01/11/extravaganza-january-2016/


Myk Melez: URL Has Been Changed

Monday, January 11, 2016, 19:57
The URL you have reached, http://mykzilla.blogspot.com/, has been changed. The new URL is https://mykzilla.org/. Please make a note of it.

http://mykzilla.blogspot.com/2016/01/url-has-been-changed.html


Byron Jones: happy bmo push day!

Monday, January 11, 2016, 18:13

the following changes have been pushed to bugzilla.mozilla.org:

  • [1233878] tracking flags don’t show up in the view of the bug right after filing
  • [1224001] Add push connector for Aha.io
  • [1237188] add support for servicenow to the ‘see also’ field
  • [1236955] [form.mdn] Please add component drop-down to custom form
  • [1232913] The attachment links don’t look like links
  • [1237185] hide ‘cab review’ custom field behind a “click through” to direct people to servicenow

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

https://globau.wordpress.com/2016/01/11/happy-bmo-push-day-165/


Julien Pagès: mozregression – Engineering Productivity Project of the Month

Monday, January 11, 2016, 17:47

Hello from Engineering Productivity! Once a month we highlight one of our projects to help the Mozilla community discover a useful tool or an interesting contribution opportunity.

This month’s project is mozregression!

Why is mozregression useful?

mozregression helps find regressions in Mozilla projects such as Firefox or Firefox for Android. It downloads and runs the builds between two dates (or changesets) known to be good and bad, and lets you test each build so that bisection narrows the regression down to the smallest possible range of changesets.

It does not build the application under test locally; instead it uses pre-built files, making it fast and easy for everyone to look for the origin of a regression.

Examples:

# Search a Firefox regression in mozilla-central starting from 2016-01-01

mozregression -g 2016-01-01

# Firefox regression, on mozilla-aurora from 2015-09-01 to 2015-10-01

mozregression --repo aurora -g 2015-09-01 -b 2015-10-01

# Look for a regression in fennec (firefox for android)

mozregression --app fennec -g 2015-09-01 -b 2015-10-01

# regression on firefox in inbound, using debug builds and starting from known changesets

mozregression -b 6f4da397ac3c -g eb4dc9b5a928 -B debug --repo inbound

Note that a graphical interface is also available.

Where do I find mozregression?

Start with:
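
A typical way to get it, assuming a working Python and pip setup:

$ pip install -U mozregression
$ mozregression --help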

What are we working on?

Currently mozregression is improving in multiple areas, among them:

Contributions

William Lachance (:wlach) and I (:parkouss) are the current maintainers of mozregression.

We welcome contributors! Mike Ling has been helping the project for quite some time now, adding useful features and fixing various bugs – he’s currently working on providing ready-to-use binaries for Mac OS X. A big thanks to Mike Ling for your contributions!

Also thanks to Saurabh Singhal and Wasif Hider, who are recent contributors to the graphical user interface.

If you want to contribute as a developer or help us with the documentation, please say hi in the #ateam IRC channel!

Reporting bugs / new ideas

You can also help a lot by reporting bugs or new ideas! Please file bugs on bugzilla with the mozregression component:

https://bugzilla.mozilla.org/enter_bug.cgi?product=Testing&component=mozregression

mozregression’s bug list:

https://bugzilla.mozilla.org/buglist.cgi?component=mozregression&bug_status=__open__&product=Testing


For more information about all Engineering Productivity projects visit our wiki. If you’re interested in helping out, the A-Team bootcamp has resources for getting started.


https://parkouss.wordpress.com/2016/01/11/mozregression-engineering-productivity-project-of-the-month/


Gregory Szorc: Investing in the Firefox Build System in 2016

Monday, January 11, 2016, 17:20

Most of Mozilla gathered in Orlando in December for an all hands meeting. If you attended any of the plenary sessions, you probably heard people like David Bryant and Lawrence Mandel make references to improving the Firefox build system and related tools. Well, the cat is out of the bag: Mozilla will be investing heavily in the Firefox build system and related tooling in 2016!

In the past 4+ years, the Firefox build system has mostly been held together and incrementally improved by a loose coalition of people who cared. We had a period in 2013 where a number of people were making significant updates (this is when moz.build files happened). But for the past 1.5+ years, there hasn't really been a coordinated effort to improve the build system - just a lot of one-off tasks and (much-appreciated) individual heroics. This is changing.

Improving the build system is a high priority for Mozilla in 2016. And investment has already begun. In Orlando, we had a marathon 3-hour meeting to plan work for Q1. At least 8 people have committed to projects in Q1.

The focus of work is split between immediate short-term wins and longer-term investments. We also decided to prioritize the Firefox and Fennec developer workflows (over Gecko/Platform) as well as the development experience on Windows. This is because these areas have been under-loved and therefore have more potential for impact.

Here are the projects we're focusing on in Q1:

  • Turnkey artifact-based builds for Firefox and Fennec (download pre-built binaries so you don't have to spend 20 minutes compiling C++; see the sketch after this list)
  • Running tests from the source directory (so you don't have to copy tens of thousands of files to the object directory)
  • Speed up configure / prototype a replacement
  • Telemetry for mach and the build system
  • NSPR, NSS, and (maybe) ICU build system rewrites
  • mach build faster improvements
  • Improvements to build rules used for building binaries (enables non-make build backends)
  • mach command for analyzing C++ dependencies
  • Deploy automated testing for mach bootstrap on TaskCluster
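
For the artifact-build item above, here is a minimal sketch of how a developer would opt in, assuming the mozconfig option keeps its current name:

# mozconfig
ac_add_options --enable-artifact-builds

$ ./mach build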

Work has already started on a number of these projects. I'm optimistic 2016 will be a watershed year for the Firefox build system and the improvements will have a drastic positive impact on developer productivity.

http://gregoryszorc.com/blog/2016/01/11/investing-in-the-firefox-build-system-in-2016


QMO: Firefox 44.0 Beta 7 Testday Results

Monday, January 11, 2016, 17:17

Greetings Mozilla friends!

Last Friday, January 8th, we held the Firefox 44.0 Beta 7 Testday, and it was another successful event!

Many thanks go out to Bolaram Paul, Iryna Thompson, Rony De León (ronyyC2), Ilse Macías, Jayesh KR, gaby2300, Timolawl, PreethiDhinesh and to our awesome Bangladesh Community: Hossain Al Ikram, Mohammad Maruf Islam, Asiful Kabir Heemel, Nazir Ahmed Sabbir, Amlan Biswas, Ashickur Rahman, Sajedul Islam, Kazi Sakib Ahmad, Rezaul Huque Nayeem, Khalid Syfullah Zaman, Tahsan Chowdhury Akash, Mohammed Jawad Ibne Ishaque, Md. Almas Hossain, Md. Rahimul Islam, Md. Arifur Rahman, Anika Nawar, Fazle Rabbi, Saddam Hossain, Fahmida Noor, Ashraful Islam Chowdhury, Shaily Roy, Sayed Ibn Masud, Forhad Hossain, Md. Ehsanul Hassan, Kazi Nuzhat Tasnem, Md. Faysal Alam Riyad, Mubina Rahaman Jerin, Mohammad Kamran Hossain, Aanhiat, Md. Badiuzzaman Pranto, T.M. Sazzad Hossain, Sashoto Seeam, MD. Nazmus Shakib (Robin), Md. Tarikul Islam Oashi, Asif Mahmud Shuvo, Karimun Nahar Nourin and Shahadat Hossain for getting involved – your help is always greatly appreciated!

Also a big thank you goes to all our active moderators.

Results:

Keep an eye on QMO for upcoming events! 

https://quality.mozilla.org/2016/01/firefox-44-0-beta-7-testday-results/


Daniel Stenberg: Tales from my inbox, part++

Monday, January 11, 2016, 16:30

“Josh” sent me an email. Pardon the language here but I decided to show the mail body unaltered:

From: Josh Yanez 
Date: Wed, 6 Jan 2016 22:27:13 -0800
To: daniel
Subject: Hey fucker

I got all your fucking info either you turn yourself in or ill show it to the police. You think I'm playing try me I got all your stupid little coding too.

Sent from my iPhone

This generates so many questions

  1. I’ve had threats mailed to me before (and even made over the phone), so this is far from the first time. The few times I’ve bothered to actually try to understand what these people are hallucinating about, it usually turns out that they’ve discovered they were hacked or targeted in some sort of attack in which curl was used, and since I am the main author, I’m the bad guy.
  2. He has all my “info” and my “stupid little coding too”? What “coding” could that be? What is all my info?
  3. Is this just some kind of spam that wants me to reply? It is directed at me only, and I’ve not heard of anyone else who got a similar mail.
  4. The lovely “Sent from my iPhone” signature is sort of hilarious too after such an offensive message.

Fully aware that this could just as well suck me into a deep and dark hole of sadness, I was too curious to resist, so I responded. Unfortunately I didn’t get anything further back, so the story ends here, a bit abruptly. :-(

http://daniel.haxx.se/blog/2016/01/11/tales-from-my-inbox-part/


