Planet Mozilla - https://planet.mozilla.org/



Original source: http://planet.mozilla.org/.
This feed was generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.

Daniel Pocock: Comparison of free, open source accounting software

Wednesday, December 9, 2015, 00:17

There are a diverse range of free software solutions for accounting.

Personally, I have been tracking my personal and business accounts with a double-entry accounting system since I started freelancing, around the same time I started university. Once you become familiar with double-entry accounting (which requires little more than basic arithmetic and remembering the distinction between a debit and a credit), it is unlikely you will ever want to go back to a spreadsheet.
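To show how little machinery double-entry bookkeeping actually needs, here is a minimal sketch (all names are hypothetical, not from any of the products discussed): every transaction is a set of postings whose debits must equal its credits, and account balances are just running sums.

```javascript
// Minimal double-entry ledger sketch. A transaction is a list of
// postings; it is only accepted if total debits equal total credits.
function postTransaction(ledger, postings) {
  const total = side => postings
    .filter(p => p.side === side)
    .reduce((sum, p) => sum + p.amount, 0);
  if (total("debit") !== total("credit")) {
    throw new Error("unbalanced transaction");
  }
  for (const p of postings) {
    // Convention: debits increase a balance, credits decrease it.
    const delta = p.side === "debit" ? p.amount : -p.amount;
    ledger[p.account] = (ledger[p.account] || 0) + delta;
  }
  return ledger;
}

// Paying 100 of rent from the bank: debit the expense, credit the asset.
const ledger = postTransaction({}, [
  { account: "Expenses:Rent", side: "debit", amount: 100 },
  { account: "Assets:Bank", side: "credit", amount: 100 },
]);
```

A spreadsheet can mimic this layout, but nothing in it enforces the balancing invariant; enforcing it is exactly what dedicated accounting software does for you.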

Accounting software promoted for personal/home users often provides a very basic ledger where you can distinguish how much cash goes to rent, how much to food and how much to the tax man. Software promoted for business goes beyond the core ledger functionality and provides helpful ways to keep track of which bills you already paid, which are due imminently and which customers haven't paid you. Even for a one-man-band, freelancer or contractor, using a solution like this is hugely more productive than trying to track bills in a spreadsheet.

Factors to consider when choosing a solution

Changing accounting software can be a time consuming process and require all the users to learn a lot of new things. Therefore, it is generally recommended to start with something a little more powerful than what you need in the hope that you will be able to stick with it for a long time. With proprietary software this can be difficult because the more advanced solutions cost more money than you might be willing to pay right now. With free software, there is no such limitation and you can start with an enterprise-grade solution from day one and just turn off or ignore the features you don't need yet.

If you are working as an IT consultant or freelancer and advising other businesses then it is also worthwhile to choose a solution for yourself that you can potentially recommend to your clients and customize for them.

The comparison

Here is a quick comparison of some of the free software accounting solutions that are packaged on popular Linux distributions like Debian, Ubuntu and Fedora:

Product         Postbooks  GnuCash  LedgerSMB  HomeBank  Skrooge  KMyMoney  BG Financas  Grisbi
GUI                 Y         Y         N          Y         Y        Y          Y          Y
Web UI              Y         N         Y          N         N        N          N          N
Multi-user          Y         N         Y          N         N        N          N          Y
File storage        N         Y         N          Y         Y        Y          N          N
SQL storage         Y         Y         Y          N         N        Y          Y          Y
Multi-currency      Y         Y         Y          N         Y        Y          Y          ?
A/R                 Y         Y         Y          N         Y        Y          Y          ?
A/P                 Y         Y         Y          N         Y        Y          Y          ?
VAT/GST             Y         Y         Y          N         N        Y          Y          ?
Inventory           Y         N         Y          N         N        N          ?          ?

The table doesn't consider Odoo (formerly OpenERP) because the packages were considered buggy and are not maintained any more. Compiere and Adempiere are other well known solutions but they haven't been packaged at all.

Features in detail

While the above list gives a basic summary of features, it is necessary to look more closely at how they are implemented.

For example, if you need to report on VAT or GST, there are two methods of reporting: cash or accrual. Some products only support accruals because that is easier to implement. Even in commercial products that support cash-based VAT reporting, the reports are not always accurate (I've seen that problem with the proprietary Quickbooks software) and a tax auditor will be quick to spot such errors.

The only real way to get to know one of these products is to test it for a couple of hours. Postbooks, for example, provides a demo database so you can test it with dummy data without making any real commitment.

User interface choices

If you need to support users on multiple platforms or remote users such as an accountant or book-keeper, it is tempting to choose a solution with a web interface. The solutions with desktop interfaces can be provisioned to remote users using a terminal-server setup.

The full GUI solutions tend to offer a richer user interface and reporting experience. It can frequently be useful to have multiple windows or reports open at the same time; doing this with browser tabs can be painful.

File or database storage

There are many good reasons to use database storage and my personal preference is for PostgreSQL.

Using a database allows you to run a variety of third-party reporting tools and write your own scripts for data import and migration.

Community and commercial support

When dealing with business software, it is important to look at both the community and the commercial support offerings that are available.

Some communities have events, such as xTupleCon for Postbooks or a presence at other major events like FOSDEM.

Summary

My personal choice at the moment is Postbooks from xTuple. This is because of a range of factors, including the availability of both web and desktop clients, true multi-user support, the multi-currency support and the PostgreSQL back-end.

http://danielpocock.com/comparison-of-free-open-source-accounting-software


Just Browsing: The Human Brain is a Vital Front-End Component

Tuesday, December 8, 2015, 18:57

The Human Brain is a Vital Front-End Component

It was around 2011 and I was working for a small company with the ambitious goal of building the world's sexiest cloud CRM product. To build a great product, you need great technologies. Fortunately, one of my colleagues was a tech maven who tried hard to stay on top of the latest trends in software development.

One day he came back from London, having attended a course by some guy called Greg Young. The course was about Command Query Responsibility Segregation (CQRS), Domain-Driven Design (DDD) and Event Sourcing (ES). A detailed description of these approaches would make this already long post far longer, but I strongly urge you to follow these links to find out more.

The goal of these complementary approaches is to make hard choices about software architecture easier by establishing strict patterns about where to put your code. CQRS says essentially that we should have a regimented flow of data through the application, with writing data ("Commands") strictly separated from reading data ("Queries"). According to ES, all changes to application state are triggered by a stream of "Events" that can be logged, rolled back and replayed.
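The Event Sourcing side of this can be sketched in a few lines of JavaScript (the event types here are hypothetical): state is never mutated directly, it is rebuilt by folding the logged Events, so replaying a prefix of the log recovers any earlier state.

```javascript
// Event-sourcing sketch: the event log is the source of truth, and
// current state is derived by replaying it through a pure reducer.
function replay(events, initial) {
  return events.reduce((state, event) => {
    switch (event.type) {
      case "DEPOSITED": return { balance: state.balance + event.amount };
      case "WITHDREW":  return { balance: state.balance - event.amount };
      default:          return state;
    }
  }, initial);
}

const log = [
  { type: "DEPOSITED", amount: 100 },
  { type: "WITHDREW",  amount: 30 },
  { type: "DEPOSITED", amount: 5 },
];

const current = replay(log, { balance: 0 });                // balance: 75
const rolledBack = replay(log.slice(0, 2), { balance: 0 }); // balance: 70
```

Because the reducer is pure, logging, rollback and replay all fall out of the same mechanism: truncate or extend the log and replay.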

Anyway, my colleague came back from London full of excitement. I'm not sure he fully grasped all the details of Greg Young's work, but the revolutionary potential of his approach was clear to both of us. This was in the early days of "full-stack development", and I was lucky enough to be working on both the back-end and front-end. But while Greg was targeting his ideas at complex server-side code written in Java or Scala, it seemed to me that they should be applicable on the front-end tier as well.

Unfortunately, at the time ExtJS was just about state-of-the-art for front-end development. (Boy am I glad those days are gone.) And ExtJS made it too hard to adopt alternate application architectures, particularly those with a dramatically different way of thinking.

Unidirectional Data Flow

Now it's 2015, and suddenly everyone is being inspired by functional programming concepts. FP languages are getting more and more popular (hello Clojure!). We have Elm, Om and even the amazing Babel transpiler, which has led to huge practical improvements in JavaScript, particularly when it comes to FP. Therefore all the hype is slowly creeping into the JavaScript ecosystem, and those fancy academic concepts are going mainstream. People are even stealing ideas from Haskell, which is really cool.

The most important thing to take hold of the software development world in recent years is a concept called "unidirectional data flow". Facebook was the pioneer in bringing this concept to JavaScript with its Flux architecture, which it recommends as a way to build applications around its React UI framework.

The first time I saw Flux, I realized: it's here! Finally, my beloved CQRS/ES/DDD had made its way onto the front-end! Formally, it wasn't a strict implementation of those techniques, but it was clearly heavily inspired by them.

I attended a few presentations where people claimed Flux was a form of CQRS, yet almost nobody mentioned Event Sourcing. Whenever I asked a presenter why they saw Flux as a form of CQRS, the answer was always the same: "Actions are Commands and Stores are accessed using Queries".

Commands are not Events

You should distinguish between those two terms: the former stands for the intention to do something, the latter indicates that this intention has been carried out. Typically, a Command may result in one or many Events.

Front-end development is naturally event-oriented, as any interaction with the UI is basically an event. Think about it: whenever you want to observe some interaction with the UI, you always have to attach an event listener (addEventListener).

But it's not only about Event listeners. In a well-designed front-end application with good UX, you never throw an exception while handling an Event, since this wouldn't provide the user with any feedback. Showing an appropriate error message is a UI response as well. Therefore you can't "deny" the fact that some Event has occurred by throwing an exception. It has happened, and there is no way to take it back unless the user performs some counter-action... which is again an Event.
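A sketch of this idea, with a hypothetical state shape: the handler never throws; a failed precondition simply becomes more UI state, because the Event itself cannot be denied.

```javascript
// Front-end sketch: handling an Event never throws. If a precondition
// fails, the "denial" is expressed as UI state (an error message).
function handleProductClicked(state, event) {
  if (!state.available.includes(event.product)) {
    // The click still happened; the refusal is itself UI feedback.
    return { ...state, error: `${event.product} is unavailable` };
  }
  return {
    ...state,
    items: [...state.items, event.product],
    error: null,
  };
}

const initial = { available: ["widget"], items: [], error: null };
const ok = handleProductClicked(initial, { product: "widget" });
const denied = handleProductClicked(initial, { product: "gadget" });
// ok.items contains "widget"; denied carries an error message instead
```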

We think of interaction on the back-end differently. Let's take creating an order as an example. Creating an order on the back-end is a typical Command, which may (depending on the implementation) result in many intermediate steps. First we create the order itself, then all the line items are added (triggering Events such as ORDER_CREATED and LINE_ITEM_ADDED). These events must be handled in a single transaction because we need atomicity on the back-end.

On the front-end, the user is responsible for the atomicity of the operation. They add those line items by filling in some text fields, and once they are done with the order they submit the form. You might say that all the Events happen in a single "UI transaction", where a UI transaction is actually the user's intent to do something, such as create an invoice.

It's quite common to perform business invariant assertion in Command Handlers on the back-end. The order example may throw an exception and stop its execution if some preconditions are not met (let's say one of the products being ordered is not in stock). Because the operation is atomic, none of the associated Events is emitted.
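The back-end order Command described above might be handled as follows (names such as handlePlaceOrder are hypothetical): the business invariant is asserted first, and only if it holds are the Events emitted, all together.

```javascript
// Back-end sketch: one Command either fails before anything happens,
// or atomically yields all of its Events.
function handlePlaceOrder(command, stock) {
  // Business invariant: every ordered product must be in stock.
  for (const item of command.items) {
    if ((stock[item.product] || 0) < item.qty) {
      throw new Error(`out of stock: ${item.product}`);
    }
  }
  // Precondition met: one Command results in several Events.
  return [
    { type: "ORDER_CREATED", orderId: command.orderId },
    ...command.items.map(item => ({ type: "LINE_ITEM_ADDED", ...item })),
  ];
}

const events = handlePlaceOrder(
  { orderId: 1, items: [{ product: "widget", qty: 2 }] },
  { widget: 5 }
);
// events: ORDER_CREATED followed by one LINE_ITEM_ADDED
```

Note how the thrown exception prevents any Event from being emitted, which is exactly the behavior the next section argues is impossible on the front-end.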

This can't simply happen on the front-end. Consider two naive implementations:

a) The user adds a product that is not available. Once they try to submit the form, a validation error is shown. The user's decision to submit the form has already taken place, but the UI transaction failed.

b) The user can't even add the product because the UI displays it as unavailable. Again, this results in the UI transaction failing.

Get used to it: there should be no Commands on the front-end. It is far better to think about interaction with the UI in terms of Events. As a result, we should consider Flux more akin to Event Sourcing than to CQRS.

The Human Brain is a Vital Front-End Component

In a typical back-end CQRS/ES/DDD implementation there are Commands which are handled by Handlers that emit Events. As previously discussed, we describe interaction with the application UI as Events rather than Commands. So where are the Commands in a front-end application?

This question bothered me for a long time. When I finally hit upon the answer, I found it both surprising and convincing: a person who is interacting with the user interface is actually responsible for creating the Command, and it occurs inside their head. When they go to click a button, it is this intention that is the Command. Once they click the button, there is no way to revert that action. It has already happened, and therefore the actual click is an Event.

Therefore the Human Brain is a vital front-end component that shouldn't be neglected in our application architecture. It makes perfect sense to treat UI interactions as Events and not as Commands, because this accurately describes the nature of both Events and Commands in the context of CQRS/ES/DDD.

This simple realization uncovers a few nice facts about front-end architecture based on unidirectional data flow (especially Redux) that could help us to find an elegant solution for a really hard problem: that of Side Effects. The Redux community is still struggling to find the right way to handle asynchronous effects such as API calls that are triggered by Events inside the application. This will be the topic of my next post.

http://feedproxy.google.com/~r/justdiscourse/browsing/~3/0CmtDOabkMs/


Yunier José Sosa Vázquez: Meet the featured add-ons for December

Tuesday, December 8, 2015, 16:00

The last month of the year has arrived, and with it we bid farewell to the featured Firefox add-ons of 2015. It was a great year, and we discovered useful add-ons that help with many aspects of everyday life, such as online shopping and increased privacy while browsing.

Pick of the month: Fox Web Security

by Oleksandr

Fox Web Security is designed specifically to block websites flagged as dangerous or containing content unsuitable for children. The add-on is lightweight and, thanks to its integration with antivirus software, offers a high level of protection.

This add-on is extraordinarily fast and effective! You can say goodbye to porn, scam and virus sites. Now my browsing is absolutely safe.

Fox Web Security blocking a website containing a virus. Fox Web Security blocking a website with sexual content.

Install Fox Web Security »

We also recommend

YouTube™ Flash-HTML5

by A Ulmer

YouTube™ Flash-HTML5 lets you play YouTube videos with either the Flash player or the HTML5 player.

Choosing which player to use for watching videos

AdBlock for YouTube™

by AdblockLite

AdBlock for YouTube™ removes all advertising from YouTube.

AdBlock for YouTube removing an ad

1-Click YouTube Video Download

by the 1-Click YouTube Video Download Team

1-Click YouTube Video Download lets you easily download YouTube videos in any of the available formats.

YouTube Video Download showing the list of videos available for download

Nominate your favorite add-ons

Don't know how? Just send an e-mail to amo-featured@mozilla.org with the name of the add-on or its installation file, and the team members will evaluate your recommendation.

That's all for this year. In 2016 we will show you new and interesting add-ons for your browser. If you enjoyed this, please share it on your social networks.

http://firefoxmania.uci.cu/conoce-los-complementos-destacados-para-diciembre-2015/


The Mozilla Blog: Focus by Firefox – Content Blocking for the Open Web

Tuesday, December 8, 2015, 07:17

Today we are launching Focus by Firefox, a free content blocker for Safari users on iOS 9. The app allows users to control their data flow by blocking categories of trackers, such as those used for ads, analytics and social media, and can improve performance on mobile devices by blocking Web fonts.

We want to build an Internet that respects users, puts them in control, and creates and maintains trust. Too many users have lost trust and lack meaningful controls over their digital lives. This loss of trust has impacted the ecosystem – sometimes negatively. Content blockers offer a way to rebuild that trust by empowering users. At the same time, it is important that these tools are used to create a healthy, open ecosystem that supports commercial activity, instead of being used to lock down the Web or to discriminate against certain industries or content. That’s why we articulated our three content blocking principles.

We’ve now put these principles into action. We made Focus by Firefox because we believe content blockers need to be transparent with publishers and other content providers about how lists are created and maintained, rather than placing certain content in a permanent penalty box. We want this product to encourage a discussion about users and content providers, instead of monetizing users’ mistrust and pulling value out of the Web ecosystem. Focus by Firefox is free to users and we don’t monetize it in other ways.

For many content blockers, the standards used to determine what gets blocked aren’t clear. They aren’t transparent about their choices. They don’t provide ways for blocked content providers to improve and become unblocked. And some content blockers remove companies from a list in exchange for payment.

With Focus by Firefox, we are taking a different approach. To do this, we’ve based a portion of our product on a list provided by our partner Disconnect under the General Public License. We think Disconnect’s public list provides a good starting point that demonstrates the value of open data. It bases its list on a public definition of tracking and publicly identifies any changes it makes to that list, so users and content providers can see and understand the standards it is applying. The fact that those standards are public means that content providers–in this case those that are tracking users–have an opportunity to improve their practices. If they do so, Disconnect has a process in place for content providers to become unblocked, creating an important feedback loop between users and content providers.

Content blocking is new terrain for us and we don’t have all the answers yet. As an industry, we need to figure out how to make these feedback mechanisms much more robust, so that content providers have a stronger incentive to put users in command of their online experience. And we need to understand better what users want. Some care about privacy. Others on mobile care about performance. So while Focus by Firefox is launching geared towards giving more choice over tracking, we plan to provide control over performance and data usage.

As we innovate on this product, we’ll be transparent about our decisions and work to create and improve those feedback loops between users and content providers. This is how we believe blocking tools can strengthen the commercial activity that underlies the Internet while giving users control and earning back their trust.


https://blog.mozilla.org/blog/2015/12/07/focus-by-firefox-content-blocking-for-the-open-web/


Nick Cameron: Macro plans - syntax

Tuesday, December 8, 2015, 05:26

I would like to improve the syntax for both macros by example and procedural macros.

For macros by example, there is a soft backwards compatibility constraint: although new macros do not have to be backwards compatible with old macros, they should be as close as possible to make the transition as easy as possible for macro authors. Ideally, if a macro does not violate hygiene via the current bugs, then it should be usable as a new macro with no changes.

Currently, syntax extensions have no special syntax (they are just Rust code). Although there are attributes for importing macros, declaring macros is fully programmatic. I would like to introduce some syntax for making declaration easier.

Macros by example

We first need to declare macros using a new name, I propose macro. There is then the question of whether we should continue to treat macro definitions as macro uses of a special macro, or to treat them like any other language item. I prefer the latter, and thus propose macro rather than macro!. We also need to handle privacy for macros, and I propose pub macro for this, in line with other items. Although I don't propose doing so now, we might later also want unsafe macro.

I would like to make the semi-colons between pattern arms optional (if the user uses braces for the right-hand side). We could consider semi-colons deprecated and disallow them before stabilisation.

I would like to make the kinds of brackets usable in macros stricter. I propose that the body of a macro and the right-hand side of macros must always use braces, and the semi-colon terminated form of macros is no longer allowed. We will continue to accept any kind of bracket ((), [], {}) around the pattern, but the kind of bracket must match the use. Whilst new macros are being stabilised, these changes should cause deprecation warnings rather than errors to make adoption of the new macro system easier.

Example, old macros:

macro_rules! foo (  
    (bar) => { ... };
    ($x: ident) => ( ... );
);

new macros:

pub macro foo {  
    (bar) => { ... }
    ($x: ident) => { ... }
}

I would also like to make it easier to write macros with a single pattern. These are common and often small, so the boilerplate of writing the current pattern matching style is annoying.

For single-pattern macros I propose a more function-like style, eliding the outermost braces and =>. For example,

macro_rules! foo (  
    ($x: ident) => ( ... )
);

could be written as

pub macro foo($x: ident) {  
    ... 
}

Finally, I propose that declaration of macros is only allowed in item position.

Macro use stays unchanged.

Parsing

macro is a keyword, so I believe that parsing macros as items should not present any major difficulty. The only slight niggle is distinguishing the two forms, in particular when the pattern is pathological:

macro foo {() => { ... }}  

vs

macro foo {() => {}} { ... }  

To parse, after finding the macro keyword and a name, we skip a token tree and look at the next token. If it is a {, then we must be parsing the simple form and the skipped token tree is the pattern. Otherwise, we are parsing the complex form, and the skipped token tree is the body of the macro.

Alternatives

Keep macro definition as a macro use, using macro!. This has the 'advantage' of allowing macros to be defined in non-item position. I don't think there is a great use case for this. The other motivation is that macros by example should be implementable as a procedural macro. Whilst this is a noble goal, I don't think it actually means we should take this path. In particular, we could view macro expansion as involving a desugaring step followed by expansion, where everything other than the desugaring step could be implemented as a procedural macro. Thus, we have the proof that procedural macros are awesome (and an alternative macro system could be implemented as a procedural macro), but the standard macro system has some syntactic advantages (as one would expect for an integral part of the language). I'm not actually proposing implementing macros using such a desugaring to procedural macros, but we could if there is demand.

Replace the $x syntax for arguments. The dollar-signed variables are ugly, make macros look unlike other Rust code, and have little precedent from other languages. However, changing this part of the syntax would have backwards compatibility issues. Furthermore, we cannot use just x (if we want to continue to allow pattern-matching) since we then couldn't distinguish between variables and literals in patterns. So, it is not obvious that there is a better syntax.

It has been suggested that we could have macros without patterns, e.g., let x = foo!; rather than let x = foo!();. This would have some readability advantages and shouldn't complicate parsing. It does, however, add complexity, and I am not sure the gains are worth it (in particular, we don't support any equivalent for functions).

Include the ! in the name of the macro, e.g., pub macro foo!() { ... }. Not sure how I feel about this, this sort of follows the precedent of lifetimes where we use ' in both the declaration and use. The backwards compatibility issue seems minor here. I worry that when parsing (or reading), it is hard to distinguish a macro use from a macro declaration (especially if we allow macros in identifier positions).

Finally, we might consider a more dramatic change to the syntax, which might perhaps fix the $x problem mentioned above. However, this would have big backwards compatibility concerns, the new problems we introduce might not be better than the old problems we fix, and it would be a lot of work to design and implement for relatively little gain.

Procedural macros

I propose that procedural macros are Rust functions marked with the attribute #[macro] or #[macro_attribute]. The former gives a macro used with function-like syntax with the name of the function, the latter a macro used with attribute-like syntax. #[cfg(macro)] indicates that an item should be compiled in the plugin phase only. For example,

#[cfg(macro)]
pub mod macros {  
    #[macro]
    pub fn foo(...) -> ... { ... }

    #[macro_attribute]
    pub fn bar(...) -> ... { ... }
}

#[::macros::bar]
fn main() {  
    ::macros::foo!();
}

Initially, only an entire crate may be marked as #[cfg(macro)] (i.e., the above example would not be legal). I hope we can ease this restriction later.

I'll save the details of the arguments that a procedural macro function takes and of naming macros for later.

Alternatives

We could have a syntax closer to that for macros by example. However, we would still need to pass some context to the macro and distinguish the macro as a procedural one (i.e., whether the code should be executed or copied when expanded). I believe that using proper functions is therefore nicer. We could use the macro keyword, rather than an attribute to mark the function as a macro (thus making them a little more similar to macros by example). However, we would then need some other way of marking attribute-like macros.

We could implement a trait rather than using an attribute (as we currently do). I believe using an attribute is more appropriate here - it is more syntactic than a trait and macros are part of the syntactic world. This also means that the parser can distinguish macros from other functions, thus allowing them to be mixed in the same crate if desired.

http://www.ncameron.org/blog/macro-plans-syntax/


Yunier José Sosa Vázquez: On video: see what happens to our data

Monday, December 7, 2015, 16:00

The Internet was made for everyone, but it is being threatened by big corporations that are turning people into their product without their knowledge or consent. You will be surprised when the curtain is pulled back to show how big corporations are watching our every move.

Although it may seem like a movie, our private information is currently exploited for profit, and big companies take advantage of the huge trail of data we leave on websites every day. If you want to learn how to protect yourself from online tracking, visit this page published by Mozilla. Watch what happens when the Internet's hidden business is exposed in this video.

Install Firefox and take back control of your data.

Source: Mozilla Hispano

http://firefoxmania.uci.cu/en-video-mira-lo-que-sucede-con-nuestros-datos/


About:Community: Participation, yes and, community

Monday, December 7, 2015, 16:00


Photo by Marcia Knous


I’ve noticed that the terms “participation” and “community” have become more or less interchangeable in the context of Mozilla. I’ve even had a conversation with a Mozilla executive who said “I thought they were the same.”

It seems that the term “community” has been somewhat deprecated within Mozilla because it is ambiguous, in that it can mean different things to different people, which can lead to miscommunication. For example, some people use “community” as a shorthand for “volunteers” while others very intentionally use it to mean all Mozillians, paid and non-paid.

I see “participation” and “community” as related but distinct concepts, both of which are essential to Mozilla. Let’s start with definitions (sorry, I’m a word nerd). The Participation Team’s wiki page says:

Participation is when people can freely contribute their energies, time and ideas to support Mozilla’s mission.

So far, so good.

Here’s my generic definition of “community”:

A community is a group of people who interact and develop a network of relationships around a common factor.

The “common factor” is the wild card that leads to a wide variety of types of communities. If the common factor is physical location, the community can be a traditional geographic community. Some other types of communities include:

- Communities of interest, whose members share an interest in a topic.
- Communities of practice, whose members share a profession or craft.
- Communities of action, whose members share a goal they work together to achieve.

So, what kind of community is Mozilla? I consider Mozilla to be a community of action united by the Mozilla mission, which is the shared goal. Members of the Mozilla community take action (that is, participate) in order to realize that mission. However, Mozilla is really a constellation of overlapping sub-communities based on geography, interest, or practice. The nature of the sub-community determines the type of action that its members typically do. For example, a localization community shares a particular language, such as French, and its members take action by localizing software, websites, or documentation. A community for a functional area, such as QA, has some characteristics of a community of practice, but members focus on the practice of their skills on behalf of Mozilla.

Other key concepts in my definition of “community” are interaction and relationships. People who share a common factor, but don’t interact about it or have relationships with one another as a result, are not, for the purposes of this discussion, a community. (I know this excludes another widespread use of the word. That’s why I gave my definition.)

So, to bring the concepts together:

Participation is the “what” of the Mozilla community; community is the “how” of Mozilla participation.

Or rather, community, in my opinion, should be the “how” of participation. If people simply participate, but don’t interact and build relationships with others (that is, become part of a community), they are unlikely to sustain interest in participating for more than, say, a few months. Therefore, when creating opportunities for participation, one needs to build in opportunities for interaction, so that the set of participants can develop into a community. When participants have relationships with one another, they are more likely to keep coming back to participate, making the group sustainable, that is, retaining more people than it loses.

Participation and community don’t necessarily go hand in hand; ensuring that they do requires careful design of participation opportunities.

The canonical example of participation in open source software is contributing a code patch. A programmer finds something that needs fixing, gets a copy of the codebase, fixes the bug, submits a patch for review, and ideally has the patch accepted into the main codebase. At the very least, this programmer has to interact with a reviewer. Ideally, she interacts with other members of the project earlier and more extensively than just at review time, through mentorship and discussion of the bug. She might also subscribe and post to a relevant mailing list, and read and comment on other bugs. Through these interactions, repeated over time, she becomes part of the community for the open source project.

However, not all participation opportunities have interaction built-in. For example, Mozilla Developer Network (MDN) is a wiki of developer documentation. Because it uses a pure wiki model, no review is needed for a change to go live on the site. It’s quite possible for someone to create an account and make many edits to the site without ever interacting with another human being. Such contributors are participating, but are not engaged with the MDN community. As community manager for MDN, one of my challenges is to draw contributors like that into communication channels such as mailing lists or IRC, and encourage them to interact with other members of the MDN community.

As your Mozilla team sets goals for 2016, with an eye towards inviting participation, I encourage you to also look for ways to invite and encourage interaction among participants, especially if open participation is new for your team. This could be as simple as creating a category on the Mozilla Community Discourse site, and encouraging participants to post there via welcome emails. Of course, make sure that existing team members initiate and respond to discussions there as well; they are part of the community, too. Communities don’t happen overnight, and they require (repeated, sustained) help in order to grow. The payoff is in sustained participation, as well as in the intangible benefits of relationships with fantastic Mozilla contributors.

http://blog.mozilla.org/community/2015/12/07/participation-yes-and-community/


Marco Zehe: Looking at the accessibility of the IRCCloud service

Monday, December 7, 2015, 14:18

In recent months, I’ve started using the IRCCloud service for all my communication via Internet Relay Chat (IRC). We use IRC at Mozilla, and many other open source projects as well as the W3C use IRC for their instant communication needs.

Previously, I had been using typical IRC clients such as Adium for OS X or ChatZilla as an add-on to Firefox. But that caused a lot of problems when working from multiple machines, like not being able to easily switch from one to the other without losing context and having to rebuild it, or having to log in with two separate nicknames.

A while ago, IRCCloud started appearing at the periphery of my filter bubble. It is a cloud-based IRC client with a few extras. And in the spring, I decided to finally check it out myself. We’re running an instance of the Enterprise Edition at Mozilla. So I got an account, and off I went.

What is it?

Here is what the team says about the service. IRCCloud is offered as both a web application and native apps for iOS and Android. The native apps offer push notifications for nickname mentions as well as keyword highlights, meaning one can react to things even on the go without sitting in front of the computer. IRCCloud comes in two flavors: one instance that is hosted at irccloud.com, which allows individuals and teams to connect to IRC via IRCCloud’s hosted service, and the Enterprise Edition, which is usually self-hosted by the organization purchasing the annual subscription. The latter is what Mozilla is using, but I have been looking at both instances by now, since they differ slightly. The instance at irccloud.com is usually newer and has the latest and greatest features, while the Enterprise Edition can lag a bit behind.

IRCCloud works like this: You log onto the service, and from there, manage your different IRC server connections with nicknames, registered nick passwords, etc. The paid accounts offer you an unlimited number of connections; the free edition allows for two, in addition to the one to irc.irccloud.com. The Pricing page has more information and details.

Once connected, the connections are permanent. IRCCloud keeps you logged into all your IRC servers and collects incoming chats even when you are currently not logged into the IRCCloud service. But when you come back, either through the web site or mobile app, you can review everything that has happened in your absence. This includes private messages that wait for you just like on any other instant messaging service.

The accessibility

When I first started using our Enterprise instance, I found the web application to be very accessible. All the options were easily reachable with the NVDA virtual cursor in Firefox, or with Safari or Chrome + VoiceOver on OS X. IRCCloud supports the standard IRC commands that start with a slash (/) character. Input happens in a standard multiline text area; the output appears in a part of the web page right above that text area. Adding servers and channels and starting private conversations were all manageable through controls present on the web page. The nice additions, like paste bins for multi-line text, uploading of images, and embedding of tweets, all worked nicely as well.

However, with a redesign in September 2015, the web app hosted on irccloud.com lost some of this accessibility, due to things now being hidden behind mouse hovers, or elements turning from semantic links into non-semantic text clickables. At first, I didn’t make the connection, but at some point it dawned on me that if Mozilla pulled in these changes, I and other current and future visually impaired or blind Mozilla employees would lose that functionality as well. So I decided to contact the IRCCloud team.

Over the past week or so, I worked with James, one of the founders of IRCCloud, who works on the front-end portion, to resolve the (for me) most urgent issues in the IRCCloud instance. This is already live, and anyone can try it out. The things we covered were:

  • Clickable text that should have been buttons was made into buttons via WAI-ARIA.
  • Items that expand into a multitude of options, such as the account e-mail address or the Options buttons, now indicate their state via aria-expanded. Those that open a non-modal pop-over, such as every nickname anywhere in chats and private messages, have a sub-menu indicator. Those sub menus aren’t really sub menus, but rather pop-overs with a multitude of options, such as directly sending a private message, finding out who that person is, etc.
  • The channels and servers as well as each private message nickname are now announced as tabs. This seemed to us the most appropriate mapping, since only one channel, server output or private message is usually visible at a time, but radio buttons didn’t seem really appropriate to use. James kept the heading structure in place, so that every server instance starts with a new heading.
  • Things that were previously hidden behind mouse hovers such as the Options button for an individual IRC server entry are now shown when the server name is focused via the virtual cursor or other browsing mechanism. With NVDA, this will cause the virtual cursor to land on the Options button once the button becomes visible, with the server name moving down one line in the virtual buffer.
  • Options in the individual server settings, accessible from the above-mentioned Options button, are now toggle buttons where they are toggleable. Other buttons remain normal buttons as appropriate. Now the state clearly communicates whether an option, such as embedding social links, is on or off.
  • In channel views, if there is a series of collapsed status messages, the button to expand them into individual entries is now accessible, and it indicates whether the status messages are currently expanded or collapsed.
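
To illustrate the aria-expanded pattern that several of these fixes rely on, here is a generic sketch (Python purely as illustration; the function name and attribute values are mine, not IRCCloud’s actual markup):

```python
# Generic sketch of promoting a non-semantic text clickable to an
# accessible toggle button. The function and its defaults are made up
# for illustration; only the attribute names come from WAI-ARIA.
def options_button_attrs(expanded):
    """Attributes a disclosure-style 'Options' control should carry."""
    return {
        "role": "button",                         # announce as a button, not plain text
        "tabindex": "0",                          # make it keyboard-focusable
        "aria-expanded": "true" if expanded else "false",  # current state
    }

print(options_button_attrs(True))
# → {'role': 'button', 'tabindex': '0', 'aria-expanded': 'true'}
```

A screen reader can then announce the control as “Options, button, expanded” or “collapsed”, instead of reading it as inert text.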

This is by no means meant as a full audit, nor was it meant to be a full consulting service. I pointed out to James the lowest-hanging fruit that made working with IRCCloud difficult for me after the redesign. James was super responsive in adding the appropriate attributes, contributing his own ideas, and asking good questions about details here and there. What we did not tackle at all were issues for keyboard-only, non-screen-reader users or low-vision users. I do not, for example, know if contrast is good in all places. But what we did can certainly be viewed as a starting point for dealing with other issues that might cause headaches for some users. Interest and willingness to fix these issues is definitely there, so don’t be shy if you want to use IRCCloud and run into problems!

The iOS app

Even before working with James on the web app, I had contacted the team about some problems I had found in the pre-2.0 version of the iOS app. The iOS app had, for example, a problem with not updating VoiceOver with newly loaded content after switching channels. However, since buttons were labeled and other attention to detail had been given, I was sure this was only an oversight. Sam, the IRCCloud team member responsible for the mobile apps, responded very quickly, and we worked together, trading observations and new beta builds back and forth; within a day, in time for the recent 2.0 release of the iOS app, the problems were solved and some more improvements were made. The iOS client, which runs on both iPhone and iPad, is now very accessible. In fact, I am not aware of any area of the app right now that I cannot use.

Wait, what about the Android app?

Sorry, I didn’t look at that one. I currently use Android only for testing Firefox, not as a productive device of any kind, so looking at it had no priority for me among all the other things I have to take care of in my day-to-day job at Mozilla.

What about Slack or Gitter?

Gitter is a chat service built around the GitHub ecosystem and is very developer-centric. Its accessibility on the web is limited, and I know nothing about the iOS or Android apps. It offers an IRC bridge, so one can interact with one’s Gitter rooms via IRC.

Slack is also an enhanced chat service, offering team collaboration, embedding of many kinds of media, file transfers, etc. I tested Slack when I wanted to join the a11y Slackers, but was very unhappy with the service. The web service is all non-semantic clickables, and the iOS app behaved erratically, showing me the chat history in a seemingly random order. I ended up using the Gitter to IRC bridge and my Mozilla-hosted IRCCloud instance to connect to it, and am now always-on there.

Both of these services are centralised, proprietary messaging services, unlike IRC, which is an open, distributed system of different networks that don’t depend on one another. So even if IRCCloud should suffer an outage, the whole system does not become unavailable at once, and you can always use an alternative client as an emergency backup to connect to your favorite IRC servers. IRCCloud does offer additional features like file storage, embedding of social links, pastebins and (if you want) videos, bringing it up to the level of other proprietary solutions. And some parts of IRCCloud are even open source!

Disclaimer

Some might think that this post contains a lot of information that could be perceived as marketing. I am not associated with IRCCloud in any way other than using their service, and I do not receive any compensation for writing this post. It purely reflects my own opinion of their service. Their responsiveness to accessibility issues, and the speed with which they fix them, is commendable. I like this service, that’s all.

https://www.marcozehe.de/2015/12/07/looking-at-the-accessibility-of-the-irccloud-service/


Christian Heilmann: People on the Edge: Gaurav Seth

Monday, December 7, 2015, 08:56

In a new series of posts, I want to introduce the world to people I work with. People who work on the Microsoft Edge browser and related technologies. The faces and voices behind the product.

Gaurav Seth on stage

Today we have an interview with Gaurav Seth (@gauravseth). Gaurav is a program manager on the Chakra JavaScript engine, which – amongst other things – is in use in Edge to make the web just work. Gaurav also just made a splash at the JSConf Last Call conference in the US announcing that in the new year, ChakraCore will be fully open source and available for use and contribution.

The video is on YouTube.

There’s also an audio version on Archive.org.

In this interview, you’ll hear Gaurav try to answer my questions about the following topics:

  • What’s the difference between Edge and Chakra
  • Why does it make sense to look further than V8 when it comes to server-side JavaScript?
  • How does interoperability work across JavaScript engines?
  • Who is involved in making JavaScript engines behave, stay backwards compatible and not break the web?
  • How can JavaScript engines solve the problem of ES6 breaking in older browsers? Is it up to developers to make that easier?
  • How can ES6 get faster? What can developers do to make it happen?
  • How evergreen browsers help the adoption of ES6.
  • How to meet the Chakra team and get new things into the engine.
  • How writing bad code on the web inspired a faster JavaScript
  • How minification caused slow JavaScript execution and how the team fixed that issue
  • What’s the future of multithreaded JavaScript and asm.js?

Thanks must go to Gaurav for answering my questions and to Seth Juarez and Golnaz Alibeigi of Channel 9 for filming and producing the series. There are a few cuts in this one, which were because of noisy people in the background, so sorry about that.

https://www.christianheilmann.com/2015/12/07/people-on-the-edge-gaurav-seth/


Zack Weinberg: Bootstrapping trust in compilers

Saturday, December 5, 2015, 20:10

The other week, an acquaintance of mine was kvetching on Twitter about how the Rust compiler is written in Rust, and so to get started with the language you have to download a binary, and there’s no way to validate it—you could use the binary plus the matching compiler source to recreate the binary, but that doesn’t prove anything, and also if the compiler were really out to get you, you would be screwed the moment you ran the binary.

This is not a new problem, nor is it a Rust-specific problem. I recall having essentially the same issue back in 2000, give or take, with GNAT, the Ada front-end for GCC. It is written in Ada, and (at the time, anyway) not just any Ada compiler would do, you had to have a roughly contemporaneous version of … GNAT. It was especially infuriating compared to the rest of GCC, which (again, at the time) bent over backward to be buildable with any C compiler you could get your hands on, even a traditional one that didn’t support all of the 1989 language standard. But even that is problematic for someone who would rather not trust any machine code they didn’t verify themselves.

One way around the headache is diverse recompilation, in which you compile the same compiler with two different compilers, then recompile it with itself-as-produced-by-each, and compare the results. But this requires you to have two different compilers in the first place. As of this writing there is only one Rust compiler. There aren’t that many complete implementations of C++ out there, either, and you need one of those to build LLVM (which Rust depends on). I think you could devise a compiler virus that could propagate itself via both LLVM and GCC, for instance.
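
The diverse-recompilation check can be sketched with a toy model (Python purely as illustration; the compiler names and the EVIL marker are made up, and real comparisons are of course done on actual binaries, bit for bit):

```python
# Toy model of diverse recompilation (Wheeler's "diverse double-compiling").
# A "binary" is just a string here; running a binary that carries the EVIL
# marker re-inserts that marker into whatever it compiles, which is the
# Thompson "trusting trust" propagation trick in miniature.

def run_compiler(binary, source):
    """Run a toy compiler binary on some source text."""
    out = "bin(" + source + ")"
    if "EVIL" in binary:
        out += "EVIL"          # a subverted binary taints its output
    return out

rustc_src = "rustc-source"

# Stage 1: build the compiler from source with two independent binaries.
stage1_a = run_compiler("gcc-bin", rustc_src)        # honest root
stage1_b = run_compiler("clang-binEVIL", rustc_src)  # subverted root

# Stage 2: recompile the compiler with itself-as-produced-by-each.
stage2_a = run_compiler(stage1_a, rustc_src)
stage2_b = run_compiler(stage1_b, rustc_src)

# Bit-identical stage-2 outputs would mean neither chain injected anything
# (assuming the two roots really are independent). Here they differ:
print(stage2_a == stage2_b)   # → False: the trojan reveals itself
```

The comparison only certifies the compiler relative to the assumption that the two roots do not share the same backdoor, which is exactly why a single Rust compiler (or a virus spanning both LLVM and GCC) defeats it.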

What’s needed, I think, is an independent root of correctness. A software environment built from scratch to be verifiable, maybe even provably correct, and geared specifically to host independent implementations of compilers for popular languages. They need not be terribly good at optimizing, because the only thing you’d ever use them for is to be one side of a diversely-recompiled bootstrap sequence. It has to be a complete and isolated environment, though, because it wouldn’t be impossible to propagate a compiler virus through the operating system kernel, which can see every block of I/O, after all.

And it seems to me that this environment naturally divides into four pieces. First, a tiny virtual machine. I’m thinking a FORTH interpreter, which is small enough that one programmer can code it by hand in assembly language, and having done that, another programmer can audit it by hand. You need multiple implementations of this, so you can check them against each other to guard against malicious lower layers—it could run on the bare metal, maybe, but the bare metal has an awful lot of clever embedded in it these days. But hopefully this is the only thing you need to implement more than once.
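
To illustrate just how small such an interpreter can be, here is a hypothetical sketch of a FORTH-style stack machine (in Python for readability; the post of course imagines it hand-written in assembly so it can be audited by hand, and the word set here is made up and far smaller than any real FORTH):

```python
# Minimal FORTH-style stack machine: a data stack plus a dictionary of words.
def forth(program):
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "drop": lambda: stack.pop(),
    }
    for token in program.split():
        if token in words:
            words[token]()            # execute a known word
        else:
            stack.append(int(token))  # anything else is a number literal
    return stack

print(forth("2 3 + dup *"))   # → [25]
```

The whole execution model is "split on whitespace, look each token up, push numbers", which is why multiple independent hand-audited implementations are a realistic ask.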

Second, you use the FORTH interpreter as the substratum for a more powerful language. If there’s a language in which each program is its own proof of correctness, that would be the obvious choice, but my mental allergy to arrow languages has put me off following that branch of PL research. Lisp is generally a good language to write compilers in, so a small dialect of that would be another obvious choice. (Maybe leave out the call/cc.)

Third, you write compilers in the more powerful language, with both the FORTH interpreter and more conventional execution environments as code-generation targets. These compilers can then be used to compile other stuff to run in the environment, and conversely, you can build arbitrary code within the environment and export it to your more conventional OS.

The fourth and final piece is a way of getting data in and out of the environment. I imagine it as strictly batch-oriented, not interactive at all, simply because that cuts out a huge chunk of complexity from the FORTH interpreter; similarly it does not have any business talking to the network, nor having any notion of time, maybe not even concurrency—most compile jobs are embarrassingly parallel, but again, huge chunk of complexity. What feels not-crazy to me is some sort of trivial file system: ar archive level of trivial, all files write-once, imposed on a linear array of disk blocks.

It is probably also necessary to reinvent Make, or at least some sort of batch job control language.

https://www.owlfolio.org/research/bootstrapping-trust-in-compilers/


Robert O'Callahan: CppCast rr Podcast

Saturday, December 5, 2015, 11:09

This week I took part in a podcast interview with the folks of CppCast, about rr. It was a lot of fun... Check it out.

http://robert.ocallahan.org/2015/12/cppcast-rr-podcast.html


Michael Kohler: MozCoffee Framework – How do I organize a meetup?

Saturday, December 5, 2015, 00:49

Introduction

This document is meant to serve as a short tutorial for organizing a successful meetup. It explains the most important points and gives some tips on how the whole thing can be organized. It makes no claim to completeness, and it is up to each Mozillian how the organization is done.

What is the goal of the meetup?

First, think about what the goal of the meetup is. Possible questions include:

  • Is it a meetup to integrate new contributors into the community?
  • Is it a meetup for the existing community to get to know each other better and to establish a regular exchange of ideas?
  • Should it be about “Mozilla in general”, or is it a series of various possible topics?
  • Whom do I want to offer what? This could be coworkers at the office, colleagues/friends, or strangers.

Based on these questions, an agenda with possible topics can be put together.

Possible topics / formats

There are basically two possible formats for a meetup. It can be a “discussion round” where general topics are talked about; in most cases this can be described as a “gathering around shared interests”, where questions from attendees can simply be addressed. This is the simpler version, requires less organizational effort, and is well suited for a first meetup. You don’t need an agenda for it, at most a few topics you can bring up.

The other format is a talk-based meetup. In principle it doesn’t matter which topics are covered; what matters is that it is a topic you feel comfortable with and are happy to talk about.

Possible topics for talks (anywhere from 10 to 60 minutes):

  • Mozilla in general (mission, structure, community)
  • What are the functional areas in which one can get involved?
  • Product-specific, e.g. Firefox, Firefox OS, Webmaker, ...
  • Area-specific, e.g. UX, design, coding, localization, ...
  • Web-developer-specific talks, e.g. “Demo of the Firefox Developer Tools”
  • Net neutrality, privacy

For both formats: it doesn’t matter if you can’t answer a question. “I know whom to ask, and I’ll get back to you” is also a good answer.

A possible agenda with talks

  • 18:30 arrival
  • 18:40 short intro: “What is Mozilla?”
  • 18:45 talk
  • xx:xx Q&A
  • afterwards, a relaxed get-together and open discussion

This is of course only a suggestion. If it turns out that a meeting over lunch would be a better fit, that can of course be done as well.

What is the desired outcome of the meetup?

An important question is what the outcome of the meetup should be.

  • General information about Mozilla, its products, etc., so that people are informed?
  • Winning new contributors?
  • A mix of both?

To simplify further planning, two or three goals should be defined here. Possible examples:

  • By the end of the meetup, 5 more people know what Mozilla stands for and how they could help
  • By the end of the meetup, 2 people are interested in contributing to Mozilla and know where they can start
  • 10 new people will download Firefox at home and try it out
  • 2 people tell their friends about the meetup and invite them to the next one

Size

Based on the topic, the size of the meetup can be roughly estimated. In the beginning, meetups will be somewhat smaller, since they are not yet well known. That is absolutely not a problem! Smaller meetups can be fun too and convey important information about Mozilla to other people.

There is no upper limit on size, but the bigger the meetup gets, the more organizational effort it requires.

Finding a suitable location

Based on size and topic, a suitable venue for the meetup can now be found. To be safe, this should be done at least 2 weeks before the meetup, so that all attendees know where it will take place.

Level 1 (up to 8-10 people): Smaller meetups can easily be held in restaurants. Just make sure the restaurant is not too crowded, so that conversation is possible; shouting at each other gets you nowhere ;) Note: don’t forget to make a reservation, so there is enough room. Places like Starbucks also work wonderfully.

Level 2 (for talks, or from 10 people up): For talks or larger meetups, a dedicated room is a must. In most cases, universities have free rooms in the evening which (if you ask nicely) you may gladly use for a meetup. Alternatively, your employer can be asked whether a meeting room may be used. If neither is possible, other companies can also be approached; companies close to web development are often happy to host meetups.

Level 3 (longer term): If it is foreseeable that there will be further regular meetups in the future, it makes sense to look for a longer-term solution. If an option was found in “Level 2”, you can ask the provider of the room whether, with n weeks’ notice, you may have the room at any time (subject to availability).

Running the meetup

There is only one thing to say here: have fun! Running the meetup should not be a chore; it should be fun for you and the attendees.

Follow-up work

To inform attendees about new meetups, you need a way to contact them. This can be a newsletter, a meetup.com group, or simply an e-mail list.

So that potential contributors can be supported as well as possible, it is necessary at the beginning to maintain a close relationship with them and to help them as much as you can.

Frequency

As long as the meetups take place regularly, it doesn’t matter how often that is. It can be once a quarter, or once a month; this depends on the time you can spend on the organization.

Tools / promotion

Are there other regulars’ tables, meetups, etc. in this city?

Are there other regulars’ tables or meetups in your city? You can find out via meetup.com, among other places. If there are, it makes sense to attend one of them to see how things are handled there. If a date for a Mozilla meetup is already known, it can also be promoted at these other meetups.

There may also be the opportunity to give talks at other meetups, to find out whether there is any interest in this city at all.

Meetup.com

For regular meetups, a meetup group can be created on meetup.com. More information is available directly on meetup.com.

Examples:

Twitter / social media

Promotion can, where it makes sense for the city at all, be done via Twitter and other social media. It is important to have a page somewhere that can be linked to in the posts; this page should mention at least a description, the date, and the location.

Budget

Normally it should not be necessary to get a budget for a meetup. If it is needed anyway, contact Michael Kohler, since this runs through Reps.

Swag

Stickers are a good way to make people happy. If they end up on a laptop and promote Firefox, all the better. If you need swag for a meetup, contact Michael Kohler, since this runs through Reps.

https://michaelkohler.info/2015/mozcoffee-framework-wie-organisiere-ich-ein-meetup


Mozilla Open Policy & Advocacy Blog: U.S. net neutrality is in the hands of the D.C. Circuit (again)

Saturday, December 5, 2015, 00:28

Today a United States appellate court in Washington, D.C. [heard] oral arguments over a lawsuit challenging the Federal Communications Commission’s (FCC) recent net neutrality order. We filed a joint amicus brief with CCIA supporting the order. The Internet needs a foundation of clear rules and authority to protect users and innovators from harmful blocking and throttling practices. If, on the other hand, the order is struck down, the U.S. Internet community will be back at square one, with little opportunity to engage with the evolving practices we are seeing today.

Twice before, this court (though with some different judges) has struck down FCC action on net neutrality; but both times, the principal reason was the source of authority supporting the action. In the current order under review, the FCC took the path supported by Mozilla, other organizations in civil society and the tech industry, and 4 million Americans, using its so-called “Title II” statutory powers to support the rules it adopted.

We engaged extensively in the FCC proceeding in support of Title II authority and of meaningful protections for the open Internet, including strong rules against blocking and discrimination of content, for both fixed and mobile Internet access services. We filed a written petition to the FCC, along with initial comments and reply comments. We followed that up by mobilizing our community, organizing global teach-ins on net neutrality. We also joined a major day of action and co-authored a letter to the President. And we’ve gone beyond the U.S. in our support of net neutrality, engaging in the European Union, Peru, and India.

The core argument in our amicus brief reflects our consistent support for net neutrality. Upholding the FCC’s order would preserve the status quo, reinforcing assumptions long held by end users and validating the policy balance and history associated with the concept of communications services. Striking the order, on the other hand, would unbalance the historical level playing field and undermine the pro-innovation and pro-competition framework that the open Internet provides, and which has led to tremendous socioeconomic benefits in the short time of its existence.

We hope the Court will uphold the Open Internet order as a foundation of protections for users, competition, and innovation, and we look forward to working with the FCC to address new opportunities and challenges for the Web as they arise.

https://blog.mozilla.org/netpolicy/2015/12/04/u-s-net-neutrality-is-in-the-hands-of-the-d-c-circuit-again/


Advancing Content: Advancing Content

Friday, December 4, 2015, 23:42

One of the many benefits of the Web is the ability to create unique, personalized experiences for individual users. We believe that this personalization needs to be done with respect for the user – with transparency, choice and control. When the user is at the center of product experiences everyone benefits.

Over the past two years, we’ve ideated, built and scaled a content platform that respects users. We served tens of billions of pieces of content. We experimented with all content – including advertising. We proved that advertising can be done well while respecting users. We have learned a ton along the way.

Our learnings show that users want content that is relevant, exciting and engaging. We want to deliver that type of content experience to our users, and we know that it will take focus and effort to do that right.

We have therefore made the decision to stop advertising in Firefox through the Tiles experiment in order to focus on content discovery. We want to thank all the partners who have worked with us on Tiles. Naturally, we will fulfill our current commitments as we wind down this experiment over the next few months.

Advertising in Firefox could be a great business, but it isn’t the right business for us at this time because we want to focus on core experiences for our users. We want to reimagine content experiences and content discovery in our products. We will do this work as a fully integrated part of the Firefox team.

We believe that the advertising ecosystem needs to do better – we believe that our work in our advertising experiments has shown that it can be done better. Mozilla will continue to explore ways to bring a better balance to the advertising ecosystem for everyone’s benefit, and to build successful products that respect user privacy and deliver experiences based upon transparency, choice and control.

https://blog.mozilla.org/advancingcontent/2015/12/04/advancing-content/


Mozilla Open Policy & Advocacy Blog: UK IP Bill is a threat to privacy, security, and trust online

Friday, December 4, 2015, 21:36

The British Government has proposed legislation that would expand the surveillance capabilities of law enforcement and intelligence agencies. The draft omnibus Investigatory Powers Bill purports to modernise and update surveillance law to create a regime that is “fit for the digital age.” But as written, the law would undermine the technological and legal design framework that protects the continued vitality of the Open Internet. It represents a serious threat to open source software, online commerce, and user privacy, security, and trust.

The draft IP bill proposes a broad and dangerous set of surveillance mandates and authorities that threaten privacy and security online. Keeping Internet users safe does not have to cost them their privacy, nor the integrity of communications infrastructure.

As a registered UK company, and as a global community whose mission is to promote openness, innovation, and opportunity on the Web, we shared our concerns with the UK government by submitting commentary to the Science & Technology Committee of the House of Commons on November 27.

Our submission identified five serious, non-exhaustive concerns we wish to highlight in the bill:

  • Weakening security: Requirements to undermine encryption that pose a severe threat to trust online and to the effectiveness of the Internet as an engine for our economy and society;
  • Tampering with devices: Bulk equipment interference authorities that could be used to violate the integrity of our products and harm our relationship with our users;
  • Secrecy: Limitations on disclosure that impact our open philosophy and in practice are unworkable for an open source company;
  • Legalising mass surveillance: Bulk interception capabilities that would compromise the privacy of communications; and
  • Data retention: Data retention mandates that create unnecessary risk for businesses and users.

Find Mozilla’s full submission to the Science & Technology Committee here.

So what’s the alternative?

Government collection and retention of user data impact trust and openness online. This makes it critical to have a clear and public understanding of the means and limits of surveillance activities – a set of surveillance rules of the road.

The following three principles, derived from the Mozilla Manifesto, attempt to identify those means and limits. They offer a “Mozilla way of thinking” about the complex landscape of government surveillance and law enforcement access. We do not propose a comprehensive list of good or bad government practices, but rather describe the kinds of activities in this space that would protect the underpinnings and integrity of the Web.

  • User Security: Mozilla Manifesto Principle #4 states “Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.” Governments should act to bolster user security, not to weaken it. Strong and reliable encryption is a key tool in improving user security. Security and privacy go hand-in-hand; you cannot have one without the other.
  • Minimal Impact: Mozilla Manifesto Principle #2 states that the Internet is a global public resource. Government surveillance decisions should take into account global implications for trust and security online by focusing activities on those with minimal impact.
  • Transparency and Accountability: Mozilla Manifesto Principle #8 calls for transparent community-based accountability as the basis for user trust. Because surveillance activities generally are (and inherently must be, to some degree) conducted in secret, independent oversight bodies must be effectively empowered and must communicate with and on behalf of the public to ensure democratic accountability.

Next Steps

Comprehensive reform of this bill will be necessary in order to protect online commerce and the security and privacy of users. Mozilla will continue to follow the process closely, including submitting additional evidence to the Committees in charge of scrutinising the bill.

Currently, the Joint Committee on Human Rights is accepting submissions from stakeholders until 7 December. The main committee to analyse the bill – the Joint Committee on the Investigatory Powers Bill – has also recently announced that it will receive written evidence until 21 December. The committee will then report its findings by 11 February 2016.

As a global community of developers and engineers, Mozilla prides itself on providing secure and open products and services to our users. In our view, the draft Investigatory Powers bill is a missed opportunity to set a strong global standard in reforming surveillance powers, and a harmful step backward for the interests of Internet users and the Internet economy.

At this critical time, it is important that the UK government set a strong standard anchored in the values of privacy and security. We strongly advise the committees to carefully weigh the intended objectives with the consequences for the continued success of UK businesses and the security of users.

Now is the time to contact your representatives in the Committees and make your voice heard. You can learn more and take action through a campaign platform launched by a civil society coalition of UK and international organisations, dontspyonus.co.uk.

https://blog.mozilla.org/netpolicy/2015/12/04/uk-ip-bill-is-a-threat-to-privacy-security-and-trust-online/


Chris H-C: A High Score List You Don’t Want to Top

Friday, 04 December 2015, 20:58

[Screenshot: the Try high-score list]

Firefox is a large and complicated software project. As such, it has a large and complicated build system and a large and complicated suite of tests. These builds and tests are run each time code is pushed to the mozilla-central repository to ensure that nothing obviously wrong makes it into the tree.

This is important. Nothing slows development to a crawl or causes volunteer contributors to leave in droves quite like a codebase that is too broken to develop in.

But wouldn’t it be better to run these builds and tests before the code makes it to mozilla-central and has potentially-disastrous consequences?

That’s where Try comes into play.

Try will run any or all of the builds and tests that code getting into mozilla-central would go through, without having to wait until it is pushed to mozilla-central to do it. All you need to access Try is commit level 1.

Sounds great! Why don’t we run every code change through the whole battery of builds and tests to be sure nothing gets missed?

Well, if you’ve ever built Firefox (it’s really easy) you’ll have noticed that it takes some time. Not a lot of time, but some. During that time, your computer is going full tilt trying to get it all done.

Multiply this by 39 build configurations, and you start to see where I’m heading with this.

Running the builds takes a lot of computers a lot of time. Running the tests on top of it only increases the resource requirement.

And this brings me back to the High Scores. Give a bunch of nerds (Mozillians) something countable and we can turn it into points. Give them points and identifiers, and you get a High Scores List.

The more you use Try, the more computer hours you use. The more computer hours you use, the higher your email address rises on the list. Get within sight of the top, and you might just get an email from a Release Engineer asking you why you think you need to build every platform for the documentation typo you fixed.
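Scoping a push to only the builds and tests you need was done (in the 2015 workflow) with "try syntax" in the commit message. A minimal sketch, assuming a Mercurial checkout with the try repository configured; the chosen platform and suite are illustrative, not a recommendation:

```shell
# Request only an opt linux64 build plus mochitests, and no Talos runs,
# instead of all 39 build configurations:
hg commit -m "try: -b o -p linux64 -u mochitests -t none"
hg push -f try
```

Here `-b` selects debug/opt builds, `-p` the platforms, `-u` the unit test suites, and `-t` the Talos suites; `-t none` avoids scheduling performance runs you don't need.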

Faced with the staggering variety of different build platforms and test suites, how do you learn which ones you need to run and which ones you don’t?

Mozillians are friendly, so you ask. And ask. And keep asking until you gain confidence. Then you make a mistake and start asking again.

And every time you ask, you get an answer. Or at least that’s been my experience of working with Mozillians.

:chutten


https://chuttenblog.wordpress.com/2015/12/04/a-high-score-list-you-dont-want-to-top/


Hub Figuière: Let's encrypt all the things

Friday, 04 December 2015, 18:00

Now that letsencrypt is more widely released, I took the opportunity to generate the certificates and install them manually on my hosting. In the future I will flip the switch to force HTTPS here. For now I made sure to avoid mixed content as much as I could.
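The manual generation step looks roughly like this; a sketch assuming the 2015-era letsencrypt client, with a placeholder domain (the exact invocation for a given host may differ):

```shell
# Prove control of the domain interactively, without letting the client
# touch the web server configuration, then install the files by hand:
letsencrypt certonly --manual -d example.net
# The certificate and private key land under /etc/letsencrypt/live/example.net/
```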

This was long overdue.

PS: I forgot to thank @CorySolovewicz, who helped on Twitter with the problem of an "invalid" private key.

http://www.figuiere.net/hub/blog/?2015/12/04/855-let-s-encrypt-all-the-things


Support.Mozilla.Org: What’s up with SUMO – 4th December

Friday, 04 December 2015, 16:53

Hello, SUMO Nation!

More exciting (well, that depends on how easily excited you get) news from the world of SUMO coming your way, live and direct from the internet :-)

Welcome, new contributors!

If you joined us recently, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • …is going to be announced when we know it’s actually happening! Next week we’re on the road, and the week after that will probably see a lot of people AFK, recharging their batteries for the end of the year.
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Developers

Community

  • All of next week most of us will be meeting during Mozlando. On the agenda, among other things….
    • Following up on our meetings and goals set in Whistler
    • Storyboarding the l10n and support forums experience
    • Meeting with the Firefox Sync team
    • L10n style guide creation session
    • Looking into the future – SUMO 2016
    • Building inclusive teams and communities
    • SUMO Product Leads discussion
    • …and possibly more ;-)
    • You can expect a report from all this on the forums (and the blog) some time after we’re all safely back home.
  • The Ivory Coast community is organizing a Mozilla / SUMO event tomorrow! Fingers crossed for a great day for Abbackar & the crew there :-)
  • Meanwhile, Mozillians in Taipei had to reschedule their event to December 19th, but are still going strong – thanks!
  • Reminder: Are you interested in creating more Canned Replies for our Army of Awesome? Click here!
  • Ongoing reminder: if you think you can benefit from getting a second-hand device to help you with contributing to SUMO, you know where to find us.

Support Forum

  • No major updates. Smooth sailing!

Knowledge Base

Localization

  • A warm welcome to Albert, the new Locale Leader for Portuguese, joining Cláudio and the awesome Portuguese localizers out there!
  • UI localization: the majority of locales from Verbatim have been migrated to Pontoon, while some are going to be using Pootle (when it launches). No issues have been reported so far. Huge thanks to Matjaz & the Pontoon team for making this a very smooth migration! (if you do notice an issue, file a Pontoon bug through here)
  • Milestone announcements are now visible on Localization Dashboards for most locales present on SUMO. If you don’t see any on your dashboard, it most likely means your locale is in a great state and you should be proud of that – thanks!

Firefox

  • for Android
    • Articles for version 43 are almost done.
    • Battery issues reported with several devices may cause a Firefox 42.0.2 release.
  • for iOS
    • No major updates – keep testing, reporting bugs, answering questions, and generally rocking the iOS with Firefox!
  • Firefox OS
    • No major updates.

Next week, the post may be a bit shorter, but may include some fun photos. We’ll see how that works out ;-) To everyone hitting the road in the coming days: safe travels!

We will see you all soon. Have a great weekend!

https://blog.mozilla.org/sumo/2015/12/04/whats-up-with-sumo-4th-december/


Daniel Stenberg: Everything curl – work in progress

Friday, 04 December 2015, 16:05

[Image: “Everything curl” book cover]

… is a book I’m slowly working on. Click the image above to see it in its current state. It is not complete.

As the title should hint, I intend to cover just about everything that is to say about curl. The project, the products, the development, the source code, its history, its future, the policies, the ideas and whatever else that I can think of has anything to do with curl.

The book is completely open and available for free – in a variety of formats. When I write this, there are about 60 pages and almost 13,000 words written. There are 220+ sections or sub-chapters planned (so far), out of which 111 are still to be written. Of course that doesn’t really mean that the 115 already written ones are complete or without flaws that need to be corrected. I also suspect I’ve written the easiest ones first…

I welcome and encourage all the help I can get. The source is all written in Markdown, and everything is on GitHub. File issues, send pull requests or whatever you can think of!
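A contribution could be sketched like this; the repository URL and branch name are assumptions for illustration, so check the project page for the canonical location:

```shell
# Clone the book, fix something in its Markdown source,
# and push a branch to open a pull request from:
git clone https://github.com/bagder/everything-curl.git
cd everything-curl
git checkout -b fix-typo
# ...edit a .md file...
git commit -am "Fix a typo"
git push origin fix-typo   # then open the pull request on GitHub
```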

I’m especially interested in getting suggestions for new sections that I haven’t yet thought about. Or sub sections, or examples. Or some fun stories from the wild Internet that you overcame with the help of curl. Or suggestions on where we should insert images (and what images to insert). Or other artworks, like a nicer cover. Anything!

If things go as planned, I will have filled in most of the blanks by summer 2016 and can then offer the complete curl book.

http://daniel.haxx.se/blog/2015/12/04/everything-curl-work-in-progress/


QMO: Firefox 44.0 Aurora Testday, December 11th

Friday, 04 December 2015, 14:05

Hello Mozillians,

We are happy to announce that on Friday, December 11th, we are organizing the Firefox 44.0 Aurora Testday. We will be focusing our testing on the new Add-ons Signing feature and Async Places Transactions (Bookmarks and History). Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!

https://quality.mozilla.org/2015/12/firefox-44-0-aurora-testday-december-11th/


