
Planet Mozilla

Planet Mozilla - https://planet.mozilla.org/



Source information - http://planet.mozilla.org/.
This feed is generated from the open RSS source at http://planet.mozilla.org/rss20.xml and is updated as that source is updated. It may not match the content of the original page. The feed was created automatically at the request of readers of this RSS feed.
For any questions about this service, use the contact information page.


Carsten Book: Results of the Sheriff Survey

Wednesday, April 1, 2015, 12:50

Hi,

We closed our Sheriff Survey on Monday, and I wanted to share some highlights from the results. Thanks for taking part in the survey!

1. Communication with the Sheriffs

We got very good and positive feedback about interaction and communication with the sheriffs. We know that backouts are never a positive thing, and we sheriffs always assume the best intentions – nobody _wants_ to cause bustage, but it happens.

We also noticed a lot of comments from checkin-needed requestors, and the hope that at some point we will have an autolander system (one that lands patches automatically). There is work being done on this, for example in https://bugzilla.mozilla.org/show_bug.cgi?id=1128039

 

2. Trychooser and other Feedback

We got comments about trychooser and how it could be improved. That feedback is very valuable, and we will pass it on to the RelEng folks. For all feedback and suggestions, we are looking at what we can improve and realize from the survey. As an example, one result is now https://bugzilla.mozilla.org/show_bug.cgi?id=1145836 :)

 

3. Getting Involved!

We got several community members with an interest in helping out with sheriffing! That's really great, and we will follow up here soon. Also, it's not too late to get involved. Just drop me or the sheriff list (sheriffs@mozilla.org) a note!

 

4. You can reach us at any time!

While the survey is closed now, you can still contact us anytime with feedback or questions, or when you want to get involved! Just drop us a note at sheriffs@mozilla.org or ping the sheriff on duty (normally the one with the |sheriffduty tag in #developers on irc.mozilla.org).

Thanks!

 

– Tomcat

https://blog.mozilla.org/tomcat/2015/04/01/results-of-the-sheriff-survey/


Jonas Finnemann Jensen: Playing with Talos in the Cloud

Wednesday, April 1, 2015, 12:30

As part of my goals this quarter I've been experimenting with running Talos in the cloud (Linux only). There are many valid reasons why we're not already doing this. Conventional wisdom dictates that virtualized resources running on hardware shared between multiple users are unlikely to have a consistent performance profile; hence, regression detection becomes unreliable.

Another reason for not running performance tests in the cloud is that a cloud server is very different from a consumer laptop, and changes in performance characteristics may not reflect the end-user experience.

But even when all the reasons for not running performance testing in the cloud have been listed (and I'm sure my list above wasn't exhaustive), there certainly are benefits to using the cloud; on-demand scalability and cost immediately spring to mind. So investigating the possibility of running Talos in the cloud is interesting; if nothing more, it could be used for fast smoke tests.

Comparing Consistency of Instance Types

The first thing to evaluate is the consistency of results depending on instance type, cloud provider and configuration. For the purpose of these experiments I have chosen the following instance types and cloud providers:

  • AWS EC2 (m3.medium, m3.xlarge, m3.2xlarge, c4.large, c4.xlarge, c4.2xlarge, c3.large, c3.xlarge, c3.2xlarge, r3.large, r3.xlarge, g2.2xlarge)
  • Azure (A1, A2, A3, A4, D1, D2, D3, D4)
  • Digital Ocean (1g-1cpu, 2g-2cpu, 4g-2cpu, 8g-4cpu)

For AWS I tested instances in both us-east-1 and us-west-1 to see if there was any difference in results. In each case I used two revisions: c448634fb6c9, which doesn't have any regressions, and fe5c25b8b675, which has clear regressions in the test suites cart and tart. In each case I also ran the tests with both xvfb and xorg configured with dummy video and input drivers.

To ease deployment and ensure that I was using the exact same binaries across all instances, I packaged Talos as a docker image. This also ensured that I could reset the test environment after each Talos invocation. Talos was invoked to run as many of the test suites as I could get working, but for the purpose of this evaluation I'm only considering results from the following suites:

  • tp5o,
  • tart,
  • cart,
  • tsvgr_opacity,
  • tsvgx,
  • tscrollx,
  • tp5o_scroll, and
  • tresize

After running all these test suites for all the configurations of instance type, region and display server enumerated above, we have a lot of data points of the form results(cfg, rev, case) = (r1, r2, ..., rn), where ri is the measurement from the i'th iteration of the Talos test case case.

To compare all this data with the aim of ranking configurations by the consistency of their results, we compute rank(cfg, rev, case) as the number of configurations cfg' for which std(cfg', rev, case) < std(cfg, rev, case), where std is the standard deviation of the results. Informally, we sort configurations by lowest standard deviation for a given case and rev, and the index of a configuration in that sorted list is the rank rank(cfg, rev, case) of the configuration for the given case and rev.

We then finally list configurations by score(cfg), which we compute as the mean of all ranks for the given configuration. Formally we write:

score(cfg) = mean({rank(cfg, rev, case) | for all rev, case})

Credit for this methodology goes to Roberto Vitillo, who also suggested using a trimmed mean, but as it turns out the ordering is pretty much the same.
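To make this concrete, here is a minimal Python sketch of the rank and score computation. It assumes results is a dict mapping (cfg, rev, case) tuples to lists of per-iteration measurements; the data structure and names are illustrative, not the actual analysis code.

    # Minimal sketch of the ranking methodology described above.
    # `results` maps (cfg, rev, case) -> list of measurements (assumed
    # shape; each list needs at least two iterations for stdev).
    from statistics import mean, stdev

    def ranks(results):
        """rank(cfg, rev, case): position of cfg when all configurations
        are sorted by standard deviation for the same (rev, case);
        ties are broken arbitrarily."""
        by_key = {}
        for (cfg, rev, case), rs in results.items():
            by_key.setdefault((rev, case), []).append((stdev(rs), cfg))
        rank = {}
        for (rev, case), entries in by_key.items():
            for i, (_, cfg) in enumerate(sorted(entries)):
                rank[(cfg, rev, case)] = i
        return rank

    def score(results, cfg):
        """score(cfg): mean of rank(cfg, rev, case) over all rev, case."""
        rank = ranks(results)
        return mean(r for (c, _, _), r in rank.items() if c == cfg)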

When listing configurations by score as computed above, we get the following ordered list of configurations. Notice that the score is strictly relative and doesn't really say much; the interesting aspect is the ordering.

Warning: the score and ordering have nothing to do with performance. They strictly consider consistency of results from a Talos perspective. This is not a comparison of cloud performance!

Provider:       InstanceType:   Region:     Display:  Score:
aws,            c4.large,       us-west-1,  xorg,     11.04
aws,            c4.large,       us-west-1,  xvfb,     11.43
aws,            c4.2xlarge,     us-west-1,  xorg,     12.46
aws,            c4.large,       us-east-1,  xorg,     13.24
aws,            c4.large,       us-east-1,  xvfb,     13.73
aws,            c4.2xlarge,     us-west-1,  xvfb,     13.96
aws,            c4.2xlarge,     us-east-1,  xorg,     14.88
aws,            c4.2xlarge,     us-east-1,  xvfb,     15.27
aws,            c3.large,       us-west-1,  xorg,     17.81
aws,            c3.2xlarge,     us-west-1,  xvfb,     18.11
aws,            c3.large,       us-west-1,  xvfb,     18.26
aws,            c3.2xlarge,     us-east-1,  xvfb,     19.23
aws,            r3.large,       us-west-1,  xvfb,     19.24
aws,            r3.large,       us-west-1,  xorg,     19.82
aws,            m3.2xlarge,     us-west-1,  xvfb,     20.03
aws,            c4.xlarge,      us-east-1,  xorg,     20.04
aws,            c4.xlarge,      us-west-1,  xorg,     20.25
aws,            c3.large,       us-east-1,  xorg,     20.47
aws,            c3.2xlarge,     us-east-1,  xorg,     20.94
aws,            c4.xlarge,      us-west-1,  xvfb,     21.15
aws,            c3.large,       us-east-1,  xvfb,     21.25
aws,            m3.2xlarge,     us-east-1,  xorg,     21.67
aws,            m3.2xlarge,     us-west-1,  xorg,     21.68
aws,            c4.xlarge,      us-east-1,  xvfb,     21.90
aws,            m3.2xlarge,     us-east-1,  xvfb,     21.94
aws,            r3.large,       us-east-1,  xorg,     25.04
aws,            g2.2xlarge,     us-east-1,  xorg,     25.45
aws,            r3.large,       us-east-1,  xvfb,     25.66
aws,            c3.xlarge,      us-west-1,  xvfb,     25.80
aws,            g2.2xlarge,     us-west-1,  xorg,     26.32
aws,            c3.xlarge,      us-west-1,  xorg,     26.64
aws,            g2.2xlarge,     us-east-1,  xvfb,     27.06
aws,            c3.xlarge,      us-east-1,  xvfb,     27.35
aws,            g2.2xlarge,     us-west-1,  xvfb,     28.67
aws,            m3.xlarge,      us-east-1,  xvfb,     28.89
aws,            c3.xlarge,      us-east-1,  xorg,     29.67
aws,            r3.xlarge,      us-west-1,  xorg,     29.84
aws,            m3.xlarge,      us-west-1,  xvfb,     29.85
aws,            m3.xlarge,      us-west-1,  xorg,     29.91
aws,            m3.xlarge,      us-east-1,  xorg,     30.08
aws,            r3.xlarge,      us-west-1,  xvfb,     31.02
aws,            r3.xlarge,      us-east-1,  xorg,     32.25
aws,            r3.xlarge,      us-east-1,  xvfb,     32.85
mozilla-inbound-non-pgo,                              35.86
azure,          D2,                         xvfb,     38.75
azure,          D2,                         xorg,     39.34
aws,            m3.medium,      us-west-1,  xvfb,     45.19
aws,            m3.medium,      us-west-1,  xorg,     45.80
aws,            m3.medium,      us-east-1,  xvfb,     47.64
aws,            m3.medium,      us-east-1,  xorg,     48.41
azure,          D3,                         xvfb,     49.06
azure,          D4,                         xorg,     49.89
azure,          D3,                         xorg,     49.91
azure,          D4,                         xvfb,     51.16
azure,          A3,                         xorg,     51.53
azure,          A3,                         xvfb,     53.39
azure,          D1,                         xorg,     55.13
azure,          A2,                         xvfb,     55.86
azure,          D1,                         xvfb,     56.15
azure,          A2,                         xorg,     56.29
azure,          A1,                         xorg,     58.54
azure,          A4,                         xorg,     59.05
azure,          A4,                         xvfb,     59.24
digital-ocean,  4g-2cpu,                    xorg,     61.93
digital-ocean,  4g-2cpu,                    xvfb,     62.29
digital-ocean,  1g-1cpu,                    xvfb,     63.42
digital-ocean,  2g-2cpu,                    xorg,     64.60
digital-ocean,  1g-1cpu,                    xorg,     64.71
digital-ocean,  2g-2cpu,                    xvfb,     66.14
digital-ocean,  8g-4cpu,                    xvfb,     66.53
digital-ocean,  8g-4cpu,                    xorg,     67.03

You may notice that the list above also contains the configuration mozilla-inbound-non-pgo, which has results from our existing infrastructure. It is interesting to see that instances with more CPU exhibit lower standard deviation. This could be because their average run-time is lower, so the standard deviation is also lower. It could also be because they consist of more high-end hardware, SSD disks, etc. Higher-CPU instances could also be producing better results because they always have CPU time available.

However, it's interesting that both Azure and Digital Ocean instances appear to produce much less consistent results, even their high-performance instances. Surprisingly, the data from mozilla-inbound (our existing infrastructure) doesn't appear to be very consistent. Granted, that could just be a bad run; we would need to try more revisions to say anything conclusive about that.

Unsurprisingly, it doesn’t really seem to matter what AWS region we use, which is nice because it just makes our lives that much simpler. Nor does the choice between xorg or xvfb seem to have any effect.

Comparing Consistency Between Instances

Having identified the Amazon c4 and c3 instance types as the most consistent classes, we now proceed to investigate whether results are consistent when they are computed using different instances of the same type. It's well known that EC2 has bad apples (individual machines that perform badly), but this is a natural thing in any large setting. What we are interested in here is what happens when we compare results across different instances.

To do this we take the two revisions again: c448634fb6c9, which doesn't have any regressions, and fe5c25b8b675, which does have a regression in cart and tart. We run Talos tests for both revisions on 30 instances of the same type. For this test I've limited the instance types under consideration to c4.large and c3.large.

After running the tests we now have results of the form results(cfg, inst, rev, suite, case) = (r1, r2, ..., rn), where ri is the result from the i'th iteration of the given test case under the given test suite, revision, configuration and instance. In the previous section we didn't care which suite a test case belonged to. We care about the suite relationship here because we compute the geometric mean of the medians of all test cases per suite. Formally we write:

score(cfg, inst, rev, suite) = geometricMean({median(results(cfg, inst, rev, suite, case)) | for all case})

Credit to Joel Maher for helping figure out how the current infrastructure derives a per-suite performance score for a given revision.
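As a minimal sketch of this computation (same assumed results layout as before; geometric_mean requires Python 3.8+, and the names are illustrative):

    # Per-suite score: geometric mean of the medians of all test cases
    # in the suite, for one configuration/instance/revision.
    from statistics import geometric_mean, median

    def suite_score(results, cfg, inst, rev, suite, cases):
        return geometric_mean(
            median(results[(cfg, inst, rev, suite, case)]) for case in cases
        )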

We then plot the scores for all instances as two bar-chart series, one for each revision, and get the following plots (I've only included three here for brevity). Each pair of bars shows results from one instance on the two revisions; the ordering here is not relevant.

From these two plots it's easy to see that there is a tart regression. We can also clearly see that performance characteristics do vary between instances. Even in the case of tart this is evident, but it's still easy to see the regression.

Now, when we consider the chart for tresize, it's very clear that performance differs between machines, and if a regression here were small it would be hard to see. Most of the other charts are somewhat similar; I've posted a link to all of them below, along with references to the very sketchy scripts and hacks I've employed to run these tests.

Next Steps

While it's hard to conclude anything definitive without more data, it seems that the c4 and c3 instance types offer fairly consistent results. I think the next step is to set up a subset of Talos tests running silently alongside existing tests while comparing results to regressions observed elsewhere.

Hopefully it should be possible to use a small subset of Talos tests to detect some regressions early, rather than having all Talos regressions detected 12 pushes later. Setting this up is not going to be a Q2 goal for me, but I should be able to set it up on TaskCluster in no time; at this point I think it's mostly a configuration issue, since I already have Talos running under docker.

The hard part is analyzing the resulting data and detecting regressions based on it. I tried comparing results with approaches like Student's t-test, but there are still noisy tests that have to be filtered out, although preliminary findings were promising. I suspect it might be easiest to employ some naive form of machine learning and hope that magically solves everything, but we might not have enough training data.
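For illustration, the kind of comparison described might look like the following sketch: a two-sided Welch's t-test between the per-iteration results of two revisions (scipy assumed; the significance threshold is an arbitrary choice, not a tuned value).

    # Flag a likely regression if two revisions' measurements differ
    # significantly (Welch's t-test, i.e. unequal variances assumed).
    from scipy import stats

    def likely_regression(baseline, candidate, alpha=0.01):
        _, p = stats.ttest_ind(baseline, candidate, equal_var=False)
        return p < alpha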

http://jonasfj.dk/blog/2015/04/playing-with-talos-in-the-cloud/


Benoit Girard: Image Decoding on the GPU now in Nightly

Wednesday, April 1, 2015, 11:47

This was posted on April 1st as an April Fools’ Day hoax.

In 2013-2014 a lot of effort was put into moving image decoding to a background thread. However, it became obvious that decoding, even done in parallel off the main thread, was still the critical path for presenting image-heavy pages. The biggest problem we faced was that on B2G, keeping active images uncompressed in main memory was something we simply could not afford on a 128 MB device, even if it was just for visible images.

Enter image decoding on the GPU. The goal is to use the GPU to parallelize the decoding of each visible pixel (and only the visible pixels), instead of just getting per-image parallelization with full image decodes. However, the biggest advantage comes from the reduced GPU upload bandwidth, from being able to upload a compressed texture instead of a large 32-bit RGB bitmap.

We first explored using s3tc compressed textures. However, this still required us to decode the image and re-compress it to s3tc on the CPU, thus regressing page load times.

The trick we ended up using instead was providing a texture that is the raw JPEG stream, encoded as a much smaller RGB texture plane. Using a clever shader, we sample from the compressed JPEG stream when compositing the texture to the frame buffer. This means that we never have to fit the uncompressed texture in main memory, so pages that would normally cause a memory usage spike leading to an OOM no longer have any memory spike at all.

GPU Image Decoding

The non-trivial bit was designing a shader that can sample from a JPEG texture and composite the decompressed results on the fly, without any GPU driver modification. We bind a 3D LUT texture to the second texture unit to perform some approximations in the DCT lookup to speed up the shader units; this requires a single 64 KB 3D lookup texture that is shared for the whole system. The challenging part of this project, however, is taking the texture coordinates S and T and looking up the relevant DCT in the JPEG stream. Since the JPEG stream uses Huffman encoding, it's not trivial to map an (x, y) coordinate from the decompressed image to a position in the stream. For the lookup, our technique uses the work of D. Charles et al.


https://benoitgirard.wordpress.com/2015/04/01/image-decoding-on-the-gpu-now-in-nightly/


Joshua Cranmer: Breaking news

Wednesday, April 1, 2015, 10:00
It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

http://quetzalcoatal.blogspot.com/2015/04/breaking-news.html


Yunier José Sosa Vázquez: New version of Firefox adds more security

Wednesday, April 1, 2015, 06:35

Firefox Update

Today Mozilla released a new Firefox, and you can now download it from our Downloads section.

As the title says, the highlights among the new features are additions that will make your browsing more secure. Among them you can find:

  • HTTP traffic is opportunistically encrypted when the server supports HTTP/2 AltSvc.
  • Improved protection against site impersonation via centralized certificate revocation with OneCRL.
  • Insecure TLS version fallback disabled for site security.
  • SSL error reporting has been extended to report non-certificate errors.
  • Improved certificate and TLS communication security by removing support for DSA.

From now on, Firefox in Turkish uses Yandex as its default search engine, and Bing uses HTTPS for secure searches.

Developers can now debug tabs open in Chrome for desktop and mobile, and in Safari for iOS, thanks to the Valence project; debug chrome:// and about:// URIs from the panel; and support has been added for the CSS property display:contents.

Meanwhile, Firefox for Android now speaks Albanian [sq], Burmese [my], Lower Sorbian [dsb], Songhai [son], Upper Sorbian [hsb] and Uzbek [uz].

If you want to know more, you can read the release notes (in English).

A clarification about the mobile version.

In the downloads you can find three versions for Android. The file containing i386 is for devices with Intel architecture. Of the ones named arm, the one that says api11 works with Honeycomb (3.0) or later, and the api9 one is for Gingerbread (2.3).

You can get this version from our Downloads section in Spanish and English for Linux, Mac, Windows and Android. Remember that to browse through proxy servers you must set the preference network.negotiate-auth.allow-insecure-ntlm-v1 to true in about:config.

http://firefoxmania.uci.cu/nueva-version-de-firefox-anade-mas-seguridad/


Christian Heilmann: Redact.js – having 60FPS across devices made simple

Wednesday, April 1, 2015, 02:27

A lot of people are releasing amazing frameworks these days, so I thought I should have a go at an opinionated micro framework, too.

Redact.js logo

Redact.js gives you really fast-performing JS apps across devices, on desktop and mobile. The framework is only a few bytes and uses gulp to get minified and ready to use.

The main trick is to avoid rendering HTML before the user interacts with it. In many cases this happens purely by error, because some JS wasn't loaded. I thought, however: why not seize this opportunity?

Read more about Redact.js on GitHub and download the source to use it in your own solutions.

Built with love in London, England, where it is now 00:27 on April 1st.

http://christianheilmann.com/2015/04/01/redact-js-having-60fps-across-devices-made-simple/


Justin Crawford: Experiments: Services

Wednesday, April 1, 2015, 02:00

The vision for our Services products is to bring the power of MDN directly into professional web developers' daily coding environments. Experiments in this area will take the form of web services built iteratively. Each iteration should either attract enthusiastic users or provide market insight that helps guide the next iteration.

In addition to exploring the market for developer services, these experiments will also explore new architectures, form factors and contribution pathways for MDN’s information products.

Four services have been identified for exploration so far.

1. Compatibility Data service

The compatibility data service (a.k.a. Browsercompat.org) is a read/write API intended to replace the tables of compatibility data that currently accompany many features in MDN's reference documentation. The project is justified for maintenance reasons alone: unstructured compatibility data on MDN is very difficult to keep current because it requires editors to maintain every page (in every language) where identical data might appear. It also offers a fantastic opportunity to answer several questions likely to recur in MDN's future evolution:

  • Can we maintain so-called “micro-services” without creating significant engineering overhead?
  • Can we build reasonable contribution pathways and monitoring processes around structured data residing in a separate data store?
  • Is the effort involved in “decomposing” MDN’s wiki content into structured APIs justified by the improvement in reusability such services provide?

These questions are essential to understand as we move toward a future where data are increasingly structured and delivered contextually.

Status of this experiment: Underway and progressing. Must achieve several major milestones before it can be assessed.

2. Security scanning service

In surveys run in Q4 2014 and Q1 2015, a large number of MDN visitors said they currently do not use a security scanning service but are motivated to do so. This experiment will give many more web developers access to security scanning tools. It will answer these questions:

  • Can we disrupt the security scanning space with tools aimed at individual web developers?
  • Can we help more web developers make their web sites more secure by providing services in a more familiar form factor?
  • Is there value in releasing services for web developers under the MDN brand?

Status of this experiment: Underway and progressing toward an MVP release. Must achieve several major milestones before it can be assessed.

3. Compatibility scanning service

In the surveys mentioned above, a large number of MDN visitors said they currently do not use a compatibility scanning tool but are motivated to do so. This experiment will build such a tool using a variety of existing libraries. It will answer these questions:

  • Are web developers enthusiastic about using a tool that promises to make their web sites more compatible across devices?
  • What form factor is most effective?
  • Can we successfully create automation from MDN’s information products and contribution workflows?

Status of this experiment: MVP planned for Q2/Q3 2015.

4. Accessibility scanning service

Also in the surveys mentioned above, a large number of MDN visitors said they currently do not scan for accessibility but are motivated to do so. This experiment will build an accessibility scanning service that helps answer the questions above, as well as:

  • If the tool fits into their workflow, will more developers make their web sites more accessible?

Status of this experiment: MVP planned for Q2/Q3 2015.

The market success of any of the latter three services would make possible an additional experiment:

5. Revenue

Professional web developers are accustomed to paying for services that increase their capacity to deliver high-quality professional results. The success of such services as Github, Heroku, NewRelic and many others is evidence of this.

MDN services that bring the high quality of MDN into professional web developers' workflows may be valuable enough to generate revenue for Mozilla. A number of important milestones must be reached before this is feasible, such as…

  • Market demand for services built
  • Community discussion about paid services under the MDN banner
  • Analysis of appropriate pricing and terms
  • Integration with payment systems

In other words, this cannot happen until services prove themselves valuable. Meanwhile, simply discussing it is an experiment: Is the possibility of MDN generating revenue with valuable developer-facing services conceivable?

http://hoosteeno.com/2015/03/31/experiments-services/


Justin Crawford: Experiments: Reference

Wednesday, April 1, 2015, 02:00

The vision of MDN’s Reference product is to use the power of MDN to build the most accessible, authoritative source of information about web standard technologies for web developers. Accomplishing this vision means optimizing and improving on the product’s present success.

Optimization requires measurement, and MDN’s current measurements need improvement. Below I describe two measurement improvements underway, plus a few optimization experiments:

1. Helpfulness Ratings

Information quality is the essential feature of any reference, but MDN currently does not implement direct quality measures. Bug 1032455 hypothesizes that MDN’s audience would provide qualitative feedback that will help measure and improve MDN’s content quality. But qualitative feedback is a new feature on MDN that we need to explore. Comment 37 on that bug suggests that we use a 3rd-party “micro-survey” widget to help us understand how to get the most from this mechanism before we implement it in our own codebase. The widget will help us answer these critical questions:

  • How can we convince readers to rate content? (We can experiment with different calls to action in the widget.)
  • How do we make sense of ratings? (We can tune the questions in the widget until their responses give us actionable information.)
  • How can we use those ratings to improve content? (We can design a process that turns good information gleaned from the widget into a set of content improvement opportunities; we can solicit contributor help with those opportunities.)
  • How will we know it is working? (We can review revisions before and after the widget’s introduction; our own qualitative assessment should be enough to validate whether a qualitative feedback mechanism is worth more investment.)

If the 3rd-party widget and lightweight processes we build around it make measurable improvements, we may wish to invest more heavily into…

  • a proprietary micro-survey tool
  • dashboards for content improvement opportunities
  • integration with MDN analytics tools

Status of this experiment: MDN’s product council has agreed with the proposal and vendor review bugs for the 3rd party tool are filed.

2. Metrics Dashboard

In an earlier post I depicted the state of MDN's metrics with this illustration:

metrics_status

The short summary of this is that MDN has not implemented sufficient measures to make good data-driven decisions, and doesn't have any location to house most of those measurements. Bug 1133071 hypothesizes that creating a place to visualize metrics will help us identify new opportunities for improvement. With a metrics dashboard we can answer these questions:

  • What metrics should be on a metrics dashboard?
  • Who should have access to it?
  • What metrics are most valuable for measuring the success of our products?
  • How can we directly affect the metrics we care about?

Status of this experiment: At the 2015 Hack on MDN meetup, this idea was pitched and undertaken. A pull request attached to bug 973612 includes code to extract data from the MDN platform and add it to Elasticsearch. Upcoming bugs will create periodic jobs to populate the Elasticsearch index, create a Kibana dashboard for the data and add it (via iframe) to a page on MDN.
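As a rough illustration of the extraction step (the index name, document fields and metric are hypothetical, not the actual MDN code), pushing one metric document into Elasticsearch with the official Python client might look like:

    # Hypothetical sketch: index one day's value of a metric so a
    # Kibana dashboard can chart it. Field names are illustrative.
    from datetime import date
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])  # assumed local cluster
    es.index(
        index="mdn-metrics",
        doc_type="metric",  # required by Elasticsearch 1.x-era clients
        body={
            "name": "wiki_revisions",
            "value": 1234,
            "date": date.today().isoformat(),
        },
    )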

3. Social Sharing

For user-generated content sites like MDN, social media is an essential driver of traffic. People visiting a page may be likely to share the page with their social networks, and those shares will drive more traffic to MDN. But MDN lacks a social sharing widget (among other things common to user-generated content sites):

feature_status

Bug 875062 hypothesizes that adding a social sharing widget to MDN's reference pages could create 20 times more social sharing than MDN's current average. Since that bug was filed, MDN has seen some validation of this via the Fellowship page: that page included a social sharing link at the bottom that generated 10 times as many shares as MDN's average. This experiment will test social sharing and answer questions such as…

  • What placement/design is the most powerful?
  • What pages get the most shares and which shares get the most interaction?
  • Can we derive anything meaningful from the things people say when they share MDN links?

Status of this experiment: The code for social sharing has been integrated into the MDN platform behind a feature flag. Bug 1145630 proposes to split-test placement and design to determine the optimal location before final implementation.

4. Interactive Code Samples

Popular online code sandboxes like Codepen.io and JSFiddle let users quickly experiment with code and see its effects. Some of MDN’s competitors also implement such a feature. Surveys indicate that MDN’s audience considers this a gap in MDN’s features. Anecdotes indicate that learners consider this feature essential to learning. Contributors also might benefit from using a code sandbox for composing examples since such tools provide validation and testing opportunities.

These factors suggest that MDN should implement interactive code samples, but they imply a multitude of use cases that do not completely overlap. Bug 1148743 proposes to start with a lightweight implementation serving one use case and expand to more as we learn more. It will create a way for viewers of a code sample in MDN to open the sample in JSFiddle. This experiment will answer these questions:

  • Do people use the feature?
  • Who uses it?
  • How long do they spend tinkering with code in the sandbox?
  • Was it helpful to them?

The 3rd party widget required for the Helpfulness Ratings experiment can power the qualitative assessment necessary to know how this feature performs with MDN’s various audiences. If it is successful, future investment in this specific approach (or another similar approach) could…

  • Allow editors of a page to open samples in JSFiddle from the editing interface
  • Allow editors of a sample to save it to an MDN page
  • Create learning exercises that implement the sandbox

Status of this experiment: A pull request attached to Bug 1148743 will make this available for testing by administrators.

5. Akismet spam integration

Since late 2014 MDN has been the victim of a persistent spam attack. Triaging this spam is a constant vigil for MDN contributors and staff. Most of the spam is blatant: It seems likely that a heuristic spam detection application could spare the human triage team some work. Bug 1124358 hypothesizes that Akismet, a popular spam prevention tool, might be up to the task. Implementing this bug will answer just one question:

  • Can Akismet accurately flag spam posts like the ones MDN’s triage team handles, without improperly flagging valid content?

Status of this experiment: Proposed. MDN fans and contributors with API development experience are encouraged to reach out!
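For a sense of what answering that question involves, here is a hedged sketch of a comment-check call. The endpoint and form fields follow Akismet's public REST API; the helper name and wiring are illustrative, not MDN platform code.

    # Ask Akismet's comment-check endpoint whether content looks like
    # spam. Akismet answers with the literal string "true" for spam.
    import requests

    def looks_like_spam(api_key, blog_url, user_ip, user_agent, content):
        resp = requests.post(
            "https://%s.rest.akismet.com/1.1/comment-check" % api_key,
            data={
                "blog": blog_url,
                "user_ip": user_ip,
                "user_agent": user_agent,
                "comment_type": "comment",
                "comment_content": content,
            },
        )
        return resp.text == "true"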

http://hoosteeno.com/2015/03/31/experiments-reference/


Monica Chew: Two Short Stories about Tracking Protection

Wednesday, April 1, 2015, 01:00
Here are two slide decks I made about why online tracking is a privacy concern, and a metaphor for how tracking works.

http://monica-at-mozilla.blogspot.com/2015/03/two-short-stories-about-tracking.html


Nathan Froyd: tsan bug finding update

Tuesday, March 31, 2015, 23:33

At the beginning of Q1, I set a goal to investigate races with Thread Sanitizer and to fix the “top” 10 races discovered with the tool.  Ten races seemed like a conservative number; we didn’t know how many races there were, their impact, or how difficult fixing them would be.  We also weren’t sure how much participation we could count on from area experts, and it might have turned out that I would spend a significant amount of time figuring out what was going on with the races and fixing them myself.

I’m happy to report that according to Bugzilla, nearly 30 of the races reported this quarter have been fixed.  Folks in the networking stack, JavaScript GC, JavaScript JIT, and ImageLib have been super-responsive in addressing problems.  There are even a few bugs in the afore-linked query that have been fixed by folks on the WebRTC team running TSan or similar thread-safety tools (Clang’s Thread Safety Analysis, to be specific) to detect bugs themselves, which is heartening to see.  And it’s also worth mentioning that at least one of the unfixed DOM media bugs detected by TSan has seen some significant threadsafety refactoring work in its dependent bugs.  All this attention to data races has been very encouraging.

I plan on continuing the TSan runs in Q2 along with getting TSan-on-Firefox working with more-or-less current versions of Clang.  Having to download specific Subversion revisions of the tools or precompiled Clang 3.3 (!) binaries to make TSan work is discouraging, and that could obviously work much better.

https://blog.mozilla.org/nfroyd/2015/03/31/tsan-bug-finding-update/


Chris Cooper: The changing face of buildduty

Tuesday, March 31, 2015, 20:22

Buildduty is the friendly face of Mozilla release engineering that contributors see first. Whether you need a production machine to debug a failure, your try server push is slow, or hg is on the fritz, we’re the ones who dig in, help, and find out why. The buildduty role is almost entirely operational and interrupt-driven: we respond to requests as they come in.

We think it's important for everyone in release engineering to rotate through the role so they can see how the various systems interact (and fail!) in production. It also allows them to forge the relationships with other teams — developers, sheriffs, release management, developer services, IT — necessary to fix failures when they occur. The challenge has been finding a suitable tenure for buildduty that allows us to make quantifiable improvements in the process.

Originally the tenure for buildduty was one week. This proved to be too short, and often conflicted with other work. Sometimes a big outage would make it impossible to tackle any other buildduty tasks in a given week. Some people were more conscientious than others about performing all of the buildduty tasks. Work tended to pile up until one of those conscientious people cycled through buildduty again. There were enough people on the team that each person might not be on buildduty more than once a quarter. One week was not long enough to become proficient at any of the buildduty tasks. We made almost no progress on process during this time, and our backlog of work grew.

In September of last year, we made buildduty a quarter-long (3-month) commitment. This made it easy to plan quarterly goals for the people involved in buildduty, but also proved hard to swallow for release engineers who were more used to doing development work than operational work. Three months was too long, and had the potential to burn people out.

One surprising development was that even though the duration of buildduty was longer, it didn’t necessarily translate to process improvements. The volume of interrupts has been quite high over the past 6 months, so despite some standout innovations in machine health monitoring and reconfig automation, many buildduty focus areas still lack proper tooling.

Now we’re trying something different.

For the next 3 months, we’ve changed the buildduty tenure to be one month. This will allow more release engineers to rotate through the position more quickly, but hopefully still give them each enough time in the role to become proficient.

To address the tooling deficiency, we also created an adjunct role called “buildduty tools.” The buildduty person from one month will automatically rotate into the buildduty tools role for the following month. While in the buildduty tools role, you assist the front-line buildduty person as required, but primarily you write tools or fix bugs that you wish had existed when you were doing the front-line support the month before.

Hopefully this will prove to be the “Goldilocks” zone for buildduty.

Without further ado, here’s the buildduty schedule for Q2:

  • April: Massimo, with Callek in buildduty tools
  • May 1-22: Selena, with Massimo in buildduty tools
  • May 25-June 19: Kim, with Selena in buildduty tools
  • June 22-30: me. I’m not going to Whistler, so I’ll be back-stopping buildduty while everyone else is in BC.

This is also in the shared buildduty Google calendar.

Callek will be covering afternoons PT for Massimo in April because, honestly, that’s when most of the action happens anyway, and it would be irresponsible to not have coverage during that part of the day.

Massimo starts his buildduty tenure tomorrow. It’s hard but rewarding work. Please be gentle as he finds his feet.

http://coop.deadsquid.com/2015/03/the-changing-face-of-buildduty/


Air Mozilla: Martes mozilleros

Tuesday, March 31, 2015, 19:00

Martes mozilleros: a bi-weekly meeting to talk about the state of Mozilla, the community and its projects.

https://air.mozilla.org/martes-mozilleros-20150331/


The Mozilla Blog: New Firefox Releases Now Available

Tuesday, March 31, 2015, 18:45

New versions of Firefox for Windows, Mac, Linux and Android are now available to update or download. For more info on what’s new in Firefox, please see the release notes for Firefox and Firefox for Android.

https://blog.mozilla.org/blog/2015/03/31/new-firefox-releases-now-available/


Mozilla Science Lab: New Workshop on Negative Results in e-Science

Tuesday, March 31, 2015, 18:00

This guest post is by Ketan Maheshwari, Daniel S. Katz, Justin Wozniak, Silvia Delgado Olabarriaga, and Douglas Thain on the ERROR Conference, 3 September 2015.

Introduction

Edison performed 10,000 failed experiments before successfully creating the long-lasting electric light bulb. While Edison meticulously kept a list of failed experiments, wider dissemination of earlier failures might have led to a quicker invention of the bulb and related technologies. Scientists learn a great deal from their own mistakes, as well as from the mistakes of others. The pervasive use of computing in science, or "e-science," is fraught with complexity and is extremely sensitive to technical difficulties, leading to many missteps and mistakes. Our new workshop intends to treat this as a first-class problem by focusing on the hard cases where computing broke down. We believe that computational processes or experiments that yielded negative results can be a source of information for others to learn from.

Why it’s time for this workshop

  1. Publicizing negative results leads to quicker and more critical evaluation of new techniques, tools, technologies, and ideas by the community.
  2. Negative results and related issues are real and happen frequently. A publication bias towards positive results hurts progress since not enough people learn from these experiences.
  3. We want to get something valuable out of failed experiments and processes. This redeems costs, time and agony. Analysis of these failures helps narrow down possible causes and hastens progress.
  4. We want to promote a culture of accepting, analyzing, communicating and learning from negative results.

The ERROR Workshop

The 1st E-science ReseaRch leading tO negative Results (ERROR) workshop (https://press3.mcs.anl.gov/errorworkshop), to be held in conjunction with the 11th IEEE International Conference on eScience (http://escience2015.mnm-team.org) on 3 September 2015 in Munich, Germany, will provide the community a dedicated and active forum for exchanging cross-discipline experiences in research leading to negative results.

The ERROR workshop aims to provide a forum for researchers who have invested significant effort in a piece of work that failed to bear the expected fruit. The focus is on various aspects of negative results, such as premises and assumptions made, divergence between expected and actual outcomes, possible causes and remedial actions to avoid or prevent such situations, and possible course corrections. Both applications and systems areas are covered, including topics in research methodology, reproducibility, the applications/systems interface, resilience, fault tolerance and social problems in computational science, and other relevant areas. We invite original work in an accessible format of 8 pages.

http://mozillascience.org/new-workshop-on-negative-results-in-e-science/


Rail Aliiev: Taskcluster: First Impression

Tuesday, March 31, 2015, 15:47

Good news. We decided to redesign Funsize a little and now it uses Taskcluster!

The nature of Funsize is that we may start hundreds of jobs at the same time, then stop sending new jobs and wait for hours. In other words, the service is very bursty. Elastic Beanstalk is not ideal for this use case. Scaling up and down very fast is hard to configure using EB-only tools. Also, running zero instances is not easy.

I tried using Terraform, Cloud Formation and Auto Scaling, but they were also not well suited. There were too many constraints (e.g. Terraform doesn't support all needed AWS features) and they required considerable bespoke setup/maintenance to auto-scale properly.

The next option was Taskcluster, and I was pleased that its design fitted our requirements very well! I was impressed by the simplicity and flexibility offered.

I have implemented a service which consumes Pulse messages for particular buildbot jobs. For nightly builds, it schedules a task graph with three tasks:

  • generate a partial MAR
  • sign it (at the moment a dummy task)
  • publish to Balrog

All tasks are run inside Docker containers which are published on the docker.com registry (other registries can also be used). The task definition essentially comprises the docker image name and a list of commands it should run (usually this is a single script inside the docker image). In the same task definition you can specify what artifacts should be published by Taskcluster. The artifacts can be public or private.
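For a sense of the shape of such a definition, here is a minimal sketch (written as a Python dict). The field names follow the docker-worker payload as it looked in 2015, but treat this as illustrative rather than a definitive schema; the worker type, image and script are hypothetical.

    # Illustrative task definition: a docker image, the command to run,
    # and the artifacts Taskcluster should publish when the task finishes.
    task = {
        "provisionerId": "aws-provisioner",
        "workerType": "funsize",  # hypothetical worker type
        "created": "2015-03-31T00:00:00Z",
        "deadline": "2015-03-31T02:00:00Z",
        "payload": {
            "image": "example/funsize-update-generator",  # registry image
            "command": ["/runme.sh"],  # usually a single script in the image
            "maxRunTime": 3600,
            "artifacts": {
                "public/build": {  # public artifact; private ones work too
                    "type": "directory",
                    "path": "/home/worker/artifacts",
                    "expires": "2015-04-07T00:00:00Z",
                },
            },
        },
        "metadata": {
            "name": "Funsize partial MAR generation",
            "description": "Generate a partial update MAR",
            "owner": "rail@mozilla.com",
            "source": "http://rail.merail.ca/",
        },
    }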

Things that I really liked

  • Predefined task IDs. This is a great idea! There is no need to talk to the Taskcluster APIs to get the ID (or multiple IDs for task graphs), nor to parse the response. Fire and forget! The task IDs can be used in different places, like artifact URLs, dependent tasks, etc.
  • Task graphs. This is basically a collection of tasks that can be run in parallel and can depend on each other. This is a nice way to declare your jobs and know them in advance. If needed, a task graph can be extended dynamically by its own tasks (decision tasks).
  • Simplicity. All you need to do is generate a valid JSON document and submit it to Taskcluster's HTTP API.
  • User-defined docker images. One of the downsides of Buildbot is that you have a predefined list of slaves with a predefined environment (OS, installed software, etc). Taskcluster leverages Docker by default to let you use your own images.

Things that could be improved

  • Encrypted variables. I spent 2-3 days fighting with the encrypted variables. My scheduler was written in Python, so I tried to use a half dozen different Python PGP libraries, but for some reason all of them were generating an incompatible OpenPGP format that Taskcluster could not understand. This forced me to rewrite the scheduling part in Node.js using openpgpjs. There is a bug to address this problem globally. Also, using ISO time stamps would have saved me hours of time. :)
  • It would be great to have a generic scheduler that doesn't require third party Taskcluster consumers writing their own daemons watching for changes (AMQP, VCS, etc) to generate tasks. This would lower the entry barrier for beginners.

Conclusion

There are many other things that can be improved (and I believe they will be!) - Taskcluster is still a new project. Regardless, it is very flexible and easy to use and develop for. I would recommend using it!

Many thanks to garndt, jonasfj and lightsofapollo for their support!

http://rail.merail.ca/posts/taskcluster-first-impression.html


Justin Crawford: MDN Product Talk: Vision

Tuesday, March 31, 2015, 15:20

As I wrote this post, MDN’s documentation wiki hit a remarkable milestone: For the first time ever MDN saw more than 4 million unique visitors in a single month.

I always tell people, if we have a good quarter it’s because of the work we did three, four, and five years ago. It’s not because we did a good job this quarter.

- Jeff Bezos

http://hoosteeno.com/2015/03/31/mdn-product-talk-vision/


Mark Surman: Building an Academy

Tuesday, March 31, 2015, 13:14

Last December in Portland, I said that Mozilla needs a more ambitious stance on how we teach the web. My argument: the web is at an open vs. closed crossroads, and helping people build know-how and agency is key if we want to take the open path. I began talking about Mozilla needing to do something in 'learning' that can have the scale and impact of Firefox if we want this to happen.

Mozilla Academy

The question is: what does this look like? We’ve begun talking about developing a common approach and brand for all our learning efforts: something like Mozilla University or Mozilla Academy. And we have a Mozilla Learning plan in place this year to advance our work on Webmaker products, Mozilla Clubs (aka Maker Party all year round), and other key building blocks. But we still don’t have a crisp and concrete vision for what all this might add up to. The idea of a global university or academy begins to get us there.

My task this quarter is to take a first cut at this vision — a consolidated approach  for Mozilla’s efforts in learning. My plan is to start a set of conversations that get people involved in this process. The first step is to start to document the things we already know. That’s what this post is.

What’s the opportunity?

First off, why are we even having this conversation? Here’s what we said in the Mozilla Learning three-year plan:

Within 10 years there will be five billion citizens of the web. Mozilla wants all of these people to know what the web can do. What’s possible. We want them to have the agency, tools and know-how they need to unlock the full power of the web. We want them to use the web to make their lives better. We want them to be full citizens of the web.

We wrote this paragraph right before Portland. I'd be interested to hear what people think about it a few months on.

What do we want to build?

The thing is even if we agree that we want everyone to know what the web can do, we may not yet agree on how we get there. My first cut at what we need to build is this:

By 2017, we want to build a Mozilla Academy: a global classroom and lab for the citizens of the web. Part community, part academy, people come to Mozilla to unlock the power of the web for themselves, their organizations and the world.

This language is more opinionated than what’s in the Mozilla Learning plan: it states we want a global classroom and lab. And it suggests a name.

Andrew Sliwinski has pointed out to me that this presupposes we want to engage primarily with people who want to learn, and that we might move toward our goals in other ways, including using our product and marketing to help people 'just figure the right things out' as they use the web. I'd like to see us debate these two paths (and others) as we try to define what it is we need to build. By the way, we also need to debate the name — Mozilla Academy? Mozilla University? Something else?

What do we want people to know?

We’re fairly solid on this part: we want people to know that the web is a platform that belongs to all of us and that we can all use to do nearly anything.

We’ve spent three years developing Mozilla’s web literacy map to describe exactly what we mean by this. It breaks down ‘what we want people know’ into three broad categories:

  • Exploring the web safely and effectively
  • Building things on the web that matter to you and others
  • Participating on the web as a critical, collaborative human

Helping people gain this know-how is partly about practical skills: understanding enough of the technology and mechanics of the web so they can do what they want to do (see below). But it is also about helping people understand that the web is based on a set of values — like sharing information and human expression — that are worth fighting for.

How do we think people learn these things?

Over the last few years, Mozilla and our broader network of partners have been working on what we might call 'open source learning' (my term) or 'creative learning' (Mitch Resnick's term, which is probably better :)). The first principles of this approach include:

  • Learn by making things
  • Make real shit that matters
  • Do it with other people (or at least with others nearby)

There is another element though that should be manifested in our approach to learning, which is something like ‘care about good’ or even ‘care about excellence’ — the idea that people have a sense of what to aspire to and feedback loops that help them know if they are getting there. This is important both for motivation and for actually having the impact on ‘what people know’ that we’re aiming for.

My strong feeling is that this approach needs to be at the heart of all Mozilla’s learning work. It is key to what makes us different than most people who want to teach about the web — and will be key to success in terms of impact and scale. Michelle Thorne did a good post on how we embrace these principles today at Mozilla. We still need to have a conversation about how we apply this approach to everything we do as part of our broader learning effort.

How much do we want people to know?

Ever since we started talking about learning five years ago, people have asked: are you saying that everyone on the planet should be a web developer? The answer is clearly ‘no’. Different people need — and want — to understand the web at different levels. I think of it like this:

  • Literacy: use the web and create basic things
  • Skill: know how that gets you a better job / makes your life better
  • Craft: expert knowledge that you hone over a lifetime

There is also a piece that includes 'leadership' — a commitment and skill level that has you teaching, helping, guiding or inspiring others. This is a fuzzier piece, but very important and something we will explore more deeply as we develop a Mozilla Academy.

We want a way to engage with people at all of these levels. The good news is that we have the seeds of an approach for each. SmartOn is an experiment by our engagement teams to provide mass-scale web literacy in product and through marketing. Mozilla Clubs, Maker Party and our Webmaker apps offer deeper web literacy and basic skills. MDN and others are thinking about teaching web developer skills and craft. Our fellowships do the same, although they use a lab method rather than teaching. What we need now is a common approach and brand like Mozilla Academy that connects all of these activities and speaks to a variety of audiences.

What do we have?

It’s really worth making this point again: we already have much of what we need to build an ambitious learning offering. Some of the things we have or are building include:

We also have an increasingly good reputation among people who care about and fund learning, education and empowerment programs. Partners like the MacArthur Foundation, UNESCO, the National Writing Project and governments in a bunch of countries. Many of these organizations want to work with us to build — and be a part of — a more ambitious approach to teaching people about the web.

What other things are we thinking about?

In addition to the things we have in hand, people across our community are also talking about a whole range of ideas that could fit into something like a Mozilla Academy. Things I’ve heard people talking about include:

  • Basic web literacy for mass market (SmartOn)
  • Web literacy marketing campaigns with operators
  • Making and learning tools in Firefox (MakerFox)
  • MDN developer conference
  • Curriculum combining MDN + Firefox Dev Edition
  • Developer education program based on Seneca model
  • A network of Mozilla alumni who mentor and coach
  • Ways to help people get jobs based on what they’ve learned
  • Ways to help people make money based on what they’ve learned
  • Ways for people to make money teaching and mentoring with Mozilla
  • People teaching in Mozilla spaces on a regular basis
  • Advanced leadership training for our community
  • Full set of badges and credentials

Almost all of these ideas are at a nascent stage. And many of them are extensions or versions of the things we’re already doing, but with an open source learning angle. Nonetheless, the very fact that these conversations are actively happening makes me believe that we have the creativity and ambition we need to build something like a Mozilla Academy.

Who is going to do all this?

There is a set of questions that starts with ‘who is the Mozilla Academy?’ Is it all people who are flag waving, t-shirt donning Mozillians? Or is it a broader group of people loosely connected under the Mozilla banner but doing their own thing?

If you look at the current collection of people working with Mozilla on learning, it’s both. Last year, we had nearly 10,000 contributors working with us on some aspect of this ‘classroom and lab’ concept. Some of these people are Mozilla Reps, Firefox Student Ambassadors and others heavily identified as Mozillians. Others are teachers, librarians, parents, journalists, scientists, activists and others who are inspired by what we’re doing and want to work alongside us. It’s a big tent.

My sense is that this is the same sort of mix we need if we want to grow: we will want a core set of dedicated Mozilla people and a broader set of people working with us in a common way for a common cause. We’ll need a way to connect (and count) all these people: our tools, skills framework and credentials might help. But we don’t need them all to act or do things in exactly the same way. In fact, diversity is likely key to growing the level of scale and impact we want.

Snapping it all together

As I said at the top of this post, we need to boil all this down and snap it into a crisp vision for what Mozilla — and others — will build in the coming years.

My (emerging) plan is to start this with a series of blog posts and online conversations that delve deeper into the topics above. I’m hoping that it won’t just be me blogging — this will work best if others can also riff on what they think are the key questions and opportunities. We did this process as we were defining Webmaker, and it worked well. You can see my summary of that process here.

In addition, I’m going to convene a number of informal roundtables with people who might want to participate and help us build Mozilla Academy. Some of these will happen opportunistically at events like eLearning Africa in Addis and the Open Education Global conference in Banff that are happening over the next couple of months. Others will happen in Mozilla Spaces or in the offices of partner orgs. I’ll write up a facilitation script so other people can organize their own conversations, as well. This will work best if there is a lot of conversation going on.

In addition to blogging, I plan to report out on progress at the Mozilla All-Hands work week in Whistler this June. By then, my hope is that we have a crisp vision that people can debate and get involved in building out. From there, I expect we can start figuring out how to build some of the pieces we’ll need to pull this whole idea together in 2016. If all goes well, we can use MozFest 2015 as a bit of a barn raising to prototype and share out some of these pieces.

Process-wise, we’ll use the Mozilla Learning wiki to track all this. If you write something or are doing an event, post it there. And, if you post in response to my posts, please include a link to the original post so I see the pingback. Tweeting #mozacademy is also a good thing to do, at least until we get a different name.

Join me in building Mozilla Academy. It’s going to be fun. And important.


Filed under: mozilla

https://commonspace.wordpress.com/2015/03/31/building-an-academy/


QMO: Marcela Oniga: open source fan, Linux enthusiast and proud Mozillian

Tuesday, March 31, 2015, 10:00

Marcela Oniga has been involved with Mozilla since 2011. She is from Cluj Napoca, Romania and works as a software quality assurance engineer in Bucharest. She has solid Linux system administration skills, which she drew on to found her own web hosting company providing VPS and managed hosting services. She keeps herself focused by being actively involved in many challenging projects. Volunteering is one of her favorite things to do. In her spare time, she plays ping pong and lawn tennis.

Marcela Oniga is from Romania in eastern Europe.

Hi Marcela! How did you discover the Web?

I guess I discovered the Web when I first installed Firefox. Before that, I had read articles about the Internet in computer magazines.

How did you hear about Mozilla?

I heard about Mozilla in 2010. This was a time when open source conferences and events in Romania were not so popular. Now it is very easy to teach and learn about Mozilla projects; Mozillians are all over.

How and why did you start contributing to Mozilla?

I’m passionate about technology and I’m a big fan of open source philosophy. This is one of the reasons why I founded a non-profit organization called Open Source Open Mind (OSOM) in 2010. OSOM compelled me to contribute to open source projects. Along with other fans of open source, I have organized FLOSS events in many cities across Romania. We organize an annual OSOM conference, through which we support and promote Free, Libre and Open Source Software.

Marcela Oniga speaks at the Open Source Open Mind conference in February 2013.

Ubuntu was my first open source project. I’m a big Ubuntu fan, Ubuntu Evangelist, an active member of Ubuntu LoCo Romania and also part of the Ubuntu-Women project.

In 2011 Ioana Chiorean and many others started to rebuild the tech community in Romania. At that point Mozilla became the obvious choice for me. I knew it was one of the biggest open-source software projects around and that it was very community-driven.

Have you contributed to any other Mozilla projects in any other way?

I performed quality assurance activities for Firefox for Android and Firefox OS. I also contribute a bit to SUMO and the Army of Awesome. I’m a part of WoMoz, through which I attended AdaCamp, an unconference dedicated to increasing gender diversity in open technology and culture.

Marcela Oniga at AdaCamp 2013, San Francisco. During the make-a-thon session she created a robot badge whose eyes are small LEDs.

What’s the contribution you’re the most proud of?

Firefox OS is my favorite project in Mozilla. I’m very excited about having the chance to meet and work with the Firefox OS QA team. I received a lot of support from the team. I’m proud and happy that my small contribution to Firefox OS as a quality assurance engineer matters.

You belong to the Mozilla Romania community. Please tell us more about your community. Is there anything you find particularly interesting or special about it?

We are a bunch of different people with different ideas and views, but we all have the same mission: to grow Mozilla and help the open web.

Marcela Oniga along with Alina Mierlus and Alex Lakatos at the Mozilla Romania booth during the World Fair, Mozilla Summit 2013.

What’s your best memory with your fellow community members?

My best memory is a trip I took back in 2013 with the coolest Mozilla Romania community members. We went to Fundatica, a commune in the historic region of Transylvania. Best weekend ever!

Marcela Oniga on a 2013 trip to Fundatica, Brasov County, Romania with members of the Mozilla Romania community.

What advice would you give to someone who is new and interested in contributing to Mozilla?

I advise everyone to contribute to open source projects, especially to Mozilla. It is an opportunity to learn something new; it’s fun and interesting and you can only gain from it.

Marcela Oniga with other members of WoMoz – Ioana Chiorean, Flore Allemandou and Delphine Lebédel – in front of the San Francisco Bay Bridge in June 2013.

If you had one word or sentence to describe Mozilla, what would it be?

An open-minded community that’s making the web a better place.

What exciting things do you envision for you and Mozilla in the future?

I believe Mozilla’s future is bright. Millions of people around the world will help push the open web forward through amazing open source software and new platforms and tools.

Is there anything else you’d like to say or add to the above questions?

Let’s keep the web open :)


The Mozilla QA and Tech Evangelism teams would like to thank Marcela Oniga for her contributions over the past four years.

Marcela has been a very enthusiastic contributor to the Firefox OS project. She really “thinks like a tester” when she files a bug, and I enjoy looking at the issues she uncovers during her testing. – Marcia Knous


I met Marcela when she invited me to speak at the OSOM conference in Cluj-Napoca, Romania. It was my first time that far into Eastern Europe and I wasn’t too sure what to expect. Granted, I had already met Ioana and a few other Mozilla reps from Romania at the whirlpool of activity that is MozFest, but sadly those were only brief interactions.

However, Marcela shone from day one. She organised everything super efficiently, told me what they needed from me, ensured all my questions were answered promptly and made us feel at home. On the day of the conference, she orchestrated everything and it all just fell into place, as if it were the most natural thing in the world.

Probably the thing I liked most is that she is an accomplished, quiet leader. You don’t need to be loud and immensely popular to be successful. Passion and hard, constant work are what actually matters. Marcela is passionate about doing good and good things, and her dedication is nothing short of spectacular.

I’m equally proud and humbled that she chose to contribute to Mozilla. Thanks for being so excellent, Marcela! – Soledad Penadés

https://quality.mozilla.org/2015/03/marcela-oniga-open-source-fan-linux-enthusiast-and-proud-mozillian/


Byron Jones: bugzilla.mozilla.org’s new look

Tuesday, March 31, 2015, 09:32

this quarter i’ve been working on redesigning how bugs are viewed and edited on bugzilla.mozilla.org — expect large changes to how bmo looks and feels!

unsurprisingly some of the oldest code in bugzilla is that which displays bugs; it has grown organically over time to cope with the many varying requirements of its users worldwide. while there have been ui improvements over time (such as the sandstone skin), we felt it was time to take a step back and start looking at bugzilla with a fresh set of eyes. we wanted something that was designed for mozilla’s workflow, that didn’t look like it was designed last century, and that would provide us with a flexible base upon which we could build further improvements.

a core idea of the design is to load the bug initially in a read-only “view” mode, requiring the user to click on an “edit” button to make most changes. this enables us to defer loading of a lot of data when the page is initially loaded, as well as providing a much cleaner and less overwhelming view of bugs.
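
to make the deferred-loading idea concrete, here’s a minimal sketch of the pattern in typescript. note that the endpoint name, payload shape and element id below are hypothetical illustrations of the technique, not bmo’s actual api:

    // minimal sketch (not bmo's actual code) of loading edit-mode data on demand.
    // the /rest/edit-form-data endpoint and its payload shape are hypothetical.
    interface EditFormData {
      products: string[];
      components: string[];
      versions: string[];
      milestones: string[];
    }

    let editData: EditFormData | null = null;

    async function enterEditMode(bugId: number): Promise<void> {
      // fetch the heavy select-box data only the first time "edit" is clicked,
      // so the initial read-only view stays fast.
      if (editData === null) {
        const response = await fetch(`/rest/edit-form-data?bug=${bugId}`);
        editData = (await response.json()) as EditFormData;
      }
      renderEditForm(editData);
    }

    function renderEditForm(data: EditFormData): void {
      // populate the product/component/version/milestone selects here.
      console.log(`loaded ${data.products.length} products`);
    }

    document.getElementById("edit-button")?.addEventListener("click", () => {
      void enterEditMode(1096798);
    });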

[screenshot: bug-modal-1]

major interface changes include:

  • fields are grouped by function, with summaries of the functional groups where appropriate
  • fields which do not have a value set are not shown
  • an overall “bug summary” panel at the top of the bug should provide an “at a glance” status of the bug

the view/edit mode:

  • allows for deferring of loading data only required while editing a bug (eg. list of all products, components, versions, milestones, etc)
    • this results in 12% faster page loads on my development system
  • still allows for common actions to be performed without needing to switch modes
    • comments can always be added
    • the assignee can change the bug’s status/resolution
    • flag requestee can set flags

[screenshot: bug-modal-2]

you can use it today!

this new view has been deployed to bugzilla.mozilla.org, and you can enable it by setting the user preference “experimental user interface” to “on”.

you can also enable it per-bug by appending &format=modal to the url (eg. https://bugzilla.mozilla.org/show_bug.cgi?id=1096798&format=modal).  once enabled you can disable it per-bug by appending &format=default to the url.
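
as an illustration of that parameter juggling, here’s a small typescript sketch that flips a show_bug url between the two views; it’s just a sketch of the url manipulation, not something bmo ships:

    // sketch: switch a show_bug url between the modal and default views.
    function toggleModalView(href: string): string {
      const url = new URL(href);
      const next = url.searchParams.get("format") === "modal" ? "default" : "modal";
      url.searchParams.set("format", next);
      return url.toString();
    }

    // toggleModalView("https://bugzilla.mozilla.org/show_bug.cgi?id=1096798")
    // => "https://bugzilla.mozilla.org/show_bug.cgi?id=1096798&format=modal"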

what next?

there’s still a lot to be done before there’s feature parity between the new modal view and the current show_bug.  some of the major items missing with the initial release include:

  • cannot edit cc list (cannot remove or add other people)
  • comment previews
  • comment tagging (existing tags are shown, cannot add/delete tags)
  • cc activity is not visible
  • bulk comment collapsing/expanding (all, by tag, tbpl push bot)
  • alternative ordering of comments (eg. newest-first)
  • bmo show_bug extensions (eg mozreview, orange factor, bounty tracking, crash signature rendering)

you can view the complete list of bugs, or file a new bug if you discover something broken or missing that hasn’t already been reported.


Filed under: bmo

https://globau.wordpress.com/2015/03/31/bmo-new-look/


Byron Jones: happy bmo push day!

Tuesday, March 31, 2015, 09:01

the following changes have been pushed to bugzilla.mozilla.org:

  • [1146806] “new bug” menu has a literal “...” instead of a horizontal ellipsis
  • [1146360] remove the winqual bug entry form
  • [1147267] the firefox “iteration” and “points” fields are visible on all products
  • [1146886] after publishing a review with splinter, the ‘edit’ mode doesn’t work
  • [1138767] retry and/or avoid push_notify deadlocks
  • [1147550] Require a user to change their password if they log in and their current password does not meet the password complexity rules
  • [1147738] the “Rank” field label is visible when editing, even if the field itself isn’t
  • [1147740] map format=default to format=__default__
  • [1146762] honour gravatar visibility preference
  • [1146910] Button styles are inconsistent and too plentiful
  • [1146906] remove background gradient from assignee and reporter changes
  • [1125987] asking for review in a restricted bug doesn’t work as expected (“You must provide a reviewer for review requests” instead of “That user cannot access that bug” error)
  • [1149017] differentiate between the bug’s short-desc and the bug’s status summary in the header
  • [1149026] comment/activity buttons are not top-aligned
  • [1141770] merge_users.pl fails if the two accounts have accessed the same bug and is in the bug_interest table
  • [972040] For bugs filed against Trunk, automatically set ‘affected’ release-tracking flags
  • [1149233] Viewing a bug with timetracking information fails: file error – formattimeunit: not found
  • [1149390] “duplicates” are missing from the modal view
  • [1149038] renaming a tracking flag isn’t clearing a memcached cache, resulting in Can’t locate object method “cf_status_thunderbird_esr39” via package “Bugzilla::Bug” errors

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

https://globau.wordpress.com/2015/03/31/happy-bmo-push-day-133/


