Re-dreaming Firefox (1): Firefox Agents

May 29, 2015 § 9 Comments

Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Firefox Agents. Let me emphasise that the features described in this blog post do not exist.

Marcel uses Firefox every day, for quite a number of things.

  • He uses Firefox for fun, for watching videos and playing online games. For this purpose, he has installed a few tools for finding and downloading videos. Also, one of his main search engines is YouTube. Suggested movies? Sure, as long as they are fun.
  • He uses Firefox for social networks. He follows his friends, he searches on Facebook, or Twitter, or Google+. If anything looks fun, or useful, he’d like to be informed.
  • He uses Firefox for managing his bank accounts, his taxes, his health insurance. For this purpose, he has paranoid security settings – to avoid phishing, he can only browse to a few whitelisted websites – and no add-ons. He may be interested in getting information from these few websites, and in security updates, but that’s about it. Also, since Firefox handles all his passwords, it must itself be protected by a password.
  • He uses Firefox to read his Gmail account. And to read his other Gmail account. And he doesn’t want to leak privacy information by doing so on the same Firefox that he’s using for browsing.
  • Oh, and he may also be using Firefox for browsing websites that are sensitive for any kind of reason, whether he’s hunting for gifts for his close family, dating online, chatting with hackers, discussing politics, helping NGOs in sensitive parts of the globe, visiting BitTorrent trackers, consulting a physician through some online service, or, well, anything else that requires privacy. He’d like to perform such browsing with additional anonymity guarantees. This also means locking Firefox with a password.
  • Sometimes, his children or friends borrow his computer and use Firefox, too.

Of course, since Marcel brings his own device to (or from) work, that's the same Firefox that he's using for all of these tasks, and he's probably even doing several of these tasks at the same time.

So, Marcel has a set of contradictory requirements, not to mention that each of his uses of Firefox needs to pass a distinct Jeeves Test. How do we keep him happy nevertheless?

Introducing Firefox Agents

In the rest of this post, I will be calling each of these uses of Firefox an Agent (if we ever implement this feature, it will, of course, be called Persona). Each Agent matches one way you use Firefox. While Firefox may be delivered with a predefined set of Agents, users can easily create new Agents. In the example, Marcel has his “Fun Agent”, his “Social Agent”, his “Work Agent”, etc.

Each Agent is unique and separate:

  • Each Agent has its own icon on Marcel’s menu/desktop/tablet/phone and task list.
  • Each Agent has its own visual identity, to make sure that work-related stuff doesn’t end up accidentally in the Fun Agent.
  • Each Agent has its own set of preferences, bookmarks, remembered passwords, cookies, cache, and add-ons.
  • Each website may be connected to a given Agent, so that links received through Gmail or through Thunderbird, for instance, automatically open with the right Agent.

As a consequence, any technology that comes bundled with Firefox to, for instance, provide search suggestions or any other kind of website suggestions is tied to an Agent. So Marcel's browsing of a dating site, shopping for shoes, or religious activities will not be visible to any of his colleagues looking over his shoulder at his Work Agent, nor will it be tied to either of Marcel's Gmail accounts. This greatly increases the chances of suggestion technologies passing the Jeeves Test.

Agents are also connected:

  • A menu in each Agent, as well as a keyboard shortcut, lets users quickly open/switch to other Agents.
  • When an Agent follows a link to a website that belongs to another Agent, the relevant Agent opens automatically.
  • Bookmarks may be pushed, on demand, from one Agent to another one.
  • Passwords may be pulled, on demand, from one Agent to another one.

How far are we from Agents?

Technologically speaking, Firefox Agents almost exist. Indeed, Firefox has supported Profiles forever, since way before Firefox 1.0. I generally have three instances of Firefox open at the same time (four when I'm doing web development), and it works nicely.

With a few add-ons, you can get almost everything, although not entirely connected together:

  • Profilist helps a lot with switching between profiles, and the dev version adds distinct icons;
  • Firefox Themes implement distinct appearances;
  • there are add-ons implementing whitelist browsing;
  • there are add-ons implementing password-protected Firefox.

A few features are missing, but as you can see, the list is actually quite short:

  • Pushing/pulling passwords and bookmarks between Agents (although that’s a subset of what Firefox Accounts can do).
  • Attaching specific websites to specific Agents (this doesn't seem too difficult to implement; see the sketch below).
  • Connecting this all together.
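
As a purely illustrative sketch (to be clear, none of this exists today), the core of attaching websites to Agents could be as simple as a URL-to-Agent lookup; the names and map entries below are hypothetical:

// Hypothetical sketch: mapping websites to Agents. No such API exists;
// SITE_TO_AGENT and agentForURL are made-up names for illustration.
const SITE_TO_AGENT = new Map([
  ["mail.google.com", "Work"],
  ["www.youtube.com", "Fun"],
  ["www.mybank.example", "Bank"],
]);

function agentForURL(url) {
  let host = new URL(url).hostname;
  return SITE_TO_AGENT.get(host) || "Default";
}

// e.g. agentForURL("https://www.youtube.com/watch?v=...") would return "Fun",
// so the link opens in the Fun Agent rather than the current one.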

What now?

I would like to browse with this Firefox. Would you?

Detecting slow add-ons

May 6, 2015 § 13 Comments

When it is at its best, Firefox is fast. Really, really fast. When things start slowing down, though, using Firefox is much less fun. So, one of the main objectives of the developers of Firefox is making sure that Firefox is and remains as smooth and responsive as humanly possible. There is, however, one thing that can slow down Firefox, and that remains out of the control of its developers: add-ons. Good add-ons are extraordinary, but small coding errors – or sometimes necessary hacks – can quickly drive the performance of Firefox into the ground.

So, how can an add-on developer (or add-on reviewer) find out whether her add-on is fast? Sadly, at the moment, there is not much she can do. Testing certainly helps, and the Profiler is invaluable for pinpointing a slowdown once it has been noticed, but what about the performance of add-ons in everyday use? What about the experience of users?

To solve this issue, we decided to work on a set of tools to help add-on developers and reviewers measure the performance of their add-ons. Oh, and also to let users find out quickly if an add-on is slowing down their everyday experience.


On recent Nightly builds of Firefox, you may now open about:performance to get an overview of the performance cost of add-ons and webpages:

[Screenshot: about:performance listing the performance cost of add-ons and webpages]

The main resources we monitor are:

  • jank, which measures how much the add-on impacts the responsiveness of Firefox. For 60fps performance, jank should always remain ≤ 4. If an add-on regularly causes jank to increase past 6, you should be worried.
  • CPOW, aka blocking cross-process communications, which measures how much the add-on causes Firefox to freeze while waiting for another process to respond. Anything above 0 is bad.

Note that the design of this page is far from stable. I realise it’s not very user-friendly at the moment, so don’t hesitate to file bugs to help us improve it. Also note that, when running with e10s, the page doesn’t display all the useful information. We are working on it.

Add-on Telemetry

Add-on developers and reviewers can now find information on the performance of their add-ons on a dedicated dashboard.

These are real-world performance data, extracted from users' computers. The two histograms available for the time being are:

  • MISBEHAVING_ADDONS_JANK_LEVEL, which measures the jank, as detailed above;
  • MISBEHAVING_ADDONS_CPOW_TIME_MS, which measures the amount of time spent in CPOW, as detailed above.

If you are an add-on developer, you should regularly monitor the performance of your add-on on this page. If you notice suspicious values, you should try to find out what causes these performance issues. Don't hesitate to reach out to us; we will try to help you.
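
As a side note, here is a rough sketch of how one might peek at the local values of these histograms from chrome code. It assumes the histograms are keyed by add-on id; the exact shape of the snapshot is an assumption, so treat this as an illustration rather than a recipe:

// Rough sketch: reading the local add-on performance histograms.
// Assumes keyed histograms (keyed by add-on id); field names may differ.
Components.utils.import("resource://gre/modules/Services.jsm");

let jank = Services.telemetry.getKeyedHistogramById("MISBEHAVING_ADDONS_JANK_LEVEL");
let snapshot = jank.snapshot(); // one entry per add-on id
for (let addonId of Object.keys(snapshot)) {
  console.log(addonId, "jank sum:", snapshot[addonId].sum);
}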

Slow add-on Notification

Add-on developers and reviewers, as well as end-users, are now informed when an add-on causes either jank or CPOW performance issues:

[Screenshot: notification informing the user that an add-on may be slowing down Firefox]

Note that this feature is not ready to ride the trains, and we do not have a specific idea of when it will be made available for users of Aurora/DeveloperEdition. This is partly because the UX is not good enough yet, partly because the thresholds will certainly change, and partly because we want to give add-on developers time to fix any issues before users see a dialog that suggests that an add-on should be uninstalled.

Performance Stats API

By the way, we have an API for accessing performance stats. Very imaginatively, it's called PerformanceStats.jsm. While this API will still change during the coming weeks, you can start playing with it if you are interested. Some add-ons may be able to throttle their performance use based on this data. Also, I hope that, in time, someone will be able to write a version of about:performance much nicer than mine :)
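
To give an idea of what this could look like, here is a hedged sketch of an add-on throttling itself based on these stats. The method and field names (getMonitor, promiseSnapshot, componentsData, jank, longestDuration) are illustrative assumptions based on the current prototype, not a stable API:

// Illustrative sketch only: the PerformanceStats.jsm API is still in flux,
// so getMonitor/promiseSnapshot/componentsData/jank below are assumptions.
Components.utils.import("resource://gre/modules/PerformanceStats.jsm");

const MY_ADDON_ID = "my-addon@example.org"; // placeholder

function shouldThrottle() {
  let monitor = PerformanceStats.getMonitor(["jank", "cpow"]);
  return monitor.promiseSnapshot().then(snapshot => {
    let mine = snapshot.componentsData.find(c => c.addonId == MY_ADDON_ID);
    // If we regularly push the jank level past 6 (see the thresholds above),
    // back off and postpone non-urgent work until the browser is idle again.
    return mine && mine.jank.longestDuration >= 6;
  });
}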

Challenges and work ahead

For the moment, we are in the process of stabilizing the API, its implementation and its performance. In parallel, we are working on making the UX of about:performance more useful. Once both are done, we are going to proceed with adding more measurements, making the code more e10s-friendly and measuring the performance of webpages.

If you are an add-on developer and you feel that your add-on has been mistakenly tagged as slow, or if you have great ideas on how to make this data useful, feel free to ping me, preferably on IRC: you can find me in channel #developers, where I am Yoric.

Je Suis Charlie, but I Am Vigilant

January 13, 2015 § 2 Comments

(This text was initially written for the French-speaking Mozilla Community. Most members of the community haven't had a chance to review or sign it yet.)


I am Charlie. Some of us grew up with Cabu's children's cartoons or Wolinski's willies. Some of us laughed at Charb's cover pages, others much less, and yet others had never even heard of Charlie Hebdo. Despite our differences, from the bottom of our hearts, we are with those who defend Free Speech, the right to discuss, to draw, to make people laugh or cringe.

I am a Cop. Some among us work directly with law enforcement, or ensure the online safety of individuals or organizations. Some of us make their voice heard when legal or executive powers around the world decide that security, convenience or economic interests matter more than the rights of users. All, we salute the police officers murdered or wounded these last few days as they attempted to save innocents.

I am a Jew, or a Muslim, or Anything else. Some among us are Jewish, or Muslim, or Christian, or anything else, and, frankly, most of us don't care who is what. All, we are horrified that, in the 21st century, extremists may still decide to murder innocents, solely because they might be Jewish, and because they had decided to go to the grocery store. All, we are appalled that, in the 21st century, extremists may still decide to attack innocents, just because they might be Muslims, through threats, physical violence or attacks on their places of worship. All, we are shocked whenever opportunists praise murders or violence, or call for hatred of the ones or the others.

I am Collateral. Before we are the Mozilla Community, we are a community of individuals. Any one of us could have been at the front desk of this building, or in the path of that car, hostage or collateral victim of the assassins. Our minute of silence is for the anonymous ones, too.

I am Vigilant. Some of us believe that strong and immediate measures must be taken. Others prefer to wait until the emotion has passed before we can think of an appropriate response. All, we wait to see how the attacks of January 7th and January 9th, 2015 will change our society. All, we remain vigilant, to make sure that, on top of the blood of the dead, our society does not choose to sacrifice Human Rights, Free Speech and Privacy, in the name of a securitarian ideology or opportunistic economic interests.

I am the French-speaking Mozilla Community.

Text edited by myself. List of signatures of the French version.

Je Suis Charlie

January 7, 2015 § Leave a comment


Would you like to learn how to develop Free Software?

November 29, 2014 § Leave a comment

This year, the Mozilla Community is offering in Paris a series of lectures and lab sessions on Free Software development.

On the program:

  • how to join an existing project;
  • how to communicate within a distributed team;
  • how to fund a free software project;
  • code quality;
  • code!
  • (and much more).

For more details, and to register, everything is here.

Note that classes start on December 8!

RFC: We deserve better than runtime warnings

November 20, 2014 § 2 Comments

Consider the following scenario:

  1. Module A prints warnings when it’s used incorrectly;
  2. Module B uses module A correctly;
  3. Some future refactoring of module B starts using module A incorrectly, hence displaying the warnings;
  4. Nobody realises for months, because we have too many warnings;
  5. Eventually, something breaks.

How often has this happened to every one of us?

This scenario has many variants (e.g. module A changed and nobody realized that module B is now misusing module A), but they all boil down to the same thing: runtime warnings are designed to be lost, not fixed. To make things worse, many of our warnings are not actionable, simply because we have no way of knowing where they come from – I'm looking at you, Cu.reportError.

So how do we fix this?

We would certainly save considerable amounts of time if warnings caused immediate assertion failures, or alternatively test failures (i.e. fail, but only when running the unit tests). Unfortunately, we can do neither, as we have a number of tests that trigger the warnings either

  • by design (e.g. to check that we can recover from such misuses of A, or because we still need a now-considered-incorrect use of an API to keep working until we have ported all the clients to the better API);
  • or for historical reasons (e.g. the now incorrect use of A used to be correct, but we haven’t fixed all tests that depend on it yet).

However, I believe that causing test failures is still the solution. We just need a mechanism that supports a form of whitelisting to cope with the aforementioned cases.

Introducing RuntimeAssert

RuntimeAssert is an experiment at providing a standard mechanism to replace warnings. I have a prototype implemented as part of bug 1080457. Feedback would be appreciated.

The key features are the following:

  • when a test suite is running, a call to `RuntimeAssert` causes the test suite to fail;
  • when a test suite is running, a call to `RuntimeAssert` reports at least the filename/line number of where it was triggered, preferably a full stack where available;
  • individual tests can whitelist families of calls to `RuntimeAssert` and mark them as expected;
  • individual tests can whitelist families of calls to `RuntimeAssert` and mark them as pending fix;
  • when a test suite is not running, a call to `RuntimeAssert` does nothing costly (it may default to PR_LOG or Cu.reportError).

Possible API:

  • in JS, we trigger a test failure by calling RuntimeAssert.fail(keyword, string or Error) from production code;
  • in C++, we likewise trigger a test failure by calling MOZ_RUNTIME_ASSERT(keyword, string);
  • in the testsuite, we may whitelist errors by calling Assert.whitelist.expected(keyword, regexp) or Assert.whitelist.FIXME(keyword, regexp).


// Module
let MyModule = {
  oldAPI: function(foo) {
    RuntimeAssert.fail("Deprecation", "Please use MyModule.newAPI instead of MyModule.oldAPI");
    // ...
  },
  newAPI: function(foo) {
    // ...
  },
};

let MyModule2 = {
  api: function() {
    return somePromise().then(null, error => {
      // Rather than leaving this error uncaught, let's make it actionable.
      RuntimeAssert.fail("MyModule2.api", error);
    });
  },

  api2: function(date) {
    if (typeof date == "number") {
      RuntimeAssert.fail("MyModule2.api2", "Passing a number has been deprecated, please pass a Date");
      date = new Date(date);
    }
    // ...
  },
};

// Whitelisting a RuntimeAssert in a test.

// This entire test is about MyModule.oldAPI, warnings are normal.
Assert.whitelist.expected("Deprecation", /Please use MyModule.newAPI/);

// We haven't fixed all calls to MyModule2.api2, so they should still warn, but not cause an orange.
Assert.whitelist.FIXME("MyModule2.api2", /please pass a Date/);

Assert.whitelist.expected("MyModule2.api", /TypeError/, function() {
  // In this test, we will trigger a TypeError in MyModule2.api; that's entirely expected.
  // Ignore such errors within the (async) scope of this function.
});

In the long-term, I believe that RuntimeAssert (or some other mechanism) should replace almost all our calls to Cu.reportError.

In the short-term, I plan to use this for reporting

  • uncaught Promise rejections, which currently require a bit too much hacking for my tastes;
  • errors in XPCOM.lazyModuleGetter & co;
  • failures during AsyncShutdown;
  • deprecation warnings as part of Deprecated.jsm.

The Future of Promise

November 19, 2014 § Leave a comment

If you are writing JavaScript in mozilla-central or in an add-on, or if you are writing WebIDL code, by now, you have probably made use of Promise. You may even have noticed that we now have several implementations of Promise in mozilla-central, and that things are moving fast, and sometimes breaking.

At the moment, we have two active implementations of Promise: Promise.jsm and DOM Promise (as well as a little code using an older, long-deprecated implementation of Promise). This is somewhat confusing, but the good news is that we are working hard at making things simpler and moving everything to DOM Promise.

General Overview

Many components of mozilla-central have been using Promise for several years, way before a standard was adopted, or even discussed. So we had to come up with our own implementation(s) of Promise. These implementations were progressively folded into Promise.jsm, which is now used pervasively in mozilla-central and add-ons.

In parallel, Promise was specified, submitted for standardisation, implemented in Firefox, and finally standardised. This is the second implementation, which we call DOM Promise. It is starting to be used in many places on the web.

Having two implementations of Promise with the same feature set doesn't make sense. Fortunately, Promise.jsm was designed to match the API of the Promise that we believed would be standardised, and was progressively refactored and extended to follow these developments, so both APIs are almost identical.

Our objective is to move entirely to DOM Promise. There are still a few things that need to happen before this is possible, but we are getting close. I hope that we can get there by the end of 2014.

Missing pieces

Debugging and testing

At the moment, Promise.jsm is much better than DOM Promise in two aspects:

  • it is easier to inspect a promise from Promise.jsm for debugging purposes (not anymore, things have been moving fast while I was writing this blog entry);
  • Promise.jsm integrates nicely in the test suite, to make sure that uncaught errors are reported and cause test failures.

On both topics, we are hard at work bringing DOM Promise to feature parity with Promise.jsm and then some (bug 989960, bug 1083361). Most of the patches are in the pipeline already.

API differences

  • Promise.jsm offers an additional function, Promise.defer, which didn't make it to standardization. This function may easily be written on top of DOM Promise (see the sketch below), so it is not a hard blocker. We are going to add it to a module `PromiseUtils.jsm`.
  • Also, there is a slight bug in DOM Promise that gives it a slightly unexpected behavior in a few edge cases. This should not hit developers who use DOM Promise as expected, but it might surprise people who know the exact scheduling algorithm and expect it to be consistent between Promise.jsm and DOM Promise.

Oh, wait, that’s fixed already.
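
For completeness, here is a minimal sketch of how a `defer`-style helper can be written on top of DOM Promise. This is an illustration only; the actual `PromiseUtils.jsm` implementation may differ:

// Minimal defer() sketch on top of DOM Promise (illustrative, not the
// actual PromiseUtils.jsm code).
function defer() {
  let resolve, reject;
  let promise = new Promise((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise: promise, resolve: resolve, reject: reject };
}

// Usage: resolve or reject the promise from outside the executor.
let deferred = defer();
deferred.promise.then(value => console.log("resolved with", value));
deferred.resolve(42);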

Wrapping it up

Once we have done all of this, we will be able to replace Promise.jsm with an empty shell that simply delegates everything to DOM Promise. Eventually, we will deprecate and remove this module.

As a developer, what should I do?

For the moment, you should keep using Promise.jsm, because of the better testing/debugging support. However, please do not use Promise.defer. Rather, use PromiseUtils.defer, which is strictly equivalent but is not going away.

We will inform everyone once DOM Promise becomes the right choice for everything.

If your code doesn't use Promise.defer, migrating to DOM Promise should be as simple as removing the line that imports Promise.jsm in your module.
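
For instance, here is a minimal sketch of what that migration typically looks like; the function and callback names (doAsyncWork, onSuccess, onError) are placeholders:

// Before: the module pulls in Promise.jsm explicitly.
Components.utils.import("resource://gre/modules/Promise.jsm");

function fetchSomething() {
  return doAsyncWork().then(onSuccess, onError); // placeholder names
}

// After: with DOM Promise, simply drop the import; the rest of the code is
// unchanged (assuming it doesn't rely on Promise.defer or on Promise.jsm-specific
// debugging hooks).
function fetchSomething() {
  return doAsyncWork().then(onSuccess, onError);
}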
