Dreaming the Internet of Things

February 17, 2016

One of these days, using the Cloud of OpaqueCompany™, I will be able to set the colour of my lightbulbs by talking to my TV. Somewhere along the way, my house will have become a little more energy-hungry and a little more dependent on the Cloud of OpaqueCompany™. That’s the promise of the Internet of Things. Isn’t that neat? Isn’t that exciting?

Not really. At least, not for me. But, for some reason, whenever I read about that Internet of Things, it is about expensive gadgets that, to me, sound like Christmas commercials: marginally useful, designed by marketers for spoilt westerners, to be consumed then forgotten before the next Christmas shopping spree.

But it doesn’t have to be this way.

I have spent a little time scratching the surface and trying to determine whether there was something more to this Internet of Things, besides the shopping list. I came back convinced that, once you forget the marketing, this Internet of Things can become a revolution as big as the Personal Computer or the World Wide Web – at least if we let it fall into the right hands.

Say you are the owner or manager of a small business, say a restaurant. Chances are that you need a burglar alarm, either because you fear that you are going to be burglarised, or because your insurance requires one. You have two solutions: either you go to a store and buy some off-the-shelf product, or you contract a company, draw up a list of requirements and pay for a custom setup. In either case, you are a consumer, and you are stuck with what you paid for. But needs change. Perhaps the insurance policy now requires you to have an alarm that can call the police automatically. Perhaps neighbours complained about the noise of the alarm and you need to turn it into a silent alarm that rings your cellphone. Perhaps the insurance has changed its policy and now requires you to take pictures of the burglary. Perhaps you have had work done and the small window in the bathroom is now large enough to be used to break in. Or water damage has destroyed one of your sensors and you need to replace it, but the model doesn’t exist anymore. Or you are tired of triggering the alarm whenever you take out the garbage and need to refine the policy. Or your product was linked to a subscription, to call the police on your behalf, but the provider has stopped this service. In any of these cases, you are probably stuck, because your needs have made you a consumer, and you are served only as long as there is a market for your specific need.

Now, consider an alternate universe, in which you just need to walk or drive to the nearest store, buy a few off-the-shelf motion detectors for a few dollars each, and simply attach them in your restaurant, wherever you see fit. They use open standards, so you can install an app to get them to work together or, even better, use your cellphone to script them visually into doing what you need. Do you need to add one or ten, or replace them with different models, or add door-lock sensors? It’s just as easy. Do you need to add a camera? Well, place it and use your cellphone to add that camera to your script. Use your cellphone again to customise the effect: call the police, or ring your cellphone, or deactivate a single alarm between 11pm and 11.30pm, because that’s when you take out the trash. And if your product is linked to a subscription, because it uses open standards, you can switch providers as needed. In this universe, the Internet of Things has put you in control – not a Cloud, not a silo – and drastically cut your costs and your dependencies.

A few months ago, Mozilla started pivoting from smartphones to the Web of Things – that’s the name we give to the Internet of Things done right, with open standards and you in charge, rather than silos and the Cloud of OpaqueCompany™. I can make no promise that we are going to succeed, but I believe in the huge potential of this Web of Things.

By the way, it doesn’t stop at restaurants. The exact same open standards can help you guard against fires in your house or humidity in your server room. Or crowdsource flood detection in cities exposed to flash floods, or automate experiments in a physics lab. Or watch your heartbeat, or listen to your snores. Or determine which part of the village farm most needs irrigation, or which part of the sewers needs the most attention.

Some of these problems already have commercial solutions. But what about your next problem, the one that hasn’t attracted the attention of any company large enough to produce devices specifically for you?

Here is to the Web of Things. Let’s make sure that it falls into the right hands.

Designing the Firefox Performance Monitor (2): Monitoring Add-ons and Webpages

November 6, 2015

In part 1, we discussed the design of time measurement within the Firefox Performance Monitor. Contrary to intuition, the Performance Monitor has neither the same objectives as the Gecko Profiler nor the same constraints, and we ended up picking a design that is not a sampling profiler. In particular, instead of capturing performance data on stacks, the Monitor captures performance data on Groups, a notion that we have not discussed yet. In this part, we will focus on bridging the gap between our low-level instrumentation and the actual add-ons and webpages, as seen by the user.


Designing the Firefox Performance Stats Monitor, part 1: Measuring time without killing battery or performance

October 27, 2015

For a few versions, Firefox Nightly has been monitoring the performance of add-ons, thanks to the Performance Stats API. While we wait for the green light to let it graduate to Firefox Aurora, investigate a few lingering false positives, and steadily approach v2, it is time for a brain dump on this toolbox and its design.

The initial objective of this monitor is to be able to flag both add-ons and webpages that cause noticeable slowdowns, so as to let users disable/close whatever is making their use of Firefox miserable. We also envision more advanced uses that could let us find out whether features of webpages cause slowdowns on specific OS/hardware combinations.


Detecting slow add-ons

May 6, 2015

When it is at its best, Firefox is fast. Really, really fast. When things start slowing down, though, using Firefox is much less fun. So, one of the main objectives of the developers of Firefox is making sure that Firefox is, and remains, as smooth and responsive as humanly possible. There is, however, one thing that can slow down Firefox and that remains out of the control of the developers: add-ons. Good add-ons are extraordinary, but small coding errors – or sometimes necessary hacks – can quickly drive the performance of Firefox into the ground.

So, how can add-on developers (or add-on reviewers) find out whether their add-on is fast? Sadly, until now, they couldn’t do much. Testing certainly helps, and the Profiler is invaluable for pinpointing a slowdown once it has been noticed, but what about the performance of add-ons in everyday use? What about the experience of users?

To solve this issue, we decided to work on a set of tools to help add-on developers and reviewers measure the performance of their add-ons. Oh, and also to let users find out quickly whether an add-on is slowing down their everyday experience.

about:performance

On recent Nightly builds of Firefox, you may now open about:performance to get an overview of the performance cost of add-ons and webpages:

[Screenshot: about:performance listing the performance cost of add-ons and webpages]

The main resources we monitor are:

  • jank, which measures how much the add-on impacts the responsiveness of Firefox. For 60fps performance, jank should always remain ≤ 4. If an add-on regularly causes jank to increase past 6, you should be worried.
  • CPOW (blocking cross-process communication), which measures how much the add-on causes Firefox to freeze while waiting for a process to respond. Anything above 0 is bad.

Note that the design of this page is far from stable. I realise it’s not very user-friendly at the moment, so don’t hesitate to file bugs to help us improve it. Also note that, when running with e10s, the page doesn’t display all the useful information. We are working on it.

Add-on Telemetry

Add-on developers and reviewers can now find information on the performance of their add-ons on a dedicated dashboard.

These are real-world performance data, extracted from users’ computers. The two histograms available for the time being are:

  • MISBEHAVING_ADDONS_JANK_LEVEL, which measures the jank, as detailed above;
  • MISBEHAVING_ADDONS_CPOW_TIME_MS, which measures the amount of time spent in CPOW, as detailed above.

If you are an add-on developer, you should regularly monitor the performance of your add-on on this page. If you notice suspicious values, you should try to find out what causes these performance issues. Don’t hesitate to reach out to us; we will try to help you.

Slow Add-on Notification

Add-on developers and reviewers, as well as end-users, are now informed when an add-on causes either jank or CPOW performance issues:

[Screenshot: notification informing the user that an add-on may be slowing down Firefox]

Note that this feature is not ready to ride the trains, and we do not have a specific idea of when it will be made available for users of Aurora/DeveloperEdition. This is partly because the UX is not good enough yet, partly because the thresholds will certainly change, and partly because we want to give add-on developers time to fix any issues before users see a dialog that suggests an add-on should be uninstalled.

Performance Stats API

By the way, we have an API for accessing performance stats. Very imaginatively, it’s called PerformanceStats.jsm. While this API will still change during the coming weeks, you can start playing with it if you are interested. Some add-ons may be able to throttle their resource usage based on this data. Also, I hope that, in time, someone will be able to write a version of about:performance much nicer than mine 🙂
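For the curious, here is a minimal sketch of what a consumer of this API could look like. Treat it as an illustration rather than a reference: the names below (getMonitor, promiseSnapshot, componentsData) reflect my reading of the current Nightly implementation and may well change along with the API.

```js
// Minimal sketch of a PerformanceStats.jsm consumer (chrome code).
// Method and field names may change while the API stabilizes.
const { PerformanceStats } =
  Components.utils.import("resource://gre/modules/PerformanceStats.jsm", {});

// Ask for a monitor covering the probes we are interested in.
let monitor = PerformanceStats.getMonitor(["jank", "cpow"]);

// Later, take a snapshot and inspect the per-group data.
monitor.promiseSnapshot().then(snapshot => {
  for (let data of snapshot.componentsData) {
    // Each entry holds the measurements for one group
    // (e.g. an add-on or a webpage).
    console.log(data);
  }
});
```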

Challenges and work ahead

For the moment, we are in the process of stabilizing the API, its implementation and its performance. In parallel, we are working on making the UX of about:performance more useful. Once both are done, we are going to proceed with adding more measurements, making the code more e10s-friendly and measuring the performance of webpages.

If you are an add-on developer and you feel that your add-on is tagged as slow in error, or if you have great ideas on how to make this data useful, feel free to ping me, preferably on IRC. You can find me on irc.mozilla.org, channel #developers, where I am Yoric.

Do you want to learn how to develop Free Software?

November 29, 2014

This year, the Mozilla community is offering in Paris a series of lectures and labs on Free Software development.

On the program:

  • how to join an existing project;
  • how to communicate within a distributed team;
  • how to fund a Free Software project;
  • code quality;
  • code!
  • (and much more).

For more details, and to register, everything is here.

Note: classes start on December 8!

The Future of Promise

November 19, 2014

If you are writing JavaScript in mozilla-central or in an add-on, or if you are writing WebIDL code, by now, you have probably made use of Promise. You may even have noticed that we now have several implementations of Promise in mozilla-central, and that things are moving fast, and sometimes breaking.
At the moment, we have two active implementations of Promise: Promise.jsm and DOM Promise (as well as a little code using an older, long-deprecated implementation of Promise).
This is somewhat confusing, but the good news is that we are working hard at making things simpler and moving everything to DOM Promise.

General Overview

Many components of mozilla-central have been using Promise for several years, way before a standard was adopted, or even discussed. So we had to come up with our implementation(s) of Promise. These implementations were progressively folded into Promise.jsm, which is now used pervasively in mozilla-central and add-ons.
In parallel, Promise was specified, submitted for standardisation, implemented in Firefox, and finally standardised. This second implementation is what we call DOM Promise. It is starting to be used in many places on the web.
Having two implementations of Promise with the same feature set doesn’t make sense. Fortunately, Promise.jsm was designed to match the API of Promise that we believed would be standardised, and was progressively refactored and extended to follow these developments, so both APIs are almost identical.
Our objective is to move entirely to DOM Promise. There are still a few things that need to happen before this is possible, but we are getting close. I hope that we can get there by the end of 2014.

Missing pieces

Debugging and testing

At the moment, Promise.jsm is much better than DOM Promise in two respects:
  • it is easier to inspect a promise from Promise.jsm for debugging purposes (not anymore, things have been moving fast while I was writing this blog entry);
  • Promise.jsm integrates nicely in the test suite, to make sure that uncaught errors are reported and cause test failures.
On both fronts, we are hard at work bringing DOM Promise to feature parity with Promise.jsm, and then some (bug 989960, bug 1083361). Most of the patches are in the pipeline already.

API differences

  • Promise.jsm offers an additional function, Promise.defer, which didn’t make it to standardisation.
This function may easily be written on top of DOM Promise (see the sketch below), so this is not a hard blocker. We are going to add this function to a module `PromiseUtils.jsm`.
  • Also, there is a bug in DOM Promise that gives it slightly unexpected behaviour in a few edge cases. This should not hit developers who use DOM Promise as expected, but it might surprise people who know the exact scheduling algorithm and expect it to be consistent between Promise.jsm and DOM Promise.

Oh, wait, that’s fixed already.
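For reference, here is roughly what such a defer looks like when written on top of DOM Promise. This is a sketch of the idea, not necessarily the exact code that will land in `PromiseUtils.jsm`:

```js
// A sketch of Promise.defer implemented on top of DOM Promise.
function defer() {
  let deferred = {};
  deferred.promise = new Promise((resolve, reject) => {
    // Expose the resolution functions, so that callers can settle
    // the promise from outside the executor.
    deferred.resolve = resolve;
    deferred.reject = reject;
  });
  return deferred;
}

// Usage: hand out `d.promise`, settle it later.
let d = defer();
d.promise.then(value => console.log("resolved with", value));
d.resolve(42);
```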

Wrapping it up

Once we have done all of this, we will be able to replace Promise.jsm with an empty shell that defers all implementations to DOM Promise. Eventually, we will deprecate and remove this module.

As a developer, what should I do?

For the moment, you should keep using Promise.jsm, because of the better testing/debugging support. However, please do not use Promise.defer. Rather, use PromiseUtils.defer, which is strictly equivalent but is not going away.
We will inform everyone once DOM Promise becomes the right choice for everything.
If your code doesn’t use Promise.defer, migrating to DOM Promise should be as simple as removing the line that imports Promise.jsm in your module.
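Concretely, the line to remove is typically the following import:

```js
// Once DOM Promise is the right choice, this import becomes unnecessary:
// the standard, global Promise is picked up instead.
Components.utils.import("resource://gre/modules/Promise.jsm");
```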

What David Did During Q3

September 30, 2014

September is ending, and with it Q3 of 2014. It’s time for a brief report, so here is what happened during the summer.

Session Restore

After ~18 months working on Session Restore, I am progressively switching away from that topic. Most of the main performance issues that we set out to solve have been solved already, we have considerably improved safety, cleaned up lots of the code, and added plenty of measurements.

During this quarter, I have been working on various attempts to optimize both loading speed and saving speed. Unfortunately, both efforts were delayed by external factors and postponed to a yet-undetermined date. I have also been hard at work trying to pin down performance regressions (which turned out to be external to Session Restore) and safety bugs (which were eventually found and fixed by Tim Taubert).

In the next quarter, I plan to work on Session Restore only in a support role, for the purpose of reviewing and mentoring.

Also, a rant: the work on Session Restore has relied heavily on collaboration between the Perf team and the FxTeam. Unfortunately, the resources were not always available to make this collaboration work. I imagine that the FxTeam is spread too thin across too many tasks, with too many fires to fight. Regardless, the symptom I experienced is that during the course of this work, low-priority, high-priority and safety-critical patches alike were left to rot without reviews, despite my repeated requests, for 6, 8 or 10 weeks, much to the dismay of everyone involved. This means man·months of work thrown to /dev/null, along with quarterly objectives, morale, opportunities, contributors and good ideas.

I will try and blog about this, eventually. But please, in the future, everyone: remember that in the long run, the priority of getting reviews done (or explaining that you’re not going to) is quite a bit higher than the priority of writing code.

Async Tooling

Many improvements to Async Tooling landed during Q3. We now have the PromiseWorker, which considerably simplifies interacting between the main thread and workers, for both Firefox and add-on developers. I hear that the first add-on to make use of this new feature is currently being developed. New features, bugfixes and optimizations landed for OS.File. We have also landed the ability to watch for changes in a directory (under Windows only, for the time being).
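To give an idea of what this looks like, here is a minimal sketch of the main-thread side of a PromiseWorker. The worker script URL and the `add` function it exposes are made up for the example:

```js
// Minimal sketch of using PromiseWorker from the main thread.
const { BasePromiseWorker } =
  Components.utils.import("resource://gre/modules/PromiseWorker.jsm", {});

// Wrap a worker script (hypothetical URL) so that calls return promises.
let worker = new BasePromiseWorker("resource://my-addon/my_worker.js");

// `post` sends a request naming a function exposed by the worker
// and returns a promise of its result.
worker.post("add", [1, 2]).then(sum => {
  console.log("worker computed", sum);
});
```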

Sadly, my work on interactions between Promise and the Test Suite is currently blocked until the DevTools team manages to get all the uncaught asynchronous errors under control. It’s hard work, and I can understand that it is not a high priority for them, so in Q4, I will try to find a way to land my work and activate it only for a subset of the mochitest suites.

Places

I have recently joined the newly restarted effort to improve the performance of Places, the subsystem that handles our bookmarks, history, etc. For the moment, I am still getting warmed up, but I expect that most of my work during Q4 will be related to Places.

Shutdown

Most of my effort during Q3 was spent improving the shutdown of Firefox. While we already had support for asynchronously shutting down JavaScript services/consumers, we now also have support for native services and consumers. Also, I am in the process of landing Telemetry that will let us find out the duration of the various stages of shutdown, information that we could not access until now.
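For JavaScript consumers, this support takes the form of AsyncShutdown blockers: a consumer registers a blocker on a shutdown phase, and the phase does not complete until the promise returned by the blocker has resolved. A minimal sketch, with a hypothetical service as the consumer:

```js
// Minimal sketch of an AsyncShutdown blocker (chrome code).
const { AsyncShutdown } =
  Components.utils.import("resource://gre/modules/AsyncShutdown.jsm", {});

// A hypothetical service with asynchronous state to flush.
let myService = {
  flushPendingWrites: () => Promise.resolve(/* write to disk... */)
};

// Block the profile-before-change phase until the flush is complete.
AsyncShutdown.profileBeforeChange.addBlocker(
  "MyService: flushing pending writes before shutdown",
  () => myService.flushPendingWrites()
);
```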

As it turns out, we had many crashes during asynchronous shutdown, a few of them safety-critical. At the time, we did not have the necessary tools to prioritize our efforts or to find out whether our patches had effectively fixed bugs, so I built a dashboard to extract and display the relevant information on such crashes. This proved a wise investment, as we spent plenty of time fighting AsyncShutdown-related fires using this dashboard.

In addition to the “clean shutdown” mechanism provided by AsyncShutdown, we also now have the Shutdown Terminator. This is a watchdog subsystem, launched during shutdown, and it ensures that, no matter what, Firefox always eventually shuts down. I am waiting for data from our Crash Scene Investigators to tell us how often we need this watchdog in practice.

Community

I lost track of how many code contributors I interacted with during the quarter, but that represents hundreds of e-mails, as well as countless hours on IRC and Bugzilla, and a few hours on ask.mozilla.org. This year’s mozEdu teaching is also looking good.

We also launched Firefox OS in France, with great success. I found myself in a supermarket, presenting the ZTE Open C and the activities of Mozilla to the crowds, and it was a pleasant experience.

For Q4, expect more mozEdu, more mentoring, and more sleepless hours helping contributors debug their patches 🙂
