Season 1 Episode 2 – The Fight for File I/O

April 2, 2014

Plot Our heroes set out for the first battle. Session Restore’s file I/O was clearly inefficient. Not only was it performing redundant operations, it was also blocking the main thread while doing so. The time had come to take it back. Little did our heroes know that the forces of Regression were lurking and that the real battle would be fought long after the I/O had been rewritten and made non-blocking.

For historical reasons, some of Session Restore’s file I/O was quite inefficient. Reading and backing up were performed purely on the main thread, which could cause multi-second pauses in extreme cases and 100ms+ pauses in common cases. Writing was done mostly off the main thread, but the underlying library caused accidental main thread I/O, with the same effect, as well as disk flushing. Disk flushing is extremely inefficient on most operating systems and can quickly bring the whole system to its knees, so it needs to be avoided.

Fortunately, OS.File, the (then) new JavaScript library designed to provide off-main-thread I/O, had just become available. Turning Session Restore’s I/O into OS.File-based off-main-thread I/O was surprisingly simple, and even contributed to making the relevant fragments of the code more readable.
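
For illustration, here is a minimal sketch of the resulting style, using the documented OS.File and Task.js idioms (the path and surrounding logic are simplified; this is not the actual Session Restore code):

    // Read and rewrite sessionstore.js entirely off the main thread.
    Components.utils.import("resource://gre/modules/osfile.jsm");
    Components.utils.import("resource://gre/modules/Task.jsm");

    let path = OS.Path.join(OS.Constants.Path.profileDir, "sessionstore.js");

    Task.spawn(function* () {
      // The read happens on a worker; only the result crosses threads.
      let bytes = yield OS.File.read(path);
      let state = JSON.parse(new TextDecoder().decode(bytes));

      // ... update |state| ...

      // The write is atomic and also performed off the main thread.
      let data = new TextEncoder().encode(JSON.stringify(state));
      yield OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp" });
    }).then(null, Components.utils.reportError);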

In addition to performing main thread I/O and flushing, Session Restore’s I/O had several other immediate weaknesses. One was its crash detection mechanism, which required Session Restore to rewrite sessionstore.js immediately after startup, just to store a boolean indicating that we had not crashed. Recall that the largest sessionstore.js known to this date weighs 150+Mb, and that 1Mb+ instances represented ~5% of our users. Rewriting all this data (and blocking startup while doing so) for a simple boolean flag was clearly unacceptable. We fixed this issue by separating the crash detection mechanism into its own module and ensuring that it only needed to write a few bytes. Another weakness was the backup code, which performed a full (and inefficient) copy during startup, whereas a simple renaming would have been sufficient.
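
As a hedged sketch of both fixes (the flag file name and the function names are illustrative, not the actual module):

    // Record "we started cleanly" by writing a few bytes instead of
    // rewriting 150+Mb of session data. The file name is illustrative.
    const FLAG_PATH = OS.Path.join(OS.Constants.Path.profileDir,
                                   "sessionCheckpoints.json");

    function recordCleanStartup() {
      let data = new TextEncoder().encode(JSON.stringify({ started: true }));
      return OS.File.writeAtomic(FLAG_PATH, data,
                                 { tmpPath: FLAG_PATH + ".tmp" });
    }

    // Back up by renaming: a constant-time metadata operation,
    // instead of a full copy of the file.
    function backupSession(path) {
      return OS.File.move(path, path + ".bak");
    }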

Having fixed all of this, we were happy. We were wrong.

Speed improvements?

Sadly, Telemetry archives do not reach back far enough to let me provide data confirming any speed improvement. Note for future perf developers, including future self: back up your data or blog about it immediately, before The Cloud eats it.

As for measuring the effects of a flush, we do not have a good way to do this at the moment, as the main impact is not on the process itself but on the whole system. The best we can do is measure the total number of flushes, but that does not really help.

Full speed… backwards?

The first indication that something was wrong was a large increase in the Telemetry measure SESSIONRESTORED, which measures the total amount of time between the launch of the browser and the moment Session Restore has completed initialization. After a short period of bafflement, we concluded that this increase was normal and due to a change of initialization order – indeed, since OS.File I/O was executed off the main thread, the results of reading the sessionstore.js file could only be received once the main thread was idle and could receive messages from other threads. While this interpretation was partly correct, it masked a very real problem that we only detected much later. Additionally, during our refactorings, we changed the instant at which Session Restore initialization was executed, which muddied the waters even further.

The second indication arrived much later, when the Metrics team extracted Firefox Health Report data from released versions and got in touch with the Performance team to inform us of a large regression in firstPaint-to-sessionRestored time. For most of our users, Firefox was now taking more than 500ms longer to load, which was very bad.

After some time spent understanding the data, attempting to reproduce the measure and bisecting to find out at which changeset the regression had taken place, as well as instrumenting code with additional performance probes, we finally concluded that the problem was due to our use of an I/O thread, the “SessionWorker”. More precisely, this thread was very slow to launch during startup. Digging deeper, we concluded that the problem was not in the code of the SessionWorker itself; rather, the loading of the underlying thread was simply too slow. Specifically, loading was fine on a first run, but on a second run, disk I/O contention between the resources required by the worker (the cache for the source code of the SessionWorker and its dependencies) and the resources required by the rest of the browser (other source code, but also icons, translation files, etc.) slowed things down considerably. Replacing the SessionWorker with a raw use of OS.File would not have improved the situation – ironically, just like the SessionWorker, our fast I/O library was loading slowly because of slow file I/O. Further measurements indicated that this slow loading could take up to 6 seconds in extreme cases, with an average of 340ms.

Once the problem had been identified, we could easily develop a stopgap fix to recover most of the regression. We kept OS.File-based writing, as it was not at fault, but we fell back to NetUtil-based loading, which did not require a JavaScript worker. According to Firefox Health Report, this returned us to a level close to what we had prior to our changes, although we are still 50-100ms worse. We are still attempting to find out what causes this difference and whether it was indeed caused by our work.
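
For the curious, a simplified sketch of what a worker-free, NetUtil-based load looks like (error handling and the call sites are elided):

    Components.utils.import("resource://gre/modules/NetUtil.jsm");

    function readSessionFile(file, callback) {
      NetUtil.asyncFetch(file, function (inputStream, status) {
        if (!Components.isSuccessCode(status)) {
          callback(null);
          return;
        }
        // Note: decoding happens on the main thread in this fallback.
        let text = NetUtil.readInputStreamToString(
          inputStream, inputStream.available(), { charset: "UTF-8" });
        callback(text);
      });
    }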

With this stopgap fix in place, we set out to provide a longer-term fix, in the form of a reimplementation of OS.File.read(), the critical function used during startup, that did not need to boot a JavaScript worker to proceed. This second implementation was written in C++ and brought a number of additional side improvements, such as the ability to decode strings off the main thread and transmit them to the main thread at no cost.
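
From the caller’s point of view, the change boils down to an option to OS.File.read(): the file is now read and decoded off the main thread, and the result arrives as a ready-to-use string. A minimal sketch:

    Task.spawn(function* () {
      let text = yield OS.File.read(path, { encoding: "utf-8" });
      let state = JSON.parse(text);
      // ...
    });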

The patch using the new version of OS.File.read() landed a few days ago. We are still in the process of trying to make sense of the Telemetry numbers. While Telemetry indicates that the total time to read and decode the file has considerably increased, the total time between the start of the read and the time we finish startup seems to have decreased nicely, by 0.5 seconds (75th percentile) to 4 seconds (95th percentile). We suspect that we are confronted with yet another case in which concurrency makes performance measurement more difficult.

Shutdown duration?

We have not yet attempted to measure the duration of shutdown-time I/O.

Losing data or privacy

By definition, since we write data asynchronously, we do not wait until the write is complete before proceeding. In most cases, this is not a problem. However, process shutdown may interrupt the write during its execution. While the APIs we use to write the data ensure that shutdown will never cause a file to be partially written, it may cause us to lose the final write, i.e. 15 seconds of browsing, working, etc. To make things slightly worse, the final write of Session Restore is special, insofar as it removes some information that is considered somewhat privacy-sensitive and that is required for crash recovery but not for a clean restart. The risk already existed before our refactoring, but our work increased it, as the new I/O model was based on JavaScript workers, which are shut down earlier than the mechanism previously used, and without ensuring that their work is complete.

While we received no reports of bugs caused by this risk, we solved the issue by plugging Session Restore’s shutdown into AsyncShutdown.
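The idiom is roughly the following (the blocker message and the |lastWritePromise| variable are illustrative):

    Components.utils.import("resource://gre/modules/AsyncShutdown.jsm");

    // Block profile-before-change shutdown until the pending write has
    // completed. |lastWritePromise| is illustrative: it would resolve
    // once the latest write to sessionstore.js is done.
    AsyncShutdown.profileBeforeChange.addBlocker(
      "SessionFile: Finish writing sessionstore.js",
      () => lastWritePromise
    );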

Changing the back-end

One of our initial intuitions when starting with this work was that the back-end format used to store session data (large JSON file) was inefficient and needed to be changed. Before doing so, however, we instrumented the relevant code carefully. As it turns out, we could indeed gain some performance by improving the back-end format, but this would be a relatively small win in comparison with everything else that we have done.

We have several possible designs for a new back-end, but we have decided not to proceed for the time being, as there are still larger gains to be obtained with simpler changes. More on this in future blog entries.

Epilogue

Before setting out on this quest, we were already aware that performance refactorings were often more complex than they appeared. Our various misadventures have confirmed it. I strongly believe that, by changing I/O, we have improved the performance of Session Restore in many ways. Unfortunately, I cannot prove that we have improved runtime (because old data has disappeared), and we are still not certain that we have not regressed start-up.

If there are lessons to be learned, they are the following:

  • there is no performance work without performance measurements;
  • once your code is sophisticated enough, measuring and understanding the results is much harder than improving performance.

On the upside, all this work has succeeded at:

  • improving our performance measurements of many points of Session Restore;
  • finding out weaknesses of ChromeWorkers and fixing some of these;
  • finding out weaknesses of OS.File and fixing some of these;
  • fixing Session Restore’s backup code, which consumed resources without doing anything very useful;
  • avoiding unnecessary performance refactorings where they would not have helped.

The work on improving Session Restore file I/O is still ongoing. For one thing, we are still waiting for confirmation that our latest round of optimizations does not cause unwanted regressions. Also, we are currently working on Talos benchmarks and Telemetry measurements to let us catch such regressions earlier.

This work has also spawned projects for other teams on improving the performance of ChromeWorkers’ startup and communication speed.

In the next episode

Drama. Explosions. Asynchronicity. Electrolysis. And more.

The Battle of Session Restore – Pilot

March 26, 2014

Plot Our heroes received their assignment. They had to go deep into the Perflines, in the long lost territory of Session Restore, and do whatever it took to get Session Restore back into Perfland. However, they quickly realized that they had been sent on a mission without return – and without a map. This is their tale.

Session Restore is a critical component of Firefox. This component records the current state of your browser to ensure that you can always resume browsing without losing that state, even if Firefox crashes, if your computer loses power, or if your browser is being upgraded. Unfortunately, we have had many reports of Session Restore slowing down Firefox. In February 2013, a two-person Perf/Fx-team task force started working on the performance of Session Restore. This task force eventually grew to four people from Perf, Fx-team and e10s, along with half a dozen occasional contributors.

To this day, the effort has lasted 13 months. In this series of blog entries, I intend to present our work, our results and, more importantly, the lessons we have learnt along the way, sometimes painfully.

Fixing yes, but fixing what?

We had reports of Session Restore blocking Firefox for several seconds every 15 seconds, which made Firefox essentially useless.

The job of Session Restore is to record everything possible of the state of the current browsing session. This means the list of windows, the list of tabs, the current address of each tab, but also the history of each tab, scroll positions, anchors, DOM SessionStorage, session cookies, etc. Oh, and this goes recursively for both nested frames and history. All of this is saved to a JSON-formatted file called sessionstore.js, every 15 seconds of user activity. To this day, the largest reported sessionstore.js file is 150Mb, but Telemetry indicates that 95% of users used to have a file of less than 1Mb (numbers are lower these days, after we spent time eliminating unnecessary data from sessionstore.js).
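
To give an idea, the data has roughly the following shape (field names abridged and illustrative; the real format contains many more fields):

    {
      "windows": [{
        "tabs": [{
          "entries": [{
            "url": "https://example.org/",
            "title": "Example",
            "scroll": "0,0",
            "children": [ /* nested frames, recursively */ ]
          } /* , older history entries … */ ],
          "index": 1,
          "storage": { /* DOM SessionStorage, per origin */ }
        }],
        "selected": 1
      }],
      "session": { "lastUpdate": 1395888000000 }
    }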

We started the effort to fix Session Restore from only a few bug reports:

  • sometimes, users lost sessionstore.js data;
  • sometimes, data collection took ages.

Unfortunately, we had no data on:

  • the size of the file;
  • the actual duration of data collection;
  • how long it took to write data to the disk.

To complicate things further, Session Restore had been left without an owner for several years. Irregular patching to support new features of the web and new configurations had progressively turned the code and data structures into a mess that nobody fully understood.

We had, however, a few hints:

  • Session Restore needs to collect lots of data;
  • Session Restore had been designed a long time ago, for users with few tabs, and when web pages stored very little information;
  • serializing and writing to JSON is inefficient;
  • in bad cases, saving could take several seconds;
  • the collection of data was purely monolithic;
  • reading and writing data was done entirely on the main thread, which was a very bad thing to do;
  • the client API caused full recollections at each request;
  • the data structure used by Session Restore had progressively become an undocumented mess.

There were a number of obvious sources of inefficiency that we could fix without further data, and we set out to fix them immediately. In a few cases, however, we found out the hard way that optimizing without hard data is a time-consuming and useless exercise. Consequently, a considerable part of our work has been to use Telemetry to determine where we could best apply our optimization effort, and to confirm that this effort yielded results. In many cases, this meant adding coarse-grained probes, then progressively completing them with finer-grained probes, in parallel with actually writing optimizations.
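
The probe pattern itself is simple. A sketch (the histogram name and the measured function are illustrative; real probes are declared in Histograms.json):

    Components.utils.import("resource://gre/modules/TelemetryStopwatch.jsm");

    TelemetryStopwatch.start("FX_SESSION_RESTORE_COLLECT_DATA_MS");
    collectSessionData();   // hypothetical name for the operation we measure
    TelemetryStopwatch.finish("FX_SESSION_RESTORE_COLLECT_DATA_MS");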

To be continued…

In the next episode, our heroes will fight Main Thread File I/O… and the consequences of removing it.

Is my data on the disk? Safety properties of OS.File.writeAtomic

February 5, 2014

If you have been writing front-end or add-on code recently, chances are that you have been using the OS.File library and, in particular, OS.File.writeAtomic, to write files. (Note: if you have been writing files without using OS.File.writeAtomic, chances are that you are doing something wrong that will cause Firefox to jank – please don’t.) As the name implies, OS.File.writeAtomic makes efforts to write your data atomically, so as to ensure its survivability in case of a crash, power loss, etc.

However, you should not trust this function blindly, because it has its limitations. Let us take a look at exactly what guarantees writeAtomic provides.

Algorithm: just write

Snippet OS.File.writeAtomic(path, data)

What it does

  1. reduce the size of the file at path to 0;
  2. send data to the operating system kernel for writing;
  3. close the file.

Worst case scenarios

  1. if the process crashes between 1. and 2. (a few microseconds), the full content of path may be lost;
  2. if the operating system crashes or the computer loses power suddenly before the kernel flushes its buffers (which may happen at any point up to 30 seconds after 1.), the full content of path may be lost;
  3. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing (which may happen at any point after 1., typically up to 30 seconds), and if your data is larger than one sector (typically 32kb), data may be written incompletely, resulting in a corrupted file at path.

Performance very good.

Algorithm: write and rename

Snippet OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp" })

What it does

  1. create a new file at tmpPath;
  2. send data to the operating system kernel for writing to tmpPath;
  3. close the file;
  4. rename tmpPath on top of path.

Worst case scenarios

  1. if the process crashes at any moment, nothing is lost, but a file tmpPath may be left on the disk;
  2. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing metadata (which may happen at any point after 1., typically up to 30 seconds), the full content of path may be lost;
  3. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing (which may happen at any point after 1., typically up to 30 seconds), and if your data is larger than one sector (typically 32kb), data may be written incompletely, resulting in a corrupted file at path.

Performance almost as good as Just Write.

Side-note On the ext4fs file system, the kernel automatically adds a flush, which transparently transforms the safety properties of this operation into those of the algorithm detailed next.

Native equivalent In XPCOM/C++, the mostly-equivalent solution is the atomic-file-output-stream.

Algorithm: write, flush and rename

Use OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp", flush: true })

What it does

  1. create a new file at tmpPath;
  2. send data to the operating system kernel for writing to tmpPath;
  3. close the file;
  4. flush the writing of data to tmpPath;
  5. rename tmpPath on top of path.

Worst case scenarios

  1. if the process crashes at any moment, nothing is lost, but a file tmpPath may be left on the disk;
  2. if the operating system crashes, nothing is lost, but a file tmpPath may be left on the disk;
  3. if the computer loses power suddenly while the hard drive is flushing its internal hardware buffers (which is very hard to predict), nothing is lost, but an incomplete file tmpPath may be left on the disk.

Performance some operating systems (Windows) or file systems (ext3fs) cannot flush a single file; instead, they need to flush all the files on the device, which considerably slows down the whole operating system. On some others (ext4fs), this operation is essentially free. On some versions of MacOS X, flushing actually doesn’t do anything.

Native equivalent In XPCOM/C++, the mostly-equivalent solution is the safe-file-output-stream.

Algorithm: write, backup, rename

(not landed yet)

Snippet OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp", backupTo: path + ".backup"})

What it does

  1. create a new file at tmpPath;
  2. send data to the operating system kernel for writing to tmpPath;
  3. close the file;
  4. rename the file at path to backupTo;
  5. rename the file at tmpPath on top of path.

Worst case scenarios

  1. if the process crashes between 4. and 5., the file at path may be lost, and backupTo should be used instead for recovery;
  2. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing metadata (which may happen at any point after 1., typically up to 30 seconds), the file at path may be empty, and backupTo should be used instead for recovery;
  3. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing (which may happen at any point after 1., typically up to 30 seconds), and if your data is larger than one sector (typically 32kb), data may be written incompletely, resulting in a corrupted file at path; backupTo should be used instead for recovery.

Performance almost as good as Write and Rename.
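
A consumer of this option would pair it with recovery logic along these lines (a hedged sketch; the function is hypothetical):

    // Read |path|, falling back to the backup copy if the main file is
    // missing, e.g. because we crashed between steps 4. and 5. above.
    function readWithFallback(path, backupPath) {
      return Task.spawn(function* () {
        try {
          return yield OS.File.read(path);
        } catch (ex) {
          if (ex instanceof OS.File.Error && ex.becauseNoSuchFile) {
            return yield OS.File.read(backupPath);
          }
          throw ex;
        }
      });
    }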

Put some Mozilla in your studies!

October 31, 2012

Are you a student?

Do you want to defend the good cause, build the future of the web, and earn credits toward your degree along the way?

Then the Mozilla Education Student Projects are for you. Each project is an opportunity to move the web and the world in the right direction. Each project is followed by one (or more) mentor from the Mozilla community. For the moment, you will find development projects and research projects. Do not hesitate to suggest new projects, either!

We are waiting for you!

Are you a teacher?

This list is also for you. Pick projects for your students, send your students to pick their own, or suggest new ideas to us!

This list is designed to help your students find projects and advisors on topics that are not necessarily represented in your teaching or research department. Conversely, do not hesitate to propose ideas, or even projects that you would like to supervise yourself.

Are you an open-source developer or community?

We intend to progressively extend this initiative to many non-Mozilla projects related to the web. We will keep you posted – or do not hesitate to contact us!

Fighting the good fight for fun and credits, with Mozilla Education

October 30, 2012

Are you a student?

Do you want to fight the good fight, for the Future of the Web, and earn credits along the way?

Mozilla Education maintains a tracker of student project topics. Each project is followed by one (or more) mentor from the Mozilla Community.

Then what are you waiting for? Come and pick or join a project or get in touch to suggest new ideas!

Are you an educator?

The tracker is also open to you. Do not hesitate to pick projects for your students, send students to us or contact us with project ideas.

We offer/accept Development-oriented projects, Research-oriented projects, and not-CS-oriented-at-all projects.

Are you an open-source developer/community?

If things work out smoothly, we intend to progressively open this tracker to other (non-Mozilla) projects related to the future of the web. Stay tuned – or contact us!

Beautiful Off Main Thread File I/O

October 18, 2012

Now that the main work on Off Main Thread File I/O for Firefox is complete, I have finally found some time to test-drive the combination of Task.js and OS.File. Let me tell you one thing: it rocks!
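
To give a taste of the combination, here is a minimal sketch (|somePath| is illustrative): asynchronous file manipulation that reads like sequential code.

    Components.utils.import("resource://gre/modules/Task.jsm");
    Components.utils.import("resource://gre/modules/osfile.jsm");

    Task.spawn(function* () {
      // Each |yield| suspends until the off-main-thread operation completes.
      let info = yield OS.File.stat(somePath);
      if (info.size > 0) {
        let contents = yield OS.File.read(somePath);
        yield OS.File.writeAtomic(somePath + ".bak", contents,
                                  { tmpPath: somePath + ".bak.tmp" });
      }
    }).then(null, Components.utils.reportError);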


Call For Classrooms

January 17, 2012

(and Researchers, Professors, Teachers, Students …)

Mozilla is working with numerous educators, professors and researchers across the world, both to bring open-source, the open web and web technologies into the classroom, and to bring the contributions of students and their mentors to the world. You can be a part of this, and your field does not have to be Computer Science.


OS.File, step-by-step: The Schedule API

December 13, 2011

Summary

One of the key components of OS.File is the Schedule API, a tiny yet powerful JavaScript core designed to considerably simplify the development of asynchronous modules. In this post, we introduce the Schedule API.

Introduction

In a previous post, I introduced OS.File, a Mozilla Platform library designed to make the life of developers easier and to help them produce high-performance, high-responsiveness file management routines.

In this post, I would like to concentrate on one of the core items of OS.File: the Schedule API. Note that the Schedule API is not limited to OS.File and is designed to be useful for all sorts of other modules.


Internships at Mozilla Paris… or elsewhere

November 19, 2011

edit We are full until June. We can no longer take interns in Paris whose internships would start before June.

As every year, Mozilla is offering internships in computer science, oriented toward Development, R&D or Research. Depending on the topic, the internship may take you to Paris, the United States, Canada, China…

About Mozilla

The Mozilla Foundation is a non-profit organization, founded to promote an open, innovative and participatory Internet. You have probably heard of Mozilla Firefox, the open-source browser that reintroduced open standards and security to the web, or of Mozilla Thunderbird, the cross-platform, open-source and extensible email client. Mozilla’s activities do not stop at these two products; they extend to numerous projects for the present and the future, such as:

  • Boot-to-Gecko, a completely open operating system, built by the community, for mobile phones, tablets and other communicating devices;
  • SpiderMonkey, a family of virtual machines designed for the static and dynamic analysis, compilation and execution of web languages, JavaScript in particular;
  • DeHydra and JSHydra, static analysis tools for the C++ and JavaScript languages;
  • Rust, a new programming language designed for the development of safe, parallel system applications;
  • WebAPI, a set of tools that extend the capabilities of web applications beyond those of traditional applications, with security and privacy on top;
  • Gecko, the extensible, portable rendering engine for HTML, XML and graphical interfaces, which made Firefox, Thunderbird and many other applications possible;
  • BrowserID, an innovative technique that provides users and developers with the cryptographic tools to handle identification on the web without compromising privacy, simplicity or security;
  • Mozilla Services’ Cloud-based identity-management features;
  • and more…

About you

Mozilla offers several internships in its facilities around the world, on numerous topics.

Your profile:

  • you want to make the web a better place, where everyone can browse and contribute in full confidence, without having to fear for their security or their privacy;
  • you want to take part in a project used by more than 33% of the web’s population;
  • you want your work to be useful to everyone and visible to everyone;
  • you have strong skills in algorithmics and computer science;
  • you have strong skills in at least one of the following areas:
    • operating systems;
    • networks;
    • computational geometry;
    • compilation;
    • cryptography;
    • static analysis;
    • programming languages;
    • extracting relevant information from exotic data;
    • distributed algorithms;
    • the web as a platform;
    • interactions with open-source communities;
    • any other skill that, in your opinion, could be useful to us.
  • for some topics, an excellent level of English may be essential;
  • internships are generally intended for students at the M1 or M2 level (fourth or fifth year of higher education), but if you manage to impress us with your achievements or your knowledge, the degree is not essential.

If you recognize yourself in this profile, we invite you to contact us. Depending on the topic, internships may take you to Paris, Mountain View, San Francisco, Toronto, Taipei, or other places around the world.

The best interns may look forward to a freelance contract, a permanent position and/or a PhD grant.

Contacting us

For any question, contact:

  • for everything concerning internships at Mozilla, Julie Deroche (at mozilla.com, jderoche) – Mozilla Mountain View, College Recruiting;
  • for internships in Paris, David Rajchenbach-Teller (at mozilla.com, dteller) – Mozilla Paris, Developer / Researcher.

JavaScript, this static language (part 1)

October 20, 2011

tl;dr

JavaScript is a dynamic language. However, by borrowing a few pages from static languages – and a few existing tools – we can considerably improve reliability and maintainability.

« Writing one million lines of code of JavaScript is simply impossible »

(source: speaker in a recent open-source conference)

JavaScript is a dynamic language – a very dynamic one, in which programs can rewrite themselves, objects may lose or gain methods through side-effects on themselves or on their prototypes, and, more generally, nothing is fixed.

And dynamic languages are fun. They make writing code simple and fast. They are vastly more suited to prototyping than static languages. Dynamism also makes it possible to write extremely powerful tools that can perform JIT translation from other syntaxes, add missing features to existing classes and functions and more generally fully customize the experience of the developer.

Unfortunately, such dynamism comes with severe drawbacks. Safety-minded developers will tell you that, because of this dynamism, they simply cannot trust any snippet, as this snippet may behave in a manner that does not match its source code. They will conclude that you cannot write safe, or even modular, applications in JavaScript.

Many engineering-minded developers will also tell you that they simply cannot work in JavaScript, and they will not have much difficulty finding examples of situations in which the use of a dynamic language in a complex project can, effectively, kill the project. If you do not believe them, consider a large codebase, and the (rather common) case of a large transversal refactoring, for instance to replace an obsolete API with a newer one. Do this in Java (or, even better, in a more modern mostly-static language such as OCaml, Haskell, F# or Scala), and you can use the compiler to automatically and immediately spot any place where the API has not been updated, along with a number of errors that you may have made during the refactoring. Even better, if the API was designed to be safe-by-design, the compiler will automatically spot even complex errors that you may have made during the refactoring, including calling functions/methods in the wrong order, or ownership errors. Do the same in JavaScript and, while your code will be written faster, you should expect to be hunting bugs weeks or even months later.

I know that the Python community has suffered considerably from such problems during version transitions. I am less familiar with the world of PHP, but I believe it is no accident that Facebook is progressively arming itself with PHP static analysis tools. I also believe it is no accident that Google is now introducing a typed language as a candidate replacement for JavaScript.

Today it is JavaScript’s turn – or if not today, surely tomorrow. I have seen applications consisting of hundreds of thousands of lines of JavaScript. And if just maintaining these applications is not difficult enough, the rapid release cycles of both Mozilla and Chrome mean that external and internal APIs are now changing every six weeks. This means breakage. And, more precisely, it means that we need new tools to help us predict breakages and help developers (both add-on developers and browser contributors) react before these breakages hit their users.

So let’s do something about it. Let’s make our JavaScript a strongly, statically typed language!

Or let’s do something a little smarter.

JavaScript, with discipline

At this point, I would like to ask readers to please kindly stop preparing tar and feathers for me. I fully realize that JavaScript is a dynamic language and that turning it into a static language will certainly result in something quite disagreeable to use. Something that is verbose, has lost most of the power of JavaScript, and gained no safety guarantees.

Trust me on this, there is a way to obtain the best of both worlds, without sacrificing anything. Before discussing the manner in which we can attain this, let us first set objectives that we can hope to achieve with a type-disciplined JavaScript.

Finding errors

The main benefit of strong, static typing, is that it helps find errors.

  • Even the simplest analyses can find all syntax errors, all unbound variables, all variables bound several times and consequently almost all scoping errors, which can already save considerable time for developers. Such an analysis requires no human intervention from the developer besides, of course, fixing any error that has been thus detected. As a bonus, in most cases, the analysis can suggest fixes.
  • Similarly trivial forms of analysis can also detect suspicious calls to break or continue, weird uses of switch(), suspicious calls to private fields of objects, as well as suspicious occurrences of eval – in my book, eval is always suspicious.
  • Slightly more sophisticated analyses can find most occurrences of functions or methods invoked with the wrong number of arguments. Again, this is without human intervention. With type annotations/documentation, we can move from most occurrences to all occurrences (see the sketch after this list).
  • This same analysis, when applied to public APIs, can provide developers with more information regarding how their code can be (mis)used.
  • At the same level of complexity, analysis can find most erroneous access to fields/methods, suspicious array traversals, suspicious calls to iterators/generators, etc. Again, with type annotations/documentation, we can move from most to all.
  • Going a little further in complexity, analysis can find fragile uses of this, uncaught exceptions, etc.
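
To make the flavor of these analyses concrete, here is an illustrative snippet with the kinds of mistakes that even simple, annotation-free checking catches:

    // Both mistakes below are detectable statically, without annotations.
    function frobnicate(a, b) {
      return a + b;
    }

    frobnicate(1, 2, 3);     // wrong number of arguments

    function report() {
      return resuts;         // unbound variable: almost certainly a typo
    }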

Types as documentation

Public APIs must be documented. This is true in any language, no matter where it stands on the static/dynamic scale. In static languages, one may observe how documentation generation tools insert type information, either from annotations provided by the user (as in Java/JavaDoc) or from type information inferred by the compiler (as in OCaml/OCamlDoc). But look at the documentation of Python, Erlang or JavaScript libraries and you will find the exact same information, either clearly labelled or hidden somewhere in the prose: every single value/function/method comes with a form of type signature, whether formal or informal.

In other words, type information is a critical piece of documentation. If JavaScript developers provide explicit type annotations along with their public APIs, they have simply advanced the documentation, not wasted time. Even better, if such types can be automatically inferred from the source code, this piece of documentation can be automatically written by the type-checker.
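
For instance, with the annotation vocabulary understood by Google’s Closure Compiler (introduced below), such type-as-documentation looks like the following illustrative function:

    /**
     * Collect the state of a tab.
     * @param {Element} tab  The tab whose state we wish to collect.
     * @param {boolean=} includePrivate  Whether to include private data.
     * @return {!Object} A JSON-serializable description of the tab.
     */
    function collectTabState(tab, includePrivate) {
      // ...
    }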

Types as QA metric

While disciples of type-checking tend to consider typing as something boolean, the truth is more subtle: it is quite possible for one piece of code to fail type-checking while the rest of the code passes. Indeed, with advanced type systems that do not support decidable type inference, this is only to be expected.

The direct consequence is that type-checking can be seen as a spectrum of quality. Code can be considered as failing if the static checking phase detects evident errors, typically unbound values or out-of-scope break, continue, etc. Otherwise, every attempt to type a value that results in a type error is a hint of poor QA practice that can be reported to the developer. This yields a percentage of values that can be typed – reach 100% and get a QA stamp of approval for this specific metric.

Typed JavaScript, in practice

Most of what the previous paragraphs describe is already possible in practice, with existing tools. Indeed, I have personally used JavaScript static type checking as a bug-finding tool and a QA metric. On the first day, this technique helped me find plenty of dead code as well as 750+ errors, with only a dozen false positives.

For this purpose, I have used Google’s Closure Compiler. This tool detects errors, supports a simple vocabulary for documentation/annotations, fails only if very clear errors are detected (typically syntax errors) and provides, as a metric, the percentage of well-typed code. It does not accept JavaScript 1.7 yet, unfortunately, but this can certainly be added.

I also know of existing academic work on static type-checking for JavaScript, although I am unsure as to the maturity of this work.

Finally, Mozilla is currently working on a different type inference mechanism for JavaScript. While this mechanism is not primarily aimed at finding errors, my personal intuition is that it may be possible to repurpose it.

What’s next?

I hope that I have convinced you of the interest of investigating manners of introducing static, strong type-checking to JavaScript. In a second part, I will detail how and where I believe that this can be done in Mozilla.
