The Battle of Session Restore – Season 1 Episode 3 – All With Measure

July 17, 2014 § 2 Comments

Plot For the second time, our heroes prepared for battle. The startup of Firefox was too slow, and Session Restore was one of the battlefields.

When Firefox starts, Session Restore is in charge of restoring the browser to its previous state, in case of a crash, a restart, or for the users who have configured Firefox to resume from its previous state. This entails numerous activities during startup:

  1. read sessionstore.js from disk, decode it and parse it (recall that the file is potentially several MB large), handling errors (a sketch of this step appears below);
  2. back up sessionstore.js in case of startup crash;
  3. create windows, tabs, frames;
  4. populate history, scroll positions, forms, session cookies, session storage, etc.
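The first step, simplified to its essence, could look like the following sketch, written with OS.File and Task.jsm. The path is the real location of the file; the error handling is simplified and hypothetical:

    // A simplified sketch of step 1: read, decode and parse sessionstore.js.
    // Error handling is reduced to a single catch-all; the real implementation
    // also deals with backup files and records telemetry.
    Components.utils.import("resource://gre/modules/osfile.jsm", this);
    Components.utils.import("resource://gre/modules/Task.jsm", this);

    let readSession = Task.async(function*() {
      let path = OS.Path.join(OS.Constants.Path.profileDir, "sessionstore.js");
      try {
        let bytes = yield OS.File.read(path);        // raw bytes, read asynchronously
        let text = new TextDecoder().decode(bytes);  // decode UTF-8
        return JSON.parse(text);                     // parse (potentially several MB)
      } catch (ex) {
        return null; // missing or corrupted file: start with an empty session
      }
    });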

It is common wisdom that Session Restore must have a large impact on Firefox startup. But before we could minimize this impact, we needed to measure it.

Benchmarking is not easy

When we first set foot on Session Restore territory, the contribution of that module to startup duration was uncharted. This was unsurprising, as this aspect of the Firefox performance effort was still quite young. To this day, we have not finished charting startup or even Session Restore’s startup.

So how do we measure the impact of Session Restore on startup?

A first tool we use is Timeline Events, which let us determine how long it takes to reach a specific point of startup. Session Restore has had events `sessionRestoreInitialized` and `sessionRestored` for years. Unfortunately, these events did not tell us much about Session Restore itself.

The first serious attempt at measuring the impact of Session Restore on startup performance was actually not due to the Performance team but rather to the metrics team. Indeed, data obtained through Firefox Health Report participants indicated that something had gone wrong.

Oops, something is going wrong

Indicator `d2` in the graph measures the duration between `firstPaint` (which is the instant at which we start displaying content in our windows) and `sessionRestored` (which is the instant at which we are satisfied that Session Restore has opened its first tab). While this measure is imperfect, the dip was worrying – indeed, it represented startups that lasted several seconds longer than usual.
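For reference, this measure can be reproduced locally: nsIAppStartup exposes startup milestones as Date objects, so a sketch of the computation (assuming a chrome context with Services.jsm) looks like this:

    // A minimal sketch reproducing indicator d2 locally.
    // Fields of getStartupInfo() are absent until the corresponding
    // milestone has been reached.
    Components.utils.import("resource://gre/modules/Services.jsm", this);

    let info = Services.startup.getStartupInfo();
    if (info.firstPaint && info.sessionRestored) {
      let d2 = info.sessionRestored - info.firstPaint; // milliseconds
      console.log("firstPaint -> sessionRestored: " + d2 + " ms");
    }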

Upon further investigation, we concluded that the performance regression was indeed due to Session Restore. While we had not planned to start optimizing the startup component of Session Restore, this battle was forced upon us. We had to recover from that regression and we had to start monitoring startup much better.

A second tool is Telemetry Histograms, which we use for measuring the duration of individual operations, such as reading sessionstore.js or parsing it. We progressively added measures for most of the operations of Session Restore. While these measures are quite helpful, they are unfortunately very unstable in real-world conditions, as they are affected by scheduling (the operations are asynchronous), by the workload of the machine, by the actual contents of sessionstore.js, etc.
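As an illustration, here is roughly how one such measure can be taken with TelemetryStopwatch.jsm, which takes care of the timing arithmetic; the histogram name is the one Firefox uses for the read operation, and the surrounding code is a simplified sketch:

    // A simplified sketch of instrumenting one asynchronous operation.
    Components.utils.import("resource://gre/modules/TelemetryStopwatch.jsm", this);
    Components.utils.import("resource://gre/modules/osfile.jsm", this);
    Components.utils.import("resource://gre/modules/Task.jsm", this);

    let readFile = Task.async(function*(path) {
      TelemetryStopwatch.start("FX_SESSION_RESTORE_READ_FILE_MS");
      try {
        return yield OS.File.read(path);
      } finally {
        // Record the duration, whether the read succeeded or failed.
        TelemetryStopwatch.finish("FX_SESSION_RESTORE_READ_FILE_MS");
      }
    });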

The following graph displays the average duration of reading and decoding sessionstore.js among Telemetry participants:

[graph: average duration of reading and decoding sessionstore.js, by Firefox version]

Differences in color represent successive versions of Firefox. As we can see, this graph is quite noisy, certainly due to the factors mentioned above (the spikes don’t correspond to any meaningful change in Firefox or Session Restore). Also, we can see a considerable increase in the duration of the read operation. This was quite surprising for us, given that this increase corresponds to the introduction of a much faster, off-main-thread reading and decoding primitive. At the time, we were stymied by this change, which did not correspond to our experience. We have now concluded that by changing the asynchronous operation used to read the file, we simply changed the scheduling, which makes the operation appear longer, while in practice it does not block the rest of the startup from taking place on another thread.

One major tool was missing from our arsenal: a stable benchmark, always executed on the same machine, with the same contents of sessionstore.js, that would let us determine more exactly (almost daily, actually) the impact of our patches upon Session Restore:

[graph: Session Restore Talos benchmark]

This test, based on our Talos benchmark suite, has proved both to be very stable, and to react quickly to patches that affected its performance. It measures the duration between the instant at which we start initializing Session Restore (a new event `sessionRestoreInit`) and the instant at which we start displaying the results (event `sessionRestored`).

With these measures at hand, we are now in a much better position to detect performance regressions (or improvements) to Session Restore startup, and to start actually working on optimizing it – we are now preparing to use this suite to experiment with “what if” situations, to determine which levers would be most useful for such optimization work.

Evolution of startup duration

Our first benchmark measures the time elapsed between start and stop of Session Restore if the user has requested all windows to be reopened automatically.

[graph: Talos startup benchmark, windows restored]

As we can see, performance on Linux 32-bit, Windows XP and Mac OS 10.6 is rather decreasing, while performance on Linux 64-bit, Windows 7 and 8, and MacOS 10.8 is improving. Since the algorithm used by Session Restore upon startup is exactly the same for all platforms, and since “modern” platforms are speeding up while “old” platforms are slowing down, this suggests that the performance changes are not due to changes inside Session Restore. The origin of these changes is unclear. I suspect the influence of newer versions of the compilers or some of the external libraries we use, or perhaps new and improved graphics code on some platforms.

Still, seeing the modern platforms speed up is good news. As of Firefox 31, any change we make that causes a slowdown of Session Restore will cause an immediate alert so that we can react immediately.

Our second benchmark measures the time elapsed if the user does not wish windows to be reopened automatically. We still need to read and parse sessionstore.js to find whether it is valid, so as to decide whether we can show the “Restore” button on about:home.

[graph: Talos startup benchmark, windows not restored]

We see peaks in Firefox 27 and Firefox 28, as well as a slight decrease of performance on Windows XP and Linux. Again, in the future, we will be able to react better to such regressions.

The influence of factors upon startup

With the help of our benchmarks, we were able to run “what if” scenarios to find out which of the data manipulated by Session Restore contributed to startup duration. We did this in a setting in which we restore windows:

[graph: startup duration by kind and size of data, windows restored]

and in a setting in which we do not:

[graph: startup duration by kind and size of data, windows not restored]

Interestingly, increasing the size of sessionstore.js apparently has no influence on startup duration. Therefore, we do not need to optimize reading and parsing sessionstore.js. Similarly, optimizing history, cookies or form data would not gain us anything.

The single most expensive piece of data is the set of open windows – interestingly, this is the case even when we do not restore windows. More precisely, any optimization should target, by order of priority:

  1. the cost of opening/restoring windows;
  2. the cost of opening/restoring tabs;
  3. the cost of dealing with windows data, even when we do not restore them.

What’s next?

Now that we have information on which parts of Session Restore startup need to be optimized, the next step is to actually optimize them. Stay tuned!

Shutting down Asynchronously, part 2

May 26, 2014 § Leave a comment

During shutdown of Firefox, subsystems are closed one after another. AsyncShutdown is a module dedicated to expressing shutdown-time dependencies between:

  • services and their clients;
  • shutdown phases (e.g. profile-before-change) and their clients.

Barriers: Expressing shutdown dependencies towards a service

Consider a service FooService. At some point during the shutdown of the process, this service needs to:

  • inform its clients that it is about to shut down;
  • wait until the clients have completed their final operations based on FooService (often asynchronously);
  • only then shut itself down.

This may be expressed as an instance of AsyncShutdown.Barrier. An instance of AsyncShutdown.Barrier provides:

  • a capability client that may be published to clients, to let them register or unregister blockers;
  • methods that let the owner of the barrier consult the state of blockers and wait until all client-registered blockers have been lifted.

Shutdown timeouts

By design, an instance of AsyncShutdown.Barrier will cause a crash if its clients take more than 60 seconds of awake time to lift or remove their blockers (seconds during which the computer is asleep or too busy to do anything are not counted). This mechanism helps ensure that we do not leave the process in a state in which it can neither proceed with shutdown nor be relaunched.

If the CrashReporter is enabled, this crash will report:

  • the name of the barrier that failed;
  • for each blocker that has not been released yet:
      • the name of the blocker;
      • the state of the blocker, if a state function has been provided (see AsyncShutdown.Barrier.state).

Example 1: Simple Barrier client

The following snippet presents an example of a client of FooService that has a shutdown dependency upon FooService. In this case, the client wishes to ensure that FooService is not shut down before some state has been reached. An example is a client that writes data asynchronously and needs to ensure that it has fully written its state to disk before shutdown, even if, due to some user manipulation, shutdown takes place immediately.

    // Some client of FooService called FooClient

    Components.utils.import("resource://gre/modules/FooService.jsm", this);

    // FooService.shutdown is the `client` capability of a `Barrier`.
    // See example 2 for the definition of `FooService.shutdown`.
    // `promiseReachedSomeState` should be an instance of Promise, resolved once
    // we have reached the expected state.
    FooService.shutdown.addBlocker(
      "FooClient: Need to make sure that we have reached some state",
      () => promiseReachedSomeState
    );

Example 2: Simple Barrier owner

The following snippet presents an example of a service FooService that wishes to ensure that all clients have had a chance to complete any outstanding operations before FooService shuts down.

    // Module FooService

    Components.utils.import("resource://gre/modules/AsyncShutdown.jsm", this);
    Components.utils.import("resource://gre/modules/Task.jsm", this);

    this.exports = ["FooService"];

    let shutdown = new AsyncShutdown.Barrier("FooService: Waiting for clients before shutting down");

    // Export the `client` capability, to let clients register shutdown blockers
    FooService.shutdown = shutdown.client;

    // This Task should be triggered at some point during shutdown, generally
    // as a client to another Barrier or Phase. Triggering this Task is not covered
    // in this snippet.
    let onshutdown = Task.async(function*() {
      // Wait for all registered clients to have lifted the barrier
      yield shutdown.wait();

      // Now deactivate FooService itself.
      // ...
    });

Frequently, a service that owns an AsyncShutdown.Barrier is itself a client of another Barrier.


Example 3: More sophisticated Barrier client

The following snippet presents FooClient2, a more sophisticated client of FooService that needs to perform a number of operations during shutdown but before the shutdown of FooService. Also, given that this client is more sophisticated, we provide a function returning the state of FooClient2 during shutdown. If for some reason FooClient2’s blocker is never lifted, this state can be reported as part of a crash report.

    // Some client of FooService called FooClient2

    Components.utils.import("resource://gre/modules/FooService.jsm", this);

    FooService.shutdown.addBlocker(
      "FooClient2: Collecting data, writing it to disk and shutting down",
      () => Blocker.wait(),
      () => Blocker.state
    );

    let Blocker = {
      // This field contains information on the status of the blocker.
      // It can be any JSON serializable object.
      state: "Not started",

      wait: Task.async(function*() {
        // This method is called once FooService starts informing its clients that
        // FooService wishes to shut down.

        // Update the state as we go. If the Barrier is used in conjunction with
        // a Phase, this state will be reported as part of a crash report if FooClient fails
        // to shutdown properly.
        this.state = "Starting";

        let data = yield collectSomeData();
        this.state = "Data collection complete";

        try {
          yield writeSomeDataToDisk(data);
          this.state = "Data successfully written to disk";
        } catch (ex) {
          this.state = "Writing data to disk failed, proceeding with shutdown: " + ex;
        }

        yield FooService.oneLastCall();
        this.state = "Ready";
      })
    };

Example 4: A service with both internal and external dependencies

    // Module FooService2

    Components.utils.import("resource://gre/modules/AsyncShutdown.jsm", this);
    Components.utils.import("resource://gre/modules/Task.jsm", this);
    Components.utils.import("resource://gre/modules/Promise.jsm", this);

    this.exports = ["FooService2"];

    let shutdown = new AsyncShutdown.Barrier("FooService2: Waiting for clients before shutting down");

    // Export the `client` capability, to let clients register shutdown blockers
    FooService2.shutdown = shutdown.client;

    // A second barrier, used to avoid shutting down while any connections are open.
    let connections = new AsyncShutdown.Barrier("FooService2: Waiting for all FooConnections to be closed before shutting down");

    let isClosed = false;

    FooService2.openFooConnection = function(name) {
      if (isClosed) {
        throw new Error("FooService2 is closed");
      }

      let deferred = Promise.defer();
      connections.client.addBlocker("FooService2: Waiting for connection " + name + " to close", deferred.promise);

      // ...


      return {
        // ...
        // Some FooConnection object. Presumably, it will have additional methods.
        // ...
        close: function() {
          // ...
          // Perform any operation necessary for closing
          // ...

          // Don't hoard blockers.
          connections.client.removeBlocker(deferred.promise);

          // The barrier MUST be lifted, even if removeBlocker has been called.
          deferred.resolve();
        }
      };
    };


    // This Task should be triggered at some point during shutdown, generally
    // as a client to another Barrier. Triggering this Task is not covered
    // in this snippet.
    let onshutdown = Task.async(function*() {
      // Wait for all registered clients to have lifted the barrier.
      // These clients may open instances of FooConnection if they need to.
      yield shutdown.wait();

      // Now stop accepting any other connection request.
      isClosed = true;

      // Wait for all instances of FooConnection to be closed.
      yield connections.wait();

      // Now finish shutting down FooService2
      // ...
    });

Phases: Expressing dependencies towards phases of shutdown

The shutdown of a process takes place by phases, such as:

  • profileBeforeChange (once this phase is complete, there is no guarantee that the process has access to a profile directory);
  • webWorkersShutdown (once this phase is complete, JavaScript does not have access to workers anymore);
  • …

Like services, phases have clients. For instance, all users of web workers MUST have finished using their web workers before the end of phase webWorkersShutdown.

Module AsyncShutdown provides pre-defined barriers for a set of well-known phases. Each of the barriers provided blocks the corresponding shutdown phase until all clients have lifted their blockers.
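For instance, a client that needs to finish writing its state before the profile directory goes away could register as follows (a sketch; MyModule is a hypothetical name and promiseWriteComplete is assumed to be a Promise resolved once the write is complete):

    Components.utils.import("resource://gre/modules/AsyncShutdown.jsm", this);

    // Block the profile-before-change phase until our last write is complete.
    AsyncShutdown.profileBeforeChange.addBlocker(
      "MyModule: Finishing to write state to disk",
      () => promiseWriteComplete
    );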

List of phases

AsyncShutdown.profileChangeTeardown

The client capability for clients wishing to block asynchronously during observer notification “profile-change-teardown”.

AsyncShutdown.profileBeforeChange

The client capability for clients wishing to block asynchronously during observer notification “profile-before-change”. Once the barrier is resolved, clients other than Telemetry MUST NOT access files in the profile directory and clients MUST NOT use Telemetry anymore.

AsyncShutdown.sendTelemetry

The client capability for clients wishing to block asynchronously during observer notification “profile-before-change2”. Once the barrier is resolved, Telemetry must stop its operations.

AsyncShutdown.webWorkersShutdown

The client capability for clients wishing to block asynchronously during observer notification “web-workers-shutdown”. Once the phase is complete, clients MUST NOT use web workers.

Is my data on the disk? Safety properties of OS.File.writeAtomic

February 5, 2014 § 1 Comment

If you have been writing front-end or add-on code recently, chances are that you have been using library OS.File and, in particular, OS.File.writeAtomic to write files. (Note: If you have been writing files without using OS.File.writeAtomic, chances are that you are doing something wrong that will cause Firefox to jank – please don’t.) As the name implies, OS.File.writeAtomic will make efforts to write your data atomically, so as to ensure its survivability in case of crash, power loss, etc.

However, you should not trust this function blindly, because it has its limitations. Let us take a look at exactly what guarantees writeAtomic provides.
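As a reminder, a typical use looks like the following sketch, in which a data structure is serialized to JSON and written with a tmpPath; the file name is, of course, just an example. The variants discussed below differ only in the options passed:

    Components.utils.import("resource://gre/modules/osfile.jsm", this);

    let path = OS.Path.join(OS.Constants.Path.profileDir, "mydata.json");
    let bytes = new TextEncoder().encode(JSON.stringify({ answer: 42 }));

    // Write-and-rename variant; see below for the guarantees it offers.
    OS.File.writeAtomic(path, bytes, { tmpPath: path + ".tmp" }).then(
      () => console.log("write complete"),
      ex => console.error("write failed", ex)
    );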

Algorithm: just write

Snippet OS.File.writeAtomic(path, data)

What it does

  1. reduce the size of the file at path to 0;
  2. send data to the operating system kernel for writing;
  3. close the file.

Worst case scenarios

  1. if the process crashes between 1. and 2. (a few microseconds), the full content of path may be lost;
  2. if the operating system crashes or the computer loses power suddenly before the kernel flushes its buffers (which may happen at any point up to 30 seconds after 1.), the full content of path may be lost;
  3. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing (which may happen at any point after 1., typically up to 30 seconds), and if your data is larger than one sector (typically 32kb), data may be written incompletely, resulting in a corrupted file at path.

Performance very good.

Algorithm: write and rename

Snippet OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp" })

What it does

  1. create a new file at tmpPath;
  2. send data to the operating system kernel for writing to tmpPath;
  3. close the file;
  4. rename tmpPath on top of path.

Worst case scenarios

  1. if the process crashes at any moment, nothing is lost, but a file tmpPath may be left on the disk;
  2. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing metadata (which may happen at any point after 1., typically up to 30 seconds), the full content of path may be lost;
  3. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing (which may happen at any point after 1., typically up to 30 seconds), and if your data is larger than one sector (typically 32kb), data may be written incompletely, resulting in a corrupted file at path.

Performance almost as good as Just Write.

Side-note On the ext4fs file system, the kernel automatically adds a flush, which transparently transforms the safety properties of this operation into those of the algorithm detailed next.

Native equivalent In XPCOM/C++, the mostly-equivalent solution is the atomic-file-output-stream.

Algorithm: write, flush and rename

Snippet OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp", flush: true })

What it does

  1. create a new file at tmpPath;
  2. send data to the operating system kernel for writing to tmpPath;
  3. close the file;
  4. flush the writing of data to tmpPath;
  5. rename tmpPath on top of path.

Worst case scenarios

  1. if the process crashes at any moment, nothing is lost, but a file tmpPath may be left on the disk;
  2. if the operating system crashes, nothing is lost, but a file tmpPath may be left on the disk;
  3. if the computer loses power suddenly while the hard drive is flushing its internal hardware buffers (which is very hard to predict), nothing is lost, but an incomplete file tmpPath may be left on the disk.

Performance some operating systems (Windows) or file systems (ext3fs) cannot flush a single file and rather need to flush all the files on the device, which considerably slows down the full operating system. On some others (ext4fs) this operation is essentially free. On some versions of MacOS X, flushing actually doesn’t do anything.

Native equivalent In XPCOM/C++, the mostly-equivalent solution is the safe-file-output-stream.

Algorithm: write, backup, rename

(not landed yet)

Snippet OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp", backupTo: path + ".backup"})

What it does

  1. create a new file at tmpPath;
  2. send data to the operating system kernel for writing to tmpPath;
  3. close the file;
  4. rename the file at path to backupTo;
  5. rename the file at tmpPath on top of path.

Worst case scenarios

  1. if the process crashes between 4. and 5., the file at path may be lost and backupTo should be used instead for recovery;
  2. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing metadata (which may happen at any point after 1., typically up to 30 seconds), the file at path may be empty and backupTo should be used instead for recovery;
  3. if the operating system crashes or the computer loses power suddenly while the operating system kernel is flushing (which may happen at any point after 1., typically up to 30 seconds), and if your data is larger than one sector (typically 32kb), data may be written incompletely, resulting in a corrupted file at path, in which case backupTo should be used instead for recovery.

Performance almost as good as Write and Rename.
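Recovery itself is left to the caller. Here is a hypothetical sketch of what it might look like, under the same path and backupTo conventions as the snippet above (the JSON payload is an assumption for the example):

    // Hypothetical recovery logic for the write/backup/rename algorithm:
    // prefer the main file, and fall back to the backup if the main file
    // is missing or corrupted.
    Components.utils.import("resource://gre/modules/osfile.jsm", this);
    Components.utils.import("resource://gre/modules/Task.jsm", this);

    let readWithRecovery = Task.async(function*(path) {
      for (let candidate of [path, path + ".backup"]) {
        try {
          let bytes = yield OS.File.read(candidate);
          return JSON.parse(new TextDecoder().decode(bytes));
        } catch (ex) {
          // Reading or parsing failed; try the next candidate.
        }
      }
      return null; // neither file could be read
    });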

JavaScript, this static language (part 1)

October 20, 2011 § 7 Comments

tl;dr

JavaScript is a dynamic language. However, by borrowing a few pages from static languages – and a few existing tools – we can considerably improve reliability and maintainability.

« Writing one million lines of code of JavaScript is simply impossible »

(source: speaker in a recent open-source conference)

JavaScript is a dynamic language – a very dynamic one, in which programs can rewrite themselves, objects may lose or gain methods through side-effects on themselves or on their prototypes, and, more generally, nothing is fixed.

And dynamic languages are fun. They make writing code simple and fast. They are vastly more suited to prototyping than static languages. Dynamism also makes it possible to write extremely powerful tools that can perform JIT translation from other syntaxes, add missing features to existing classes and functions and more generally fully customize the experience of the developer.

Unfortunately, such dynamism comes with severe drawbacks. Safety-minded developers will tell you that, because of this dynamism, they simply cannot trust any snippet, as this snippet may behave in a manner that does not match its source code. They will conclude that you cannot write safe, or even modular, applications in JavaScript.

Many engineering-minded developers will also tell you that they simply cannot work in JavaScript, and they will not have much difficulty finding examples of situations in which the use of a dynamic language in a complex project can, effectively, kill the project. If you do not believe them, consider a large codebase, and the (rather common) case of a large transversal refactoring, for instance to replace an obsolete API with a newer one. Do this in Java (or, even better, in a more modern mostly-static language such as OCaml, Haskell, F# or Scala), and you can use the compiler to automatically and immediately spot any place where the API has not been updated, along with a number of errors that you may have made during the refactoring. Even better, if the API was designed to be safe-by-design, the compiler will automatically spot even complex errors that you may have made during refactoring, including calling functions/methods in the wrong order, or ownership errors. Do the same in JavaScript and, while your code will be written faster, you should expect to be hunting bugs weeks or even months later.

I know that the Python community has suffered considerably from such problems during version transitions. I am less familiar with the world of PHP, but I believe it is no accident that Facebook is progressively arming itself with PHP static analysis tools. I also believe that it is no accident that Google is now introducing a typed language as a candidate replacement for JavaScript.

Now it is the turn of JavaScript, or if not today, surely tomorrow. I have seen applications consisting of hundreds of thousands of lines of JavaScript. And if just maintaining these applications is not difficult enough, the rapid release cycles of both Mozilla and Chrome mean that external and internal APIs are now changing every six weeks. This means breakage. And, more precisely, this means that we need new tools to help us predict breakages and help developers (both add-on developers and browser contributors) react before these breakages hit their users.

So let’s do something about it. Let’s make our JavaScript a strongly, statically typed language!

Or let’s do something a little smarter.

JavaScript, with discipline

At this point, I would like to ask readers to please kindly stop preparing tar and feathers for me. I realize fully that JavaScript is a dynamic language and that turning it into a static language will certainly result in something quite disagreeable to use. Something that is verbose, has lost most of the power of JavaScript, and gained no safety guarantees.

Trust me on this, there is a way to obtain the best of both worlds, without sacrificing anything. Before discussing the manner in which we can attain this, let us first set objectives that we can hope to achieve with a type-disciplined JavaScript.

Finding errors

The main benefit of strong, static typing, is that it helps find errors.

  • Even the simplest analyses can find all syntax errors, all unbound variables, all variables bound several times and consequently almost all scoping errors, which can already save considerable time for developers. Such an analysis requires no human intervention from the developer besides, of course, fixing any error that has been thus detected. As a bonus, in most cases, the analysis can suggest fixes.
  • Similarly trivial forms of analysis can also detect suspicious calls to break or continue, weird uses of switch(), suspicious calls to private fields of objects, as well as suspicious occurrences of eval – in my book, eval is always suspicious.
  • Slightly more sophisticated analyses can find most occurrences of functions or methods invoked with the wrong number of arguments (see the sketch after this list). Again, this is without human intervention. With type annotations/documentation, we can move from most occurrences to all occurrences.
  • This same analysis, when applied to public APIs, can provide developers with more information regarding how their code can be (mis)used.
  • At the same level of complexity, analysis can find most erroneous access to fields/methods, suspicious array traversals, suspicious calls to iterators/generators, etc. Again, with type annotations/documentation, we can move from most to all.
  • Going a little further in complexity, analysis can find fragile uses of this, uncaught exceptions, etc.
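To make this concrete, here is a contrived sketch of the kind of code such an analysis flags, even without any annotations:

    function frobnicate(foo, bar) {
      var result = foo + bar;
      return resutl; // flagged: `resutl` is unbound (typo for `result`)
    }

    frobnicate(1, 2, 3); // flagged: called with 3 arguments, expects 2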

Types as documentation

Public APIs must be documented. This is true in any language, no matter where it stands on the static/dynamic scale. In static languages, one may observe how documentation generation tools insert type information, either from annotations provided by the user (as in Java/JavaDoc) or from type information inferred by the compiler (as in OCaml/OCamlDoc). But look at the documentation of Python, Erlang or JavaScript libraries and you will find the exact same information, either clearly labelled or hidden somewhere in the prose: every single value/function/method comes with a form of type signature, whether formal or informal.

In other words, type information is a critical piece of documentation. If JavaScript developers provide explicit type annotations along with their public APIs, they have simply advanced the documentation, not wasted time. Even better, if such type can be automatically inferred from the source code, this piece of documentation can be automatically written by the type-checker.

Types as QA metric

While disciples of type-checking tend to consider typing as something boolean, the truth is more subtle: it is quite possible that one piece of code does not pass type-checking while the rest of the code does. Indeed, with advanced type systems that do not support decidable type inference, this is only to be expected.

The direct consequence is that type-checking can be seen as a spectrum of quality. Code can be seen as failing if the static checking phase can detect evident errors, typically unbound values or out-of-scope break, continue, etc. Otherwise, every attempt to type a value that results in a type error is a hint of poor QA practice that can be reported to the developer. This yields a percentage of values that can be typed – obtain 100% and get a QA stamp of approval for this specific metric.

Typed JavaScript, in practice

Most of the previous paragraphs are already possible in practice, with existing tools. Indeed, I have personally experienced using JavaScript static type checking as a bug-finding tool and a QA metric. On the first day, this technique helped me find plenty of dead code, as well as 750+ errors, with only a dozen false positives.

For this purpose, I have used Google’s Closure Compiler. This tool detects errors, supports a simple vocabulary for documentation/annotations, fails only if very clear errors are detected (typically syntax errors) and provides as metric a percentage of well-typed code. It does not accept JavaScript 1.7 yet, unfortunately, but this can certainly be added.
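The annotations in question are JSDoc-style comments. For instance, the following sketch is the sort of input the Closure Compiler type-checks (the function itself is, of course, just an example):

    /**
     * Repeat a string.
     * @param {string} text The string to repeat.
     * @param {number} copies How many copies to concatenate.
     * @return {string}
     */
    function repeat(text, copies) {
      return new Array(copies + 1).join(text);
    }

    repeat("abc", "3"); // type error: "3" is a string, not a number
    repeat("abc");      // type error: missing argument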

I also know of existing academic work to provide static type-checking for JavaScript, although I am unsure as to the maturity of such works.

Finally, Mozilla is currently working on a different type inference mechanism for JavaScript. While this mechanism is not primarily aimed at finding errors, my personal intuition is that it may be possible to repurpose it.

What’s next?

I hope that I have convinced you of the interest of investigating manners of introducing static, strong type-checking to JavaScript. In a second part, I will detail how and where I believe that this can be done in Mozilla.

Next performance: OWASP 2010

June 22, 2010 § Leave a comment

I haven’t had much time to update this blog in the past few months. Well, the good news is that all this time — mostly spent on OPA — is starting to pay off. I’m starting to like our OPA platform quite a lot. Our next release, OPA S3, is shaping up to be absolutely great.

I’m now on my way to OWASP AppSec Research 2010, where I’ll present some of the core design of OPA. Normally, my slides will be made public after the talk, so I’ll try and link them here as soon as I return.

In the meantime, if you’re curious about OPA, I’m starring in a few Dailymotion tutorial slideshows :)

An IRC channel for OPA

January 26, 2010 § Leave a comment

Just a short entry to inform you that we now have an IRC channel for general discussion about OPA. It’s on Freenode and it’s called, well, #opa.

We call it OPA

November 28, 2009 § 7 Comments

Web applications are nice. They’re useful, they’re cross-platform, users need no installation, no upgrades, no maintenance, not even the computing or storage power to which they are used. As weird as it may sound, I’ve even seen announcements for web applications supposed to run your games on distant high-end computers so that you can actually play on low-end computers. Go web application!

Of course, there are a few downsides to web applications. Firstly, they require a web connection. Secondly, they are largely composed of plumbing. Finally, ensuring their security is a constant fight.

