Revisiting uncaught asynchronous errors in the Mozilla Platform

May 30, 2014 § Leave a comment

Consider the following feature and its xpcshell test:

// In a module Foo
function doSomething() {
  // ...
  OS.File.writeAtomic("/an invalid path", "foo");
  // ...
}

// In the corresponding unit test
add_task(function*() {
  // ...
  Foo.doSomething();
  // ...
});

Function doSomething is obviously wrong, as it performs a write operation that cannot succeed. Until we started our work on uncaught asynchronous errors, the test passed without any warning. A few months ago, we managed to rework Promise to ensure that the test at least produced a warning. Now, this test will actually fail with the following message:

A promise chain failed to handle a rejection – Error during operation ‘write’ at …

This is particularly useful for tracking subsystems that completely forget to handle errors or tasks that forget to call yield.
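The underlying idea can be sketched with standard JavaScript promises (illustrative only; the names and mechanism here are not Promise.jsm's actual internals): track every rejection, forget it once a handler is registered, and report whatever remains at a checkpoint.

```javascript
// Minimal sketch of checkpoint-based uncaught-rejection detection.
// Illustrative only: not how Promise.jsm actually implements it.
const pendingRejections = new Map();

function trackedReject(reason) {
  const promise = Promise.reject(reason);
  pendingRejections.set(promise, reason);
  const originalCatch = promise.catch.bind(promise);
  // Registering a rejection handler marks the rejection as handled.
  promise.catch = handler => {
    pendingRejections.delete(promise);
    return originalCatch(handler);
  };
  return promise;
}

// The checkpoint: called e.g. at the end of an add_task or of the whole test.
function uncaughtRejections() {
  return Array.from(pendingRejections.values());
}

const p = trackedReject(new Error("Error during operation 'write'"));
console.log(uncaughtRejections().length); // 1: this would fail the test
p.catch(() => {});
console.log(uncaughtRejections().length); // 0: the rejection was handled
```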

Who is affected?

This change does not affect the runtime behavior of applications, only test suites.

  • xpcshell: landed as part of bug 976205;
  • mochitest / devtools tests: waiting for all existing offending tests to be fixed, code is ready as part of bug 1016387;
  • add-on SDK: not started, bug 998277.

This change only affects the use of Promise.jsm. Support for DOM Promise is in bug 989960.

Details

We obtain a rejected Promise by:

  • throwing from inside a Task; or
  • throwing from a Promise handler; or
  • calling Promise.reject.

A rejection can be handled by any client of the rejected promise by registering a rejection handler. To complicate things, the rejection handler can be registered either before the rejection or after it.

In this series of patches, we cause a test failure if we end up with a Promise that is rejected and has no rejection handler either:

  • immediately after the Promise is garbage-collected;
  • at the end of the add_task during which the rejection took place;
  • at the end of the entire xpcshell test;

(whichever comes first).
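With standard JavaScript promises, the two registration orders look like this (a plain-JS illustration; Promise.jsm's checkpoints treat both the same way):

```javascript
// A rejection counts as "handled" whether the handler is registered before
// or after the rejection itself occurs. Standard promises, for illustration.
const handled = [];

// Handler registered BEFORE the rejection takes place:
const rejectsLater = new Promise((resolve, reject) =>
  setTimeout(() => reject(new Error("later")), 10));
rejectsLater.catch(e => handled.push(e.message));

// Handler registered AFTER the promise is already rejected:
const alreadyRejected = Promise.reject(new Error("already"));
alreadyRejected.catch(e => handled.push(e.message));

// By a later checkpoint, neither promise counts as an uncaught rejection.
setTimeout(() => console.log(handled), 50); // [ 'already', 'later' ]
```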

Opting out

Very few tests legitimately need to raise asynchronous errors without catching them. So far, we have needed this in two tests: one that tests the asynchronous error mechanism itself, and another that willingly crashes subprocesses to ensure that Firefox remains stable.

You should not need to opt out of this mechanism. However, if you absolutely need to, we have a mechanism for opting out. For more details, see object Promise.Debugging in Promise.jsm.

Any question?

Feel free to contact either me or Paolo Amadini.

Shutting down Asynchronously, part 2

May 26, 2014 § Leave a comment

During shutdown of Firefox, subsystems are closed one after another. AsyncShutdown is a module dedicated to expressing shutdown-time dependencies between:

  • services and their clients;
  • shutdown phases (e.g. profile-before-change) and their clients.

Barriers: Expressing shutdown dependencies towards a service

Consider a service FooService. At some point during the shutdown of the process, this service needs to:

  • inform its clients that it is about to shut down;
  • wait until the clients have completed their final operations based on FooService (often asynchronously);
  • only then shut itself down.

This may be expressed as an instance of AsyncShutdown.Barrier. An instance of AsyncShutdown.Barrier provides:

  • a capability client that may be published to clients, to let them register or unregister blockers;
  • methods that let the owner of the barrier consult the state of blockers and wait until all client-registered blockers have been resolved.
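The contract can be sketched in plain JavaScript (a toy model, not AsyncShutdown's implementation): the client capability only exposes blocker registration, while the owner keeps the wait and state methods to itself.

```javascript
// Toy model of AsyncShutdown.Barrier. Illustrative only.
class Barrier {
  constructor(name) {
    this.name = name;
    this._blockers = new Map(); // blocker name -> promise
    // The `client` capability: registration only, no way to wait or inspect.
    this.client = {
      addBlocker: (blockerName, promise) =>
        this._blockers.set(blockerName, promise),
      removeBlocker: promise => {
        for (const [blockerName, p] of this._blockers) {
          if (p === promise) this._blockers.delete(blockerName);
        }
      },
    };
  }
  // Owner-only: the names of blockers that are still registered.
  get state() {
    return Array.from(this._blockers.keys());
  }
  // Owner-only: wait until every registered blocker has resolved.
  async wait() {
    await Promise.all(this._blockers.values());
  }
}

// Usage: the owner publishes `barrier.client` and keeps `wait()` to itself.
const barrier = new Barrier("FooService: Waiting for clients");
let lift;
barrier.client.addBlocker("FooClient: flushing state",
                          new Promise(resolve => (lift = resolve)));
console.log(barrier.state); // [ 'FooClient: flushing state' ]
lift();
barrier.wait().then(() => console.log("all blockers lifted"));
```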

Shutdown timeouts

By design, an instance of AsyncShutdown.Barrier will cause a crash if its clients take more than 60 seconds of awake time to lift or remove their blockers (“awake” meaning that seconds during which the computer is asleep, or too busy to do anything, are not counted). This mechanism helps ensure that we do not leave the process in a state in which it can neither proceed with shutdown nor be relaunched.

If the CrashReporter is enabled, this crash will report:

  • the name of the barrier that failed;
  • for each blocker that has not been released yet:

  • the name of the blocker;
  • the state of the blocker, if a state function has been provided (see AsyncShutdown.Barrier.state).
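The timeout can be modeled as a race between the barrier and a watchdog timer (a sketch only: the real implementation crashes with a report, and discounts time spent asleep, which a plain timer cannot do):

```javascript
// Watchdog sketch: if blockers take too long, report the barrier's name and
// the state of the remaining blockers, then fail. Illustrative only.
async function waitWithWatchdog(barrier, timeoutMs) {
  let timer;
  const watchdog = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error(
      `Barrier "${barrier.name}" timed out; remaining blockers: ` +
      JSON.stringify(barrier.state))), timeoutMs);
  });
  try {
    await Promise.race([barrier.wait(), watchdog]);
  } finally {
    clearTimeout(timer);
  }
}

// Demo with a stub barrier whose single blocker never resolves:
const stuck = {
  name: "FooService: Waiting for clients",
  state: ["FooClient2: Collecting data"],
  wait: () => new Promise(() => {}),
};
waitWithWatchdog(stuck, 50).catch(e => console.log(e.message));
```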

Example 1: Simple Barrier client

The following snippet presents an example of a client of FooService that has a shutdown dependency upon FooService. In this case, the client wishes to ensure that FooService does not shut down before some state has been reached. A typical example is a client that writes data asynchronously and needs to ensure that its state is fully written to disk before shutdown, even if some user action triggers shutdown immediately.

// Some client of FooService called FooClient

Components.utils.import("resource://gre/modules/FooService.jsm", this);

// FooService.shutdown is the `client` capability of a `Barrier`.
// See example 2 for the definition of `FooService.shutdown`
FooService.shutdown.addBlocker(
  "FooClient: Need to make sure that we have reached some state",
  () => promiseReachedSomeState
);
// promiseReachedSomeState should be an instance of Promise resolved once
// we have reached the expected state

Example 2: Simple Barrier owner

The following snippet presents an example of a service FooService that wishes to ensure that all clients have had a chance to complete any outstanding operations before FooService shuts down.

    // Module FooService

    Components.utils.import("resource://gre/modules/AsyncShutdown.jsm", this);
    Components.utils.import("resource://gre/modules/Task.jsm", this);

    this.EXPORTED_SYMBOLS = ["FooService"];
    this.FooService = {};

    let shutdown = new AsyncShutdown.Barrier("FooService: Waiting for clients before shutting down");

    // Export the `client` capability, to let clients register shutdown blockers
    FooService.shutdown = shutdown.client;

    // This Task should be triggered at some point during shutdown, generally
    // as a client to another Barrier or Phase. Triggering this Task is not covered
    // in this snippet.
    let onshutdown = Task.async(function*() {
      // Wait for all registered clients to have lifted the barrier
      yield shutdown.wait();

      // Now deactivate FooService itself.
      // ...
    });

Frequently, a service that owns an AsyncShutdown.Barrier is itself a client of another Barrier.

 

Example 3: More sophisticated Barrier client

The following snippet presents FooClient2, a more sophisticated client of FooService that needs to perform a number of operations during shutdown but before the shutdown of FooService. Also, given that this client is more sophisticated, we provide a function returning the state of FooClient2 during shutdown. If for some reason FooClient2’s blocker is never lifted, this state can be reported as part of a crash report.

    // Some client of FooService called FooClient2

    Components.utils.import("resource://gre/modules/FooService.jsm", this);

    FooService.shutdown.addBlocker(
      "FooClient2: Collecting data, writing it to disk and shutting down",
      () => Blocker.wait(),
      () => Blocker.state
    );

    let Blocker = {
      // This field contains information on the status of the blocker.
      // It can be any JSON serializable object.
      state: "Not started",

      wait: Task.async(function*() {
        // This method is called once FooService starts informing its clients that
        // FooService wishes to shut down.

        // Update the state as we go. If the Barrier is used in conjunction with
        // a Phase, this state will be reported as part of a crash report if FooClient fails
        // to shutdown properly.
        this.state = "Starting";

        let data = yield collectSomeData();
        this.state = "Data collection complete";

        try {
          yield writeSomeDataToDisk(data);
          this.state = "Data successfully written to disk";
        } catch (ex) {
          this.state = "Writing data to disk failed, proceeding with shutdown: " + ex;
        }

        yield FooService.oneLastCall();
        this.state = "Ready";
      })
    };

Example 4: A service with both internal and external dependencies

    // Module FooService2

    Components.utils.import("resource://gre/modules/AsyncShutdown.jsm", this);
    Components.utils.import("resource://gre/modules/Task.jsm", this);
    Components.utils.import("resource://gre/modules/Promise.jsm", this);

    this.EXPORTED_SYMBOLS = ["FooService2"];
    this.FooService2 = {};

    let shutdown = new AsyncShutdown.Barrier("FooService2: Waiting for clients before shutting down");

    // Export the `client` capability, to let clients register shutdown blockers
    FooService2.shutdown = shutdown.client;

    // A second barrier, used to avoid shutting down while any connections are open.
    let connections = new AsyncShutdown.Barrier("FooService2: Waiting for all FooConnections to be closed before shutting down");

    let isClosed = false;

    FooService2.openFooConnection = function(name) {
      if (isClosed) {
        throw new Error("FooService2 is closed");
      }

      let deferred = Promise.defer();
      connections.client.addBlocker("FooService2: Waiting for connection " + name + " to close",  deferred.promise);

      // ...


      return {
        // ...
        // Some FooConnection object. Presumably, it will have additional methods.
        // ...
        close: function() {
          // ...
          // Perform any operation necessary for closing
          // ...

          // Don't hoard blockers.
          connections.client.removeBlocker(deferred.promise);

          // The barrier MUST be lifted, even if removeBlocker has been called.
          deferred.resolve();
        }
      };
    };


    // This Task should be triggered at some point during shutdown, generally
    // as a client to another Barrier. Triggering this Task is not covered
    // in this snippet.
    let onshutdown = Task.async(function*() {
      // Wait for all registered clients to have lifted the barrier.
      // These clients may open instances of FooConnection if they need to.
      yield shutdown.wait();

      // Now stop accepting any other connection request.
      isClosed = true;

      // Wait for all instances of FooConnection to be closed.
      yield connections.wait();

      // Now finish shutting down FooService2
      // ...
    });

Phases: Expressing dependencies towards phases of shutdown

The shutdown of a process takes place phase by phase, for instance:

  • profileBeforeChange (once this phase is complete, there is no guarantee that the process has access to a profile directory);
  • webWorkersShutdown (once this phase is complete, JavaScript does not have access to workers anymore);
  • …

Like services, phases have clients. For instance, all users of web workers MUST have finished using their web workers before the end of phase webWorkersShutdown.

Module AsyncShutdown provides pre-defined barriers for a set of well-known phases. Each of the barriers provided blocks the corresponding shutdown phase until all clients have lifted their blockers.
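The phase mechanism can be sketched as a sequence of named sets of blockers (a toy model, not AsyncShutdown's implementation):

```javascript
// Toy model of phased shutdown: each phase has its own set of client
// blockers, and phases complete strictly in order. Illustrative only.
const phases = new Map(); // phase name -> array of blocker promises

function addBlocker(phase, promise) {
  if (!phases.has(phase)) phases.set(phase, []);
  phases.get(phase).push(promise);
}

async function shutdown(order) {
  const completed = [];
  for (const phase of order) {
    // A phase ends only once all of its blockers have been lifted.
    await Promise.all(phases.get(phase) || []);
    completed.push(phase);
  }
  return completed;
}

// A web-worker user must finish before the end of webWorkersShutdown:
addBlocker("webWorkersShutdown",
           new Promise(resolve => setTimeout(resolve, 10)));

shutdown(["profileChangeTeardown", "profileBeforeChange",
          "sendTelemetry", "webWorkersShutdown"])
  .then(completed => console.log(completed.join(" -> ")));
```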

List of phases

AsyncShutdown.profileChangeTeardown

The client capability for clients wishing to block asynchronously during observer notification “profile-change-teardown”.

AsyncShutdown.profileBeforeChange

The client capability for clients wishing to block asynchronously during observer notification “profile-before-change”. Once the barrier is resolved, clients other than Telemetry MUST NOT access files in the profile directory and clients MUST NOT use Telemetry anymore.

AsyncShutdown.sendTelemetry

The client capability for clients wishing to block asynchronously during observer notification “profile-before-change2”. Once the barrier is resolved, Telemetry must stop its operations.

AsyncShutdown.webWorkersShutdown

The client capability for clients wishing to block asynchronously during observer notification “web-workers-shutdown”. Once the phase is complete, clients MUST NOT use web workers.

Recent changes to OS.File

April 8, 2014 § 5 Comments

A quick post to summarize some of the recent improvements to OS.File.

Encoding/decoding

To write a string, you can now pass the string directly to writeAtomic:

OS.File.writeAtomic(path, "Here is a string", { encoding: "utf-8"})

Similarly, you can now read strings from read:

OS.File.read(path, { encoding: "utf-8" } ); // Resolves to a string.

Doing this is at least as fast as calling TextEncoder/TextDecoder yourself (see below).
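For comparison, the manual route that the encoding option replaces uses the standard TextEncoder/TextDecoder; only the encoding steps are shown here, since writeAtomic itself is Mozilla-specific:

```javascript
// Manual encoding/decoding with the standard TextEncoder/TextDecoder:
// this is the work the `encoding` option now performs for you, off the
// main thread.
const bytes = new TextEncoder().encode("Here is a string"); // UTF-8 Uint8Array
// ...these bytes would then be passed to OS.File.writeAtomic(path, bytes)...
const text = new TextDecoder("utf-8").decode(bytes);
console.log(text); // "Here is a string"
```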

Native implementation

OS.File.read has been reimplemented in C++. The main consequence is that this function can now be used safely during startup, without having to wait for the underlying OS.File ChromeWorker to start. Also, decoding (see above) is performed off the main thread, which makes it much faster.

According to my benchmarks, using OS.File.read to read strings is about 2-5x faster than NetUtil.asyncFetch on large files and doesn’t block the main thread for more than 5ms, whereas asyncFetch performs string decoding on the main thread. Also, unlike NetUtil.asyncFetch, it doesn’t perform any main thread I/O.

Backups

When using writeAtomic, it is now possible to request existing files to be backed up almost atomically. In many cases, this is a good strategy to ensure that data is safely written to disk, without having to use a flush, which would be expensive for the whole system.

yield OS.File.writeAtomic(path, data, { tmpPath: path + ".tmp", backupTo: path + ".backup" });

Compression

writeAtomic and read now both support lz4 compression:

yield OS.File.writeAtomic(path, data, { compression: "lz4"});
yield OS.File.read(path, { compression: "lz4"});

Note that this format will not be understood by any command-line tool. It is somewhat proprietary. Also note that (de)compression is performed on the ChromeWorker thread for the time being, so it doesn’t benefit from the native reimplementation mentioned above.

Creating directories recursively

let dir = OS.Path.join(OS.Constants.Path.profileDir, "a", "b", "c", "d");
yield OS.File.makeDir(dir, { from: OS.Constants.Path.profileDir });

A curse and a blessing

April 7, 2014 § 38 Comments

The curse

When Brendan Eich stepped in as CEO, he and Mozilla immediately faced a storm demanding his resignation because of his political opinions. To the best of my knowledge, none of those responsible for the storm were employees of the Mozilla Corporation, and only 4 or 5 of them were members of the Mozilla Community (they were part of the Mozilla Foundation, which is a different organization).

When Brendan Eich resigned from his position as an employee of Mozilla, Mozilla was immediately faced with a storm assuming that Brendan Eich had been fired, either because of his opinions or as a surrender to the first storm.

Both storms are still raging, fueled by angry (and dismayed and saddened) crowds and incompetent news reporting.

We will miss Brendan. We have suffered and we will continue suffering from these storms. But we can also salvage from them.

The blessing

Think about it. We are being criticized by angry crowds. But the individuals who form these crowds are not our enemies. Many of them care deeply about Freedom of Speech and are shocked because they believe that we are extinguishing this freedom. Others care primarily about equality, an equality that can seldom be achieved wherever there is no Freedom of Speech.

Freedom of Speech. This is one of the core values of Mozilla, one of the values for which we have been fighting all these years.

We are being criticized by some of the people who need us most. They are our users, or our potential users, and they are getting in touch with us. Through Facebook, through Twitter, through the contribute form, through the governance mailing-list, through our blogs, or in real life discussions.

Some will say that we should ignore them. Some will be tempted to answer anger with anger and criticism with superiority.

Do neither. They are our users. They deserve to be heard.

We should listen to them. We should answer their concerns, not with FAQs or with press releases, but with individual answers, because these concerns are valid. We should explain what really happened. We should show them how Mozilla is largely about defending Freedom of Speech through the Open Web.

So please join the effort to answer the angry crowds. If you can, please reach out to media and the public and get the story out there. If only one person out of a hundred angry users receives the message and decides to join the community and the fight for the open web, we will have salvaged a victory out of the storm.

Wouldn’t it be nice?

April 2, 2014 § 2 Comments

Wouldn’t it be nice if Mozilla were a political party, with a single stance, a single state of mind and a single opinion?

Wouldn’t it be nice if people could decide to vote for or against Mozilla based on a single opinion of its leader?

But that’s not the case. We are Mozilla. We have thousands of different voices. We agree that users must be defended on the web. We fight for privacy and for freedom of speech and for education. On everything else, we might disagree, but that’s ok. We are Mozilla. We won’t let that stop us.

So please don’t ask us to exclude one of our own, no matter how much you disagree with his positions. We are Mozilla. We always disagree on most things that are not our mission. And we move forward, together.

Of course, if you want to change Mozilla, how we work and what we think, there is one way to do it. You can join us. Don’t worry, you don’t have to agree with us on much.

Season 1 Episode 2 – The Fight for File I/O

April 2, 2014 § Leave a comment

Plot. Our heroes set out for the first battle. Session Restore’s file I/O was clearly inefficient. Not only was it performing redundant operations, but it was also blocking the main thread while doing so. The time had come to take it back. Little did our heroes know that the forces of Regression were lurking and that the real battle would be fought long after the I/O had been rewritten and made non-blocking.

For historical reasons, some of Session Restore’s file I/O was quite inefficient. Reading and backing up were performed purely on the main thread, which could cause multi-second pauses in extreme cases, and 100ms+ pauses in common cases. Writing was done mostly off the main thread, but the underlying library caused accidental main thread I/O, with the same effect, as well as disk flushing. Disk flushing is extremely inefficient on most operating systems and can quickly bring the whole system to its knees, so it needs to be avoided.

Fortunately, OS.File, the (then) new JavaScript library designed to provide off-main-thread I/O, had just become available. Turning Session Restore’s I/O into OS.File-based off-main-thread I/O was surprisingly simple, and even helped make the relevant fragments of the code more readable.

In addition to performing main thread I/O and flushing, Session Restore’s I/O had several immediate weaknesses. One was its crash detection mechanism, which required Session Restore to rewrite sessionstore.js immediately after startup, just to store a boolean indicating that we had not crashed. Recall that the largest sessionstore.js known to date weighs 150+MB, and that 1MB+ instances represented ~5% of our users. Rewriting all this data (and blocking startup while doing so) for a simple boolean flag was clearly unacceptable. We fixed this issue by separating the crash detection mechanism into its own module and ensuring that it only needed to write a few bytes. Another weakness was the backup code, which performed a full (and inefficient) copy during startup, whereas a simple rename would have been sufficient.

Having fixed all of this, we were happy. We were wrong.

Speed improvements?

Sadly, Telemetry archives do not reach back far enough to let me provide data confirming any speed improvement. Note for future perf developers, including future self: back up this data or blog about it immediately, before The Cloud eats it.

As for measuring the effects of a flush, at the moment, we do not have a good way to do this, as the main impact is not on the process itself but on the whole system. The best we can do is measure the total number of flushes, but that doesn’t really help.

Full speed… backwards?

The first indication that something was wrong was a large increase in Telemetry measure SESSIONRESTORED, which measures the total amount of time between the launch of the browser and the moment Session Restore has completed initialization. After a short period of bafflement, we concluded that this increase was normal and was due to a change of initialization order – indeed, since OS.File I/O was executed off the main thread, the results of reading the sessionstore.js file could only be received once the main thread was idle and could receive messages from other threads. While this interpretation was partly correct, it masked a very real problem that we only detected much later. Additionally, during our refactorings, we changed the instant at which Session Restore initialization was executed, which muddled the waters even further.

The second indication arrived much later, when the Metrics team extracted Firefox Health Report data from released versions and got in touch with the Performance team to inform us of a large regression in firstPaint-to-sessionRestored time. For most of our users, Firefox was now taking more than 500ms more to load, which was very bad.

After some time spent understanding the data, attempting to reproduce the measure and bisecting to find out at which changeset the regression had taken place, as well as instrumenting code with additional performance probes, we finally concluded that the problem was due to our use of the I/O thread, the “SessionWorker”. More precisely, this thread was very slow to launch during startup. Digging deeper, we concluded that the problem was not in the code of the SessionWorker itself, but that the loading of the underlying thread was simply too slow. More precisely, loading was fine on a first run, but on a second run, disk I/O contention between the resources required by the worker (the cache for the source code of SessionWorker and its dependencies) and the resources required by the rest of the browser (other source code, but also icons, translation files, etc.) slowed things down considerably. Replacing the SessionWorker with a raw use of OS.File would not have improved the situation – ironically, just like the SessionWorker, our fast I/O library was loading slowly because of slow file I/O. Further measurements indicated that this slow loading could take up to 6 seconds in extreme cases, with an average of 340ms.

Once the problem had been identified, we could easily develop a stopgap fix to recover most of the regression. We kept OS.File-based writing, as it was not at fault, but we fell back to NetUtil-based loading, which did not require a JavaScript worker. According to Firefox Health Report, this returned us to a level close to what we had prior to our changes, although we are still worse by 50-100ms. We are still attempting to find out what causes this remaining regression and whether it was indeed caused by our work.

With this stopgap fix in place, we set out to provide a longer-term fix, in the form of a reimplementation of OS.File.read(), the critical function used during startup, that did not need to boot a JavaScript worker to proceed. This second implementation was written in C++ and had a number of additional side-improvements, such as the ability to decode strings off the main thread, and transmit them to the main thread at no cost.

The patch using the new version of OS.File.read() landed a few days ago. We are still in the process of trying to make sense of the Telemetry numbers. While Telemetry indicates that the total time to read and decode the file has increased considerably, the total time between the start of the read and the end of startup seems to have decreased nicely, by 0.5 seconds (75th percentile) to 4 seconds (95th percentile). We suspect that we are confronted with yet another case in which concurrency makes performance measurement more difficult.

Shutdown duration?

We have not attempted to measure the duration of shutdown-time I/O at the moment.

Losing data or privacy

By definition, since we write data asynchronously, we never wait until the write is complete before proceeding. In most cases, this is not a problem. However, process shutdown may interrupt the write during its execution. While the APIs we use to write the data ensure that shutdown will never cause a file to be partially written, it may cause us to lose the final write, i.e. 15 seconds of browsing, working, etc. To make things slightly worse, the final write of Session Restore is special, insofar as it removes some information that is considered somewhat privacy-sensitive, information that is required for crash recovery but not for a clean restart. The risk already existed before our refactoring, but our work increased it, as the new I/O model was based on JavaScript workers, which are shut down earlier than the mechanism previously used, without ensuring that their work is complete.

While we received no reports of bugs caused by this risk, we solved the issue by plugging Session Restore’s shutdown into AsyncShutdown.

Changing the back-end

One of our initial intuitions when starting with this work was that the back-end format used to store session data (large JSON file) was inefficient and needed to be changed. Before doing so, however, we instrumented the relevant code carefully. As it turns out, we could indeed gain some performance by improving the back-end format, but this would be a relatively small win in comparison with everything else that we have done.

We have several possible designs for a new back-end, but we have decided not to proceed for the time being, as there are still larger gains to be obtained with simpler changes. More on this in future blog entries.

Epilogue

Before setting out on this quest, we were already aware that performance refactorings were often more complex than they appeared. Our various misadventures have confirmed it. I strongly believe that, by changing I/O, we have improved the performance of Session Restore in many ways. Unfortunately, I cannot prove that we have improved runtime (because old data has disappeared), and we are still not certain that we have not regressed start-up.

If there are lessons to be learned, it is that:

  • there is no performance work without performance measurements;
  • once your code is sophisticated enough, measuring and understanding the results is much harder than improving performance.

On the upside, all this work has succeeded at:

  • improving our performance measurements of many points of Session Restore;
  • finding out weaknesses of ChromeWorkers and fixing some of these;
  • finding out weaknesses of OS.File and fixing some of these;
  • fixing Session Restore’s backup code, which consumed resources without doing much that was useful;
  • avoiding unnecessary performance refactorings where they would not have helped.

The work on improving Session Restore file I/O is still ongoing. For one thing, we are still waiting for confirmation that our latest round of optimizations does not cause unwanted regressions. Also, we are currently working on Talos benchmarks and Telemetry measurements to let us catch such regressions earlier.

This work has also spawned other works for other teams on improving the performance of ChromeWorkers’ startup and communication speed.

In the next episode

Drama. Explosions. Asynchronicity. Electrolysis. And more.

How to be an evil start-up CEO

April 1, 2014 § 3 Comments

Being the CEO of a start-up is fun. Being evil and mischievous is fun. Completely destroying one’s life dream is fun. However, reaching expertise in all three requires considerable subtlety. Here are a few notes for the day I decide to become an evil mischievous start-up CEO.

  1. I will keep in mind that my main currencies are time and credibility, both inside and outside my startup. Therefore, I will make my best to maintain that credibility and save that time.
  2. For this reason, although I pay them, I will describe my employees as trusted colleagues. I will, however, treat them as incompetent children.
  3. Conversely, to ensure credibility, I will encourage my trusted colleagues to worship me.
  4. For some reason, my relationship with trusted colleagues tends to alter when trusted colleagues realize that I lie to them. Which is why I will use threats and dissimulation to ensure that they do not.
  5. Being worthy of worship, I am the sole holder of the truth. Consequently, everything I have just told my investors or prospects is true. Trusted colleagues who fail to base their reality upon my truth will be punished.
  6. I will have trusted advisors, be they COO, CTO, CSO, CFO, GPU, tech leads, mentors, janitors, nannies or anything else. Listening to them is important. However, I know better, so there is no need to take anything they say into account.
  7. Being a CEO is all about taking decisions quickly. For this reason, I will avoid smoking pot or drinking alcohol. I will remain on coke.
  8. One of the roles of my trusted advisors is to help me differentiate the real world from my imagination. Do they wonder aloud whether my Reality Adjustment Factor is misaligned? Well, that is the sign that I should put them on coke, too.
  9. I will realize that some of my trusted advisors might be polite. Therefore, if one of them asks “er… are you really, really sure?”, I will take this as a hint that they may be politely inquiring about my being high on LSD. Since I am actually high on coke, there is nothing to worry about.
  10. If I have to divide my start-up in teams, I can ensure that teams can work in complement of each other. Of course, I can also ensure that teams will be at each other’s throat, which is much more amusing, especially if I live in a country where pitbull fighting is illegal. If I do organize my start-up as a pitbull fighting ring, this will, of course, open the possibility of taking bets to determine which team goes down first.
  11. Nothing motivates trusted colleagues quite as much as calling their colleagues “stupid”, “lazy” or “incompetent”, except perhaps calling them “stupid”, “lazy” or “incompetent” within earshot of said colleagues.
  12. Also, nothing motivates trusted colleagues quite as much as non-existent deadlines for imaginary clients on fabulous contracts. My trusted colleagues will be permitted to thank me for such managerial prowesses.
  13. I will claim that Microsoft, or Google, or Mozilla, or Apple, or Amazon, or Facebook, are a bunch of incompetent morons. They merely got to #1 in their respective sectors while I intend to do nothing less than revolutionize the world!
  14. In the spirit of encouraging team work, I will occasionally let a trusted colleague put his/her name along with mine as a co-author/inventor/creator of the work they have done. This way, in case of problem, it will be easy to find someone to take the blame.
  15. I could organize my company so that decisions are taken at the appropriate level by my trusted colleagues. However, this is clearly inefficient, so all decisions, no matter how small, must go through me. Should this be tiresome, I reserve the right to turn off my cellphone while I sip a Piña Colada in the sun.
  16. I realize that some of my trusted colleagues will be better than me at something. With time, many might end up better than me at most things. I could save face by deciding to take this as a proof of my recruiting skills, but this is hardly as satisfying as belittling their achievements and skills and then firing them.
  17. If I need to get rid of one of my trusted colleagues, I will have three options. The first one is to offer conditions to that trusted colleague for leaving. The second one is to fire the trusted colleague. The last one is to mount a cabal inside my start-up to get that trusted colleague to leave in disgust. The cabal-oriented solution will prove much more diverting, as I will be able to watch the cascade of consequences, plots and counter-plots, the consequent loss of time, productivity and morale, and I will be able to find out just exactly how much a lawyer charges for defending my company in court.
  18. At some point, my dream project will near completion. By then, I will have dreamed up tons of new features, and it only makes sense to start piling them up until requirements dwarf what has been realized so far. My trusted colleagues will certainly complete these final few features in a matter of days. Additionally, by the time my trusted colleagues are done, I can certainly have dreamed up a few new features. Ain’t that great?
  19. Also, a project approaching completion signals that I can reassign everybody to another project. The project will certainly find a way to finish itself.
  20. It may happen that my project cannot have all of the following features: working, released, everything I dreamed it to be. The first two features are not really important, so I can remove either.
  21. If a project has failed, or if the company has pivoted, I will inform my trusted employees. Of course, I might decide to hold the news until after the end of an ongoing death-march-to-finish-the-project-in-emergency. Just imagine the looks on their faces.
  22. This is also true for my commercially-oriented trusted colleagues. Just imagine them telling their prospects that everything they have promised up to this date is false.
  23. As a CEO, I will be approached by countless people with ideas. With a little effort, this will give me the opportunity to pivot as often as twice a week. My trusted colleagues need the exercise.
  24. Being an avid Mac user, I can do as well as Steve Jobs. Even better, being a Facebook user, I can also do as well as Mark Zuckerberg. Where else could you find a CEO as exceptional as me?
  25. I will occasionally take breaks, or even vacations. Whenever I do go on vacation, however, I will make sure to keep this a secret. Just think how funny it will be when they realize how much time they have wasted coming by every few minutes to check whether I have arrived at the office.
  26. While this is my company, not all trusted colleagues may realize that its money is mine to use as I see fit and that I can invest it in my luxury vacations. Some of them might conclude that I am abusing everybody’s trust, work and livelihood. The simplest solution is to fire them, but I should check with a lawyer whether I could also sue them.
  27. While I may have to lie to potential clients and investors, I will refrain from lying to (alleged) mobsters and secret services. My health matters too much to me.
  28. Also, should I be faced with (alleged) mobsters and secret services, and should I determine that the people in front of me are morons and/or easy milk cows, I will refrain from making this realization obvious to said idiots.
  29. Some of my colleagues will insist that we should not release a product that we have not tested. If they refuse to use our product because they hate it, it means that the product is ready for release. If I refuse to use our product because I hate it, it means that we should restart from scratch, re-implement everything in two weeks, then fire my trusted colleagues.
  30. From time to time, I will feel like picking a scapegoat from among my trusted colleagues, because this is a cheap way to vent frustration at failed projects or meetings. After all, there is no way said trusted colleague could figure out that the competition would pay more and grab the experience gained at my company.
  31. If I attempt to enter a market saturated with products that are free-to-use and just as good as mine – possibly even dominated by Google, Microsoft, Apple or Mozilla – my investors and trusted colleagues should not worry, as I have a secret weapon: I am the smartest person in the world.
  32. If my product requires that users give up on their existing data, their existing code, and their existing infrastructure, I will not be too surprised when users instead decide to give up on my product. That is because users are stupid.
  33. If potential users start describing my sole product as vaporware, I can just retort that Microsoft got away with delivering vaporware for more than 10 years. Of course, Microsoft got to #1 before doing so, but we are almost there ourselves.
  34. Occasionally, I will make mistakes and choose wrong paths for my company. The most amusing manner of getting away with mistakes is to pretend that my trusted colleagues have disobeyed me and taken bad initiatives. Doing so is, of course, a mistake and places the company on a wrong path, which makes things recursively more amusing [1].
  35. My trusted colleagues will often be occupied with non-productive trivialities such as building my product, signing paychecks or hunting for clients. I will have to remind them regularly that I am the sole efficient worker in this company.
  36. As my Reality Distortion Field can tell you, there is no difference between “clients”, “contacts” and “people who have looked at the website”. For instance, if our website gets 30,000 hits per month (or per day), we surely have 30,000 paying users.
  37. Occasionally, my trusted colleagues will reach milestones, possibly even releasable/deployable versions of our product. These milestones/versions will probably not match my dream ideas. The simplest way to deal with such imperfections is to treat these unworthy versions with disgust, refuse to give them a name or number and retroactively add some set of features that must absolutely be completed before the milestone is hit.
  38. Occasionally, I will be unhappy with some of my trusted colleagues, but not sufficiently to fire them. I will assign them to a punishment project, designed solely for maximal pain. That will teach them!
  39. If one of my projects succeeds, I will know – and remind everybody – that I am the sole reason for this success.
  40. If one of my ideas succeeds, I will attempt to remember if, by any chance, one of my trusted colleagues may have spent some time attempting to convince me that this was a good idea, before it became my idea. If so, I will make sure that he/she understands how the first idea was bad and mine was a stroke of genius. If this is not sufficient, I will fire him/her.
  41. My main argument for selling to potential clients will be that they have been idiots to not use my product/service. With my help, however, they can become intelligent enough to know that they should buy my wares. Am I not generous?
  42. Although I may not have heard of weird concepts such as revision control systems, bug trackers or customer relationship managers, if one of my trusted colleagues informs me that they cannot work without such exotic software, I will simply dismiss my trusted colleague as incompetent. After all, I could do the work without such niceties, so they cannot be really useful, can they?
  43. Once I know of weird concepts such as revision control systems, bug trackers or customer relationship managers, I should probably build one. After all, how hard can it be? Additionally, I am so much smarter than everybody who has built one before us. The world will be so grateful that they will beg to use our product, once we have gotten around to building it.
  44. Should I decide that we need to reinvent yet another revision control system, bug tracker or customer relationship manager and to use it, I will make sure that the product is tested, at least marginally, before it hosts our critical data. Of course, if my trusted colleagues insist that the product does not work, there is always time to ignore their feedback. Otherwise, how could I watch the revision control system lose all the source code of the revision control system?
  45. Should investors ever decide to audit my company with the perspective of buying it, I will adjust my Reality Perception Filter to ensure that the esteemed idiots take the bait. This will, of course, not prevent me from complaining to my trusted colleagues that all this work was for naught and for so little money. I would not want them to waste their health by envying me too much.
  46. Of course, to ensure credibility, I will make sure that said trusted colleagues will not receive one cent from the sale. Their health really is that important to me.
  47. As my company will undoubtedly be #1 soon, I will make sure that my salary corresponds to that rank. Unfortunately, trusted colleagues will have to satisfy themselves with salaries slightly below the minimum wage, to ensure the survival of the company. Remember, we are just a start-up.
  48. I might come from a commercial background, but I know technical matters better than my trusted technical colleagues. After all, I know Excel.
  49. I might come from a technical background, but I know commercial matters better than my trusted commercial colleagues. After all, I know Excel.
  50. It is a well-known fact that trusted colleagues are wimps and that they burn out faster than matches. I will keep vigilant for such occurrences, not only because it is fun to watch a trusted colleague turn into an unproductive bag of nerves whom the mere mention of my name will send into fits of dementia, but also because, with the proper balance of (mis)management, such burnouts can propagate faster than forest fires.

Thanks for the contributions of Team TGCM, Team Malsain, Team Dixous, Team HokutoNoOpa. No animals were hurt in the process, but I intend to remedy this in part 2, to be released soon.


[1] Or coinductively more amusing, if you want to be precise.
