Re-dreaming Firefox (3): Identities

June 5, 2015 § 8 Comments

Gerv’s recent post on the Jeeves Test got me thinking of the Firefox of my dreams. So I decided to write down a few ideas on how I would like to experience the web. Today: Identities. Let me emphasise that the features described in this blog post do not exist.

Sacha has a Facebook account, plus two Gmail accounts and one Microsoft Live identity. Sacha is also present on Twitter, both with a personal account and as the current owner of his company’s account. Sacha also has an account with his bank, another one on PayPal, and one on Amazon. With any browser other than Firefox, Sacha’s online life would be a bit complicated.

For one thing, Sacha is logged in to several of these accounts most of the time. Sacha has been told that this makes him easy to track, not just when he’s on Facebook, but also when he visits blogs, news sites, or even shopping sites – but really, who has time to log out of every account? With any other browser, or with an older version of Firefox, Sacha would have no online privacy. Fortunately, Sacha is using Firefox, which has grown pretty good at handling identities.

Indeed, Firefox knows the difference between Facebook’s (and Google’s, etc.) main sites, for which Sacha may need to be logged in, and the tracking devices installed on other sites through ads, or through the Like button (and Google +1, etc.), which are pure nuisances. So, even when Sacha is logged in to Facebook, his identity remains hidden from the tracking devices. To put it differently, Sacha is logged in to Facebook only in Facebook tabs, and only while he’s using Facebook in those tabs. And since Sacha has two Gmail accounts, logging in to one account doesn’t interact with the other. This feature is good not only for privacy, but also for security, as it considerably mitigates the danger of Cross-Site Scripting attacks. Conversely, if a third-party website uses Facebook as an identity provider, Firefox can detect this automatically and handle the log-in.

Privacy doesn’t stop there. Firefox has a database of Terms of Service for most websites. Whenever Firefox detects that Sacha is entering his e-mail address, or his phone number, or his physical address, Firefox can tell Sacha if he’s signing up for spam or telemarketing – and take measures to avoid it. If Sacha is signing up for spam, Firefox can automatically create an e-mail alias specific to this website, valid either for a few days, or forever. If Sacha has a provider of phone aliases, Firefox can similarly create a phone alias specific to the website, valid either for a few days, or forever. Similarly, if Sacha’s bank offers temporary credit card numbers, Firefox can automatically create a single-transaction credit card number.

Firefox offers an Identity Panel (if we release this feature, it will, of course, be called Persona) that lets Sacha find out exactly which site is linked to which identity, and grant or revoke authorizations to log in automatically when visiting such sites, as well as log in or out from a single place. In effect, this behaves as an Internet-wide Single Sign-On across identities. With a little help, Firefox can even be taught about lesser-known identity providers, such as Sacha’s company’s Single Sign-On, and handle them from the same panel. That Identity Panel also keeps track of e-mail aliases, and can be used to revoke spam- and telemarketing-inducing aliases in just two clicks.

Also, security has improved a lot. Firefox can automatically generate strong passwords – it even has a database of sites that accept passphrases, or are restricted to 8 characters, etc. Firefox can also detect when Sacha uses the same password on two unrelated sites, and explain to him why this is a bad idea. Since Firefox can safely and securely share passwords with other devices and back them up into the cloud, or to encrypted QR Codes that Sacha can safely keep in his wallet, Sacha doesn’t even need to see passwords. And since Firefox handles the passwords, it can download a daily list of websites that are known to have been hacked, and use it to change passwords semi-automatically if necessary.

Security doesn’t stop there. The Identity Panel knows not only about passwords and identity providers, but also about the kind of information that Sacha has provided to each website. This includes Sacha’s e-mail address and physical address, Sacha’s phone number, and also Sacha’s credit card number. So when Firefox finds out that a website to which Sacha subscribes has been hacked, Sacha is informed immediately of the risks. This extends to less material information, such as Sacha’s personal blog of vacation pictures, which Sacha needs to check immediately to find out whether it has been defaced.

What now?

I would like to browse with this Firefox. Would you?

Internships at Mozilla Paris… or elsewhere

November 19, 2011 § 1 Comment

Edit We are full until June. We can no longer take interns in Paris for internships starting before June.

As every year, Mozilla is offering internships in computer science, oriented towards development, R&D or research. Depending on the topic, an internship may take you to Paris, the United States, Canada, China…

About Mozilla

The Mozilla Foundation is a non-profit organization founded to promote an open, innovative and participatory Internet. You have probably heard of Mozilla Firefox, the open-source browser that reintroduced open standards and security to the web, or of Mozilla Thunderbird, the cross-platform, open-source and extensible e-mail client. Mozilla’s activities do not stop at these two products; they extend to numerous projects for the present and the future, such as:

  • Boot-to-Gecko, a completely open, community-built operating system for mobile phones, tablets and other communicating devices;

  • SpiderMonkey, a family of virtual machines designed for the static and dynamic analysis, compilation and execution of web languages, JavaScript in particular;

  • DeHydra and JSHydra, static analysis tools for the C++ and JavaScript languages;

  • Rust, a new programming language designed for the development of safe, concurrent system applications;

  • WebAPI, a set of tools that extend the capabilities of web applications beyond those of traditional applications, with security and privacy on top;

  • Gecko, the extensible, portable rendering engine for HTML, XML and graphical user interfaces, which has made Firefox, Thunderbird and many other applications possible;

  • BrowserID, an innovative technique that provides users and developers with the cryptographic tools to handle identification on the web without compromising privacy, simplicity or security;

  • the Mozilla Services features for Cloud-based identity management;

  • and more…

About you

Mozilla offers several internships in its facilities around the world, on a wide range of topics.

Your profile:

  • you want to make the web a better place, where everyone can browse and contribute without having to fear for their security or their privacy;
  • you want to take part in a project used by more than 33% of the population of the web;
  • you want your work to be useful to everyone and visible by everyone;
  • you have strong skills in algorithmics and computer science;
  • you have strong skills in at least one of the following domains:
    • operating systems;
    • networks;
    • computational geometry;
    • compilation;
    • cryptography;
    • static analysis;
    • programming languages;
    • extracting relevant information from exotic data;
    • distributed algorithmics;
    • the web as a platform;
    • interacting with open-source communities;
    • any other skill that, in your opinion, could be useful to us.
  • for some topics, an excellent level of English may be indispensable;
  • internships are generally intended for M1 or M2 students, but if you manage to impress us with your achievements or your knowledge, the degree is not indispensable.

If you recognize yourself in this description, we invite you to contact us. Depending on the topic, internships may take you to Paris, Mountain View, San Francisco, Toronto, Taipei, or other locations around the world.

The best interns may hope for a freelance contract, a permanent position and/or a PhD grant.

Contacting us

For any question, contact:

  • for anything related to internships at Mozilla, Julie Deroche (at mozilla.com, jderoche) – Mozilla Mountain View, College Recruiting;
  • for internships in Paris, David Rajchenbach-Teller (at mozilla.com, dteller) – Mozilla Paris, Developer / Researcher.

JavaScript, this static language (part 1)

October 20, 2011 § 7 Comments

tl;dr

JavaScript is a dynamic language. However, by borrowing a few pages from static languages – and a few existing tools – we can considerably improve reliability and maintainability.

« Writing one million lines of code of JavaScript is simply impossible »

(source: speaker in a recent open-source conference)

JavaScript is a dynamic language – a very dynamic one, in which programs can rewrite themselves, objects may lose or gain methods through side-effects on themselves or on their prototypes, and, more generally, nothing is fixed.

And dynamic languages are fun. They make writing code simple and fast. They are vastly more suited to prototyping than static languages. Dynamism also makes it possible to write extremely powerful tools that can perform JIT translation from other syntaxes, add missing features to existing classes and functions and more generally fully customize the experience of the developer.

Unfortunately, such dynamism comes with severe drawbacks. Safety-minded developers will tell you that, because of this dynamism, they simply cannot trust any snippet, as this snippet may behave in a manner that does not match its source code. They will conclude that you cannot write safe, or even modular, applications in JavaScript.

Many engineering-minded developers will also tell you that they simply cannot work in JavaScript, and they will not have much difficulty finding examples of situations in which the use of a dynamic language in a complex project can, effectively, kill the project. If you do not believe them, consider a large codebase, and the (rather common) case of a large transversal refactoring, for instance to replace an obsolete API with a newer one. Do this in Java (or, even better, in a more modern mostly-static language such as OCaml, Haskell, F# or Scala), and the compiler will automatically and immediately spot any place where the API has not been updated, along with a number of errors that you may have made during the refactoring. Even better, if the API was designed to be safe-by-design, the compiler will automatically spot even complex errors, including calling functions/methods in the wrong order, or ownership errors. Do the same in JavaScript and, while your code will be written faster, you should expect to be hunting bugs weeks or even months later.

I know that the Python community has suffered considerably from such problems during version transitions. I am less familiar with the world of PHP, but I believe it is no accident that Facebook is progressively arming itself with PHP static analysis tools. I also believe it is no accident that Google is now introducing a typed language as a candidate replacement for JavaScript.

Today it is the turn of JavaScript – or if not today, surely tomorrow. I have seen applications consisting of hundreds of thousands of lines of JavaScript. And if just maintaining these applications were not difficult enough, the rapid release cycles of both Mozilla and Chrome mean that external and internal APIs are now changing every six weeks. This means breakage. And, more precisely, it means that we need new tools to help us predict breakages and help developers (both add-on developers and browser contributors) react before these breakages hit their users.

So let’s do something about it. Let’s make our JavaScript a strongly, statically typed language!

Or let’s do something a little smarter.

JavaScript, with discipline

At this point, I would like to ask readers to please kindly stop preparing tar and feathers for me. I realize fully that JavaScript is a dynamic language and that turning it into a static language will certainly result in something quite disagreeable to use. Something that is verbose, has lost most of the power of JavaScript, and gained no safety guarantees.

Trust me on this, there is a way to obtain the best of both worlds, without sacrificing anything. Before discussing the manner in which we can attain this, let us first set objectives that we can hope to achieve with a type-disciplined JavaScript.

Finding errors

The main benefit of strong, static typing, is that it helps find errors.

  • Even the simplest analyses can find all syntax errors, all unbound variables, all variables bound several times and consequently almost all scoping errors, which can already save considerable time for developers. Such an analysis requires no human intervention from the developer besides, of course, fixing any error that has been thus detected. As a bonus, in most cases, the analysis can suggest fixes.
  • Similarly trivial forms of analysis can also detect suspicious calls to break or continue, weird uses of switch(), suspicious calls to private fields of objects, as well as suspicious occurrences of eval – in my book, eval is always suspicious.
  • Slightly more sophisticated analyses can find most occurrences of functions or methods invoked with the wrong number of arguments. Again, this is without human intervention. With type annotations/documentation, we can move from most occurrences to all occurrences.
  • This same analysis, when applied to public APIs, can provide developers with more information regarding how their code can be (mis)used.
  • At the same level of complexity, analysis can find most erroneous accesses to fields/methods, suspicious array traversals, suspicious calls to iterators/generators, etc. Again, with type annotations/documentation, we can move from most to all.
  • Going a little further in complexity, analysis can find fragile uses of this, uncaught exceptions, etc.
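To illustrate the last point, here is the kind of fragile use of `this` that such an analysis can flag. The snippet is made up, but the bug pattern is painfully common:

```javascript
var counter = {
  count: 0,
  increment: function () { this.count++; }
};

var inc = counter.increment;  // the method is detached from its receiver
try {
  inc();  // `this` is no longer `counter`: the update lands on the global
          // object, or throws a TypeError in strict mode
} catch (e) {
  // strict mode: TypeError, because `this` is undefined
}
console.log(counter.count);  // still 0 either way: the increment was lost
```

No error is reported at the call site in non-strict mode; the increment simply vanishes. A static analysis that tracks `this` can warn about the detachment itself.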

Types as documentation

Public APIs must be documented. This is true in any language, no matter where it stands on the static/dynamic scale. In static languages, one may observe how documentation generation tools insert type information, either from annotations provided by the user (as in Java/JavaDoc) or from type information inferred by the compiler (as in OCaml/OCamlDoc). But look at the documentation of Python, Erlang or JavaScript libraries and you will find the exact same information, either clearly labelled or hidden somewhere in the prose: every single value/function/method comes with a form of type signature, whether formal or informal.

In other words, type information is a critical piece of documentation. If JavaScript developers provide explicit type annotations along with their public APIs, they have simply advanced the documentation, not wasted time. Even better, if such types can be automatically inferred from the source code, this piece of documentation can be written automatically by the type-checker.
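For instance, with the annotation vocabulary understood by tools such as Google’s Closure Compiler, the type signature and the documentation are one and the same thing. The function below is a made-up example:

```javascript
/**
 * Return the `n` most recent entries of a log, most recent last.
 *
 * @param {Array.<{date: Date, message: string}>} log The log to inspect.
 * @param {number} n How many entries to return.
 * @return {Array.<{date: Date, message: string}>}
 */
function latestEntries(log, n) {
  return log.slice(Math.max(0, log.length - n));
}
```

The `@param`/`@return` lines document the API for human readers and, at the same time, let the checker verify every call site.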

Types as QA metric

While disciples of type-checking tend to consider typing as something boolean, the truth is more subtle: it is quite possible for one piece of code to fail type-checking while the rest of the code passes. Indeed, with advanced type systems that do not support decidable type inference, this is only to be expected.

The direct consequence is that type-checking can be seen as a spectrum of quality. Code can be considered failing if the static checking phase detects evident errors, typically unbound values or out-of-scope break, continue, etc. Otherwise, every attempt to type a value that results in a type error is a hint of poor QA practice that can be reported to the developer. This yields a percentage of values that can be typed – reach 100% and get a QA stamp of approval for this specific metric.

Typed JavaScript, in practice

Most of the previous paragraphs are already possible in practice, with existing tools. Indeed, I have personally experienced using JavaScript static type checking as a bug-finding tool and a QA metric. On the first day, this technique helped me find plenty of dead code as well as 750+ errors, with only a dozen false positives.

For this purpose, I have used Google’s Closure Compiler. This tool detects errors, supports a simple vocabulary for documentation/annotations, fails only if very clear errors are detected (typically syntax errors) and provides as metric a percentage of well-typed code. It does not accept JavaScript 1.7 yet, unfortunately, but this can certainly be added.

I also know of existing academic work on static type-checking for JavaScript, although I am unsure as to the maturity of such work.

Finally, Mozilla is currently working on a different type inference mechanism for JavaScript. While this mechanism is not primarily aimed at finding errors, my personal intuition is that it may be possible to repurpose it.

What’s next?

I hope that I have convinced you of the interest of investigating manners of introducing static, strong type-checking to JavaScript. In a second part, I will detail how and where I believe that this can be done in Mozilla.

Unbreaking Scalable Web Development, One Loc at a Time

May 23, 2011 § 18 Comments

The Opa platform was created to address the problem of developing secure, scalable web applications. Opa is a commercially supported open-source programming language designed for the web, concurrency, distribution, scalability and security. We have entered closed beta and the code will be released soon on http://opalang.org, as an OWASP project.

If you are a true coder, sometimes, you meet a problem so irritating, or a solution so clumsy, that challenging it is a matter of engineering pride. I assume that many of the greatest technologies we have today were born from such challenges, from OpenGL to the web itself. The pain of pure LAMP-based web development begat Ruby on Rails, Django or Node.js, as well as the current NoSQL generation. Similarly, the pains of scalable large system development with raw tools begat Erlang, Map/Reduce or Project Voldemort.

Opa was born from the pains of developing scalable, secure web applications. Because, for all the merits of existing solutions, we just knew that we could do much, much better.

Unsurprisingly, getting there was quite a challenge. Between the initial idea and an actual platform lay blood, sweat and code, many experiments and failed prototypes, but finally, we got there. After years of development and real-scale testing, we are now getting ready to release the result.

The parents are proud to finally introduce Opa.

[Rant] Web development is just broken

April 18, 2011 § 104 Comments

No, really, there’s something deeply flawed with web development. And I’m not (just) talking about browser incompatibilities, or Ruby vs. Java vs. PHP vs. anything else, or about Flash or HTML5 – I’m talking about something deeper and more fundamental.

Edit Added TL;DR

Edit Interesting followups on Hacker News and Reddit.

Edit Added a comparison with PC programming.

Edit Added a follow up introducing the Opa project.

TL;DR

The situation

  • Plenty of technologies somehow piled on top of each other.

My nightmares:

  • Dependency nightmare
  • Documentation nightmare
  • Scalability nightmare
  • Glue nightmare
  • Security nightmare

Just to clarify: None of these nightmares is about having to make choices.

My dream:

  • Single computer development had the same set of issues in the 80s-90s. They’re now essentially solved. Let’s do the same for the web.

Shop before you code

Say you want to write a web application. If it’s not a trivial web application, chances are that you’ll need to choose:

  • a programming language;
  • a web server;
  • a database management system;
  • a server-side web framework;
  • a client-side web framework.

And you do need all the above. A programming language, because you’re about to program something, so no surprise here. You also need a web server, because you’re about to write a web application and you need to deliver it, so again, no surprise.

A database management system, because you’ll want to save data and/or to share data, and it’s just too dangerous to access the file system. Strangely, though, your programming language will give you access to the file system, and it’s somewhere else, at the operating system layer, that you’ll have to restrict this. Now, depending on your application and your DBMS, your data may fit completely with the DBMS, but often, that’s not the case, because your application manipulates object-oriented data structures, while your DBMS manipulates either records and relations or keys and values. And at this stage, you have two possibilities: either you force-feed your data into your database – essentially reinventing (de)serialization and storage on top of an antagonistic technology – or you add to your stack a form of Object-Relational Mapper (ORM) to do this for you.

You need a server-side framework too – perhaps more than one – because, let’s face it, at this stage, your (empty) application already feels so complicated that you’ll need all the help you can get to avoid reinventing templating, POST/GET management, sessions, etc. Oh, actually, what I wrote above is not quite true: depending on your framework, you may need some access to the file system for your images, your pages, and all the other stuff that may or may not fit naturally in the DBMS. So back to the OS layer to configure it more finely.

And finally, you’ll certainly want to write your application for several browsers. As all developers know, there is no such thing as a pair of compatible browsers, and since you certainly don’t want to spend all of your time juggling with (largely undocumented) browser incompatibilities and/or limitations of the JavaScript standard library, it’s time to add a client-side framework, perhaps more.

So, in addition to the first list, you probably have to choose and configure:

  • an OS;
  • OS-level security layers;
  • an ORM (unless you’re reinventing yours) or an approximation thereof;

At this stage, you haven’t written “Hello, world” yet.

On the other hand, you’re about to enter the dependency nightmare, because not all web servers fit with all frameworks, not all server-side frameworks with all client-side frameworks, or with all ORMs, or with all OSes, not to mention incompatibilities between OS and DBMS, etc. You have also entered the documentation nightmare, because information on how to configure the security layers of your OS is marginal at best, and of course totally separate from information on how to configure your DBMS, or your ORM, or your frameworks, etc.

Note that I haven’t mentioned anything about scaling up yet, because the scaling nightmare would deserve a complete post.

Sure, you will solve all of these issues. You will handpick your tools, discard a few, and eventually, since you’re a developer (you’re a developer, right?), you’ll assemble a full development platform in which every technology somehow accepts to talk to its neighbors. Heaven forbid that you make a mistake at this stage, because once you start with actual coding, there will be no coming back – but yes, you’re now ready to code.

At this stage, a few questions cross my mind:

  • You have reached that stage because you have the time and skills to do this, but what about Joe Beginner? Does he deserve this?
  • Remember that you haven’t written “Hello, world” yet. Those hours of your life you spent to get to this stage – do you feel they were well spent?
  • What if you made a mistake – i.e. something is subtly incompatible but you haven’t noticed yet, or one of the technologies you’re using is deprecated, or doesn’t match your security policy? How much time will you spend rooting out all the stuff that’s hardwired to this technology?

So, yes, for all these reasons, I decree that web development is broken. But that’s not all there is to it.

So you have started coding. Good for you.

Now, you have a set of tools that should be sufficient to develop your application – again, possibly not for scaling it up, but that’s a different story. So, you can start coding.

Welcome to the third nightmare: the glue nightmare. What’s the glue? Well, it’s that sticky stuff that you put between two technologies that don’t know about each other, that don’t really fit with each other, but that you need to get working together.

You have data on the client and you want to send it to the server. Time to encode them as JSON or XML, send them with Ajax, open an Ajax entry point on the server (temporary? permanent?), parse the received data on the server (what do you do if it doesn’t parse?), decode the parsed data to your usual data structures, validate the values in these data structures (or should you have done that before decoding?), and then use them (I really hope that you have validated everything carefully). That was the easy part. Now, say you have data on the server and you want to send it to the client. Time to encode them as JSON or XML, send them with Comet – oops, there’s no such thing as “sending with Comet”, so you should open an Ajax entry point on the server (same one? temporary? permanent?) and let the client come and fetch the data (how do you ensure it’s the right client?). Plus the usual parsing and decoding. Except the server code you wrote for parsing and decoding doesn’t work in your browser. Plus, be careful with parsing, because you can get some nasty injections at that stage, or you can just crash a number of JS engines accidentally. Add a little debugging, some more work on garbage-collection and you can send “Hello” from the client to the server or from the server to the client.
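To make the server-side half of this dance concrete, here is a sketch of the parse/validate/decode boilerplate one ends up writing by hand – the `{user, score}` message shape is made up purely for the example:

```javascript
// Manual "glue": parse, validate and decode an incoming message by hand.
// The {user, score} shape is a made-up example format.
function readMessage(raw) {
  var data;
  try {
    data = JSON.parse(raw);               // what do we do if it doesn't parse?
  } catch (e) {
    return { error: "malformed JSON" };
  }
  // Nothing guarantees the client sent the shape we expect, so validate
  // every field by hand -- forget one and you have a bug, or a hole.
  if (data === null ||
      typeof data.user !== "string" ||
      typeof data.score !== "number" ||
      !isFinite(data.score)) {
    return { error: "unexpected shape" };
  }
  return { value: { user: data.user, score: data.score } };
}
```

And this is just one direction, for one message type; every entry point of the application repeats this ceremony.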

Again, the question is not: “can you get this to work?” – I’m sure that you can, many of us do this on a regular basis. The questions are more:

  • was this time well-spent?
  • are you sure that it works?
  • really, really sure?
  • even if browsers can crash?
  • even if users are malicious?
  • how can you be certain?

The client-server glue doesn’t stop here – if only we were so lucky. There’s more for handling forms or uploads, or for injecting user-generated content into pages, but let’s move on to server-side glue.

Storage is full of glue, too. You have data that fits your application and you’ll want to send it to your DBMS. Now, either you’re using an ORM or you’re encoding the data manually in a manner that somehow fits your database paradigm. That’s already quite a sizable layer of glue, but that’s not all. Sending your data to your DBMS means opening a connection, somehow serializing your data (I hope it’s well-validated, too, just in case someone attempts to inject bogus database requests), somehow serializing your database request, handling random disconnections and sometimes unpredictable (de)serialization errors (by the way, I hope you made sure that the database can never be in an inconsistent state, even if the browser crashes), somehow (de)serializing database responses (can you handle the database not responding at all?) and reconnecting in case of disconnection. Oh, and since your database and your application are certainly based on distinct security paradigms, you’ll have to set up both your application security and your database security, and ensure that they work together. Did I mention handling distinct encodings? Ensuring that temporary bindings are never stored in the database? Performing garbage-collection in the database?

Glue doesn’t stop here, of course. Do I need to get started on REST or SOAP or WSDL?

Again, you’ll solve all these issues, eventually. But if you’re like me, you’ll wonder why you had to spend so much time on so little stuff. These are not features, they are not infrastructure, they are not fun, they’re just glue, to get a shamble of technologies – some of which date back to the 1970s – to work together.

Oh, and at this stage, chances are that you have plenty of security holes. Welcome to the security nightmare. Because your programming language has no idea that this piece of XML or JSON or string will end up being displayed in a user’s browser (XSS, anyone?) or stored in the database (SQLi or CouchDB injection, anyone?), or that this URI is actually a command with potentially dangerous side-effects (sounds like a XSRF), etc. By default, any web application is broken, security-wise, in dozens of ways. Most of the web application tutorials I’ve seen on the web contain major security holes. And that’s just not right. If you look at these security issues, you’ll realize that most of them are actually not in the features you coded, not even in the infrastructure you assembled, but in the glue itself. Worse than that: many of these issues are actually trivial things, that could be solved once and for all, but we are stuck using tools that don’t even attempt to solve them. So, chances are that in your application, no matter how good you are, you will forget to validate and/or escape data at least once, or you will forget to authenticate before giving access to one of your resources, or you will forget something that somehow will let a malicious user get into your app and cause all sorts of chaos.
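As an illustration of how low-level these trivial issues are, consider the HTML-escaping helper that every web developer ends up rewriting by hand – and forgetting to call at least once. A minimal sketch (escaping alone is not a complete XSS defense, but the platform doesn’t even give you this much by default):

```javascript
// Minimal HTML escaping -- the helper that tools should apply automatically,
// and that developers instead rewrite and occasionally forget to call.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")   // must come first, or entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

The point is not that this function is hard to write; it’s that nothing in the stack knows *when* it must be called, so the burden falls on the developer at every single injection point.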

I rest my case. Web development is broken.

History of brokenness

But if we look at it closer, a few years ago, so was PC development. Any non-trivial application needed:

  • application-level code, of course;
  • memory-level hackery just to get past the 640kb barrier (or, equivalently, the handle limitation under Windows – 64kb iirc);
  • IRQ-level coding and/or Windows SDK-level coding (the first one was rather fun, the second was a complete nightmare, neither were remotely meant for anybody who was not seriously crazy);
  • (S)VGA BIOS level hackery to get anything cool to display;
  • BIOS-level fooling.

Five layers of antagonistic technologies that needed to be hacked into compliance. The next generation was represented by the NeXT frameworks and attempts to bring the same level of comfort to Windows-land (OWL, MFC, etc.). Still fragile and complex, but a huge improvement. And the generation after that was Java/C#/Python programming. Nowadays, you can install/apt-get/emerge/port install your solution, start coding right away, and be certain that you have everything you need. Nightmare solved.

On the web, we’re still stuck somewhere between the first generation and the second. Why don’t we aim for the third?

A manifesto for web development that works

Time to stop the rant and start thinking positive. All the above is web development that’s broken. Now, what would not-broken web development look like?

Let’s go for the following:

I want to start coding now

        without having to learn configuration, dependencies or deployment;

I don’t want to write no glue

        the web is one platform, time to stop forcing us to treat it as a collection of heterogeneous components;

I don’t want to repeat myself

        so don’t force me to write several validators for the same data, or several libs that do the same thing in different components of my web application;

I don’t care about browser wars

        my standard toolkit must work on all browsers, end of story;

Give me back my agility

        I want to be able to make important refactorings, to move code between the client, the server and the database component, without having to rewrite everything.

Secure by default

        all the low-level security issues must be handled automatically and transparently by the platform, not by me. I’ll concentrate on the high-level application-specific issues.
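As a sketch of what “secure by default” means in practice, consider output escaping. The illustration below uses Python’s html module, purely as a stand-in (it is not how OPA works; OPA’s mechanism is built into the language itself): the platform escapes at the output boundary, so no individual call site can forget to do it.

```python
import html

def render_comment(comment: str) -> str:
    # A secure-by-default platform escapes at the output boundary,
    # so the developer never has to remember to do it manually.
    return "<p>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert('xss')</script>"))
# → <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

With this inversion, injecting markup through a comment field becomes impossible by construction rather than by vigilance.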

All of this is definitely possible. So please give me this, and you’ll make many coders much happier.

Disclaimer My company builds related technology. However, this blog expresses my personal views.

Post-OWASP AppSec Research

June 28, 2010 § Leave a comment

Well, I’m just back from the Way to Valhalla and OWASP AppSec Research 2010.

The welcome was great, with plenty of people interested in OPA — some of them actually looking enthusiastic. I was quite surprised to realize that a number of researchers, developers and consultants in the web security community are very much aware of the limitations of current-generation approaches to security, but just don’t have the resources to start working on a next-generation approach. Speaking of resources, we’re now getting close to being 7 years into the OPA project, a commitment that not many research groups or companies could make.

Interestingly, during his talk, Dave Wichers, the editor for the OWASP Top 10 Web Application Security Risks project, mentioned that the solution was certainly to switch language and paradigm, to something cleaner and easier to secure. This is, of course, exactly what we have been working on during all these years.

All the slides and videos of the conference should be uploaded soon to the official website. In the meantime, I have uploaded my slides. I’ll try to add audio if I can work out the sound problems I’ve been encountering recently with my presentations.

Edit The presentation of OPA available on Dailymotion had sound issues. I’ve finally managed to fix them. Enjoy!

Next performance: OWASP 2010

June 22, 2010 § Leave a comment

I haven’t had much time to update this blog in the past few months. Well, the good news is that all this time — mostly spent on OPA — is starting to pay off. I’m starting to like our OPA platform quite a lot. Our next release, OPA S3, is shaping up to be absolutely great.

I’m now on my way to OWASP AppSec Research 2010, where I’ll present some of the core design of OPA. My slides should be made public after the talk, so I’ll try to link them here as soon as I return.

In the meantime, if you’re curious about OPA, I’m starring in a few Dailymotion tutorial slideshows 🙂

An IRC channel for OPA

January 26, 2010 § Leave a comment

Just a short entry to inform you that we now have an IRC channel for general discussion about OPA. It’s on Freenode and it’s called, well, #opa.

We call it OPA

November 28, 2009 § 7 Comments

Web applications are nice. They’re useful, they’re cross-platform, and users need no installation, no upgrades, no maintenance, not even the computing or storage power to which they are accustomed. As weird as it may sound, I’ve even seen announcements for web applications supposed to run your games on distant high-end computers so that you can actually play on low-end computers. Go web applications!

Of course, there are a few downsides to web applications. Firstly, they require a web connection. Secondly, they are largely composed of plumbing. Finally, ensuring their security is a constant fight.


Last lecture

April 22, 2009 § 4 Comments

Note: This post is written on the 79th day of the Universities strike. Despite the overwhelming consensus against these bills, the government has just passed the application decrees implementing the possibility of arbitrarily increasing the teaching load of researchers, without need for any justification. The government obviously fails to see how much this will hurt Research. Simultaneously, the government has announced that, since the reform of primary and secondary schools cannot proceed in compliance with the government’s own decrees, it will simply proceed illegally. Once the shock is gone, expect increased strike actions. Expect Research strikes on publications, on patents, on contracts with the government or French companies. Expect difficulties with the baccalauréat, exams and degrees.

Where headhunters had failed, the government has finally succeeded. Today was my last lecture.

As the government obviously doesn’t want Researchers to have the means and time to undertake Research, I have accepted a position in the private sector, where I should be able to pursue my work on semantics, security and functional concurrent/distributed programming languages.

While I’m glad to start in a position where I will have more leeway, as well as both students and engineers to work with me, I am saddened that the situation had to reach the point where I felt I had no choice.

Barring any accident, starting September 1st, you will be able to find me at MLState.
