Wednesday, May 7, 2008

Social Software

Almost all stages of software design, except for the explicit task of writing code, now take account of the social reality of releasing big programs to many users. We have smart editors to ease navigation of thousands to millions of lines of code (aside: what's the most anyone has fit into an IDE?), distributed version control systems, and, increasingly, distributed testing and even compilation. Maybe we're still missing something within the code itself?

Taking time away from coding my JS simulator, I saw a link to a scary Mozilla/Firefox bug report: a developer put a virus into the Vietnamese language pack extension. This isn't a novel scenario - we periodically see CDs, those concrete checkpoints of quality backed by printing costs, pressed with viruses on them. Reading further into the bug report, we find an unsettling reality: virus definitions are updated every 6 hours, and checking software against them takes a long time.

My gut reaction is to want virus checkers to be incrementalized - but that just perpetuates the old development model. Fortunately, feature-wise, there are a lot of developers on several big open source projects. Unfortunately, security-wise, that's a whole lot of unknown people. Many projects employ a developer tier system to manage the varying levels of trust: you start out only filing bug reports, then submitting bug fixes that get reviewed, and eventually you spec features, implement them, and review others' code, or even simply manage other developers. However, this is a very fallible process, and susceptible to subversion.
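The tier progression above can be sketched as a mapping from role to permitted actions - a minimal, purely illustrative model (the role names and permission strings here are my own, not any particular project's):

```python
# Hypothetical sketch of a tiered-trust model like the one described above.
# Role names and permissions are illustrative, not from any real project.
TIERS = [
    ("reporter",  {"file_bugs"}),
    ("committer", {"file_bugs", "submit_patches"}),   # patches still get reviewed
    ("reviewer",  {"file_bugs", "submit_patches", "review_patches", "spec_features"}),
]

def permissions(role):
    """Return the set of actions a developer at the given tier may perform."""
    for name, perms in TIERS:
        if name == role:
            return perms
    return set()  # unknown role: no permissions
```

Each tier strictly contains the one below it, which matches the intuition that trust only accumulates - it never skips a level.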

It does allude to a basic principle: trust builds over time.

Now, I'm wondering - can we incorporate this notion of varying levels of developer trust, in a modular manner, in terms of code capabilities? For example, perhaps code from a new developer can only run in a sandbox, and once the developer becomes trusted, the same contributed code will compile to run outside of it, and thus faster?
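A minimal sketch of that dispatch, assuming a hypothetical `run_sandboxed` that would mediate a contribution's capabilities (here it is only a stand-in - a real sandbox would restrict file and network access, or interpret rather than compile the code):

```python
from enum import Enum, auto

class TrustLevel(Enum):
    NEW = auto()      # unknown contributor: run sandboxed
    TRUSTED = auto()  # established contributor: run natively, full speed

def run_sandboxed(contribution):
    # Stand-in for a real capability sandbox. A real implementation would
    # mediate file/network access; here we just invoke the code directly.
    return contribution()

def run_contribution(contribution, level):
    """Dispatch a contributed function based on its author's trust level."""
    if level is TrustLevel.TRUSTED:
        return contribution()            # fast path: outside the sandbox
    return run_sandboxed(contribution)   # slow path: capabilities restricted
```

The appeal is that the policy lives outside the contributed code itself: the same contribution runs either way, and promotion to `TRUSTED` just switches which path it takes.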