Tuesday, May 12, 2009

Android != Web

I get asked three questions pretty frequently when I mention I'm trying to parallelize web browsers as a way to make phones faster.

First, folks ask about Chrome. No, not like Chrome -- Chrome's processes are concurrent rather than parallel in any performance sense; the point there was OS/hardware-enforced address-space separation (and perhaps time/resource scheduling separation) to provide security guarantees.*

Second, what about V8, Tamarin, TraceMonkey, etc.? These are awesome and I wish I had skills like that. However, two caveats. First, most of the time in a browser is not spent in the JavaScript runtime, so, despite being a language geek, that's not what I'm working on speeding up. Second, Proebsting's Law tells us that compiler speedups give us 4% a year while new hardware gives us 60%. Now that JavaScript is getting serious compiler attention (e.g., not being interpreted), I wouldn't be surprised by another 2-3x speedup over the next couple of years. However, then it'll reach a similar state to Java and Proebsting's Law will apply. If we can effectively get an extra core of performance every year or two, perhaps we should listen to Proebsting and take advantage of the hardware.

Finally, folks wonder about the iPhone, Android, and their relation to the web. A surprising thing here is that, despite Google and Apple both pushing the web as a platform (viz. Chrome and Safari, respectively), they are also torpedoing it. Contrary to mass perception, my technology-driven understanding is that Android and the iPhone SDK are anti-web. Currently, they're a necessary evil for performance reasons, but they are also distinctly outside of the web ecosystem. I am working to make high-level domain specific web languages (e.g., CSS) fast enough to avoid the need for a return to such lower-level systems.**

Back to work...


*Interestingly enough, the Chrome security model isn't good for, say, mashups or extensions, and there are faster ways to achieve what it is currently being used for (also, in part, researched by Google!). I think pragmatism, such as time-to-market concerns, had a sad impact here.

**Android is interesting and important for many other reasons, such as opening up phone functionality and a push towards rethinking the integration of a browser into an operating system.

3 comments:

A. said...

Could you please elaborate on Chrome's security model and possible alternatives?

lmeyerov said...

The security model focuses on limiting the escalation of attacks on the runtime. If there's a bug in the rendering engine (a buffer overflow, complexity attack, etc.), the separation helps mitigate it.

Simple reachability analyses (e.g., recent work by Joel Weinberger) show just how frail this has been; capabilities have been leaking. Furthermore, securing higher-level abstractions -- stuff in JavaScript -- is only starting (Caja etc.). Once you cordon off low-level access and only have it in higher levels, techniques like process isolation start looking like an expense. For example, I looked at postMessage: instead of encoding objects as strings and decoding them, some of us created a membrane system. postMessage doesn't prevent capability leaks (e.g., encoding references and getting confused deputies in the proxies), so it wasn't hitting the problem and just sat as an expense.

For extensions... last time I checked, there wasn't any support for them in Chrome. Our membrane work was targeted at the mashup case; you need to do more for extensions (limit access to core functionality, think about instrumentation order), but similar ideas might apply. Putting access control into the scripting layer can be made fast: modern engines should either cache such checks (WebKit) or compile them away (TraceMonkey).

There are a bunch of papers at Oakland this year about this stuff.

A. said...

Very interesting, thank you!