Wednesday, May 20, 2009

collaborative security

I was watching a video of Aza Raskin and, around the 18:00 mark, I got excited. Can we treat security as a people problem?

I've been mulling this over both in my work on overcoming data silos and in extracting models of applications. In the former, a user might want to add extra security to an app like Google Calendar, say by setting special permissions on a particular event or even encrypting data before Google sees it. In the model-extraction work, I'd like users to pool their models together to collaboratively build bigger ones -- but I don't want things like bank account info to leak over. The latter problem shows up slightly differently in some of my work on mashup security: can we trust an extension to translate a webpage, but not, say, leak a bank account number?
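To make the calendar case concrete, here's a minimal sketch (in today's terms, using the standard Web Crypto API) of encrypting an event's body client-side so the service only ever stores ciphertext. The names CalendarEvent and encryptEventBody, and the choice of which fields stay readable, are all my own invention for illustration:

```typescript
// Hypothetical sketch: hide the sensitive part of an event from the
// calendar service by encrypting it before it leaves the browser.

interface CalendarEvent {
  title: string; // left readable so the calendar can still render a list
  body: string;  // the part we hide from the service
}

function toBase64(bytes: Uint8Array): string {
  let s = "";
  for (const b of bytes) s += String.fromCharCode(b);
  return btoa(s);
}

async function encryptEventBody(
  event: CalendarEvent,
  key: CryptoKey, // an AES-GCM key held only by the user's browser
): Promise<CalendarEvent> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per event
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(event.body),
  );
  // Pack IV + ciphertext so the stored body is one opaque string.
  const packed = new Uint8Array(iv.length + ciphertext.byteLength);
  packed.set(iv);
  packed.set(new Uint8Array(ciphertext), iv.length);
  return { ...event, body: toBase64(packed) };
}
```

The hard part, of course, isn't the crypto but deciding which data deserves this treatment -- which is exactly where the people problem comes back in.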

Everyone, including Aza, bashed UAC: we can't just pepper users with dialog boxes. We really want things like blacklists That Just Work. Aza asks, just as we might trust a smart nephew to buy us a computer, might we trust one to figure out security for us? In the absence of a smart nephew, can we learn the security policy? What do cautious people normally say to a dialog box? Is there a bit of information on a page that users generally mark as privileged?

In three of my projects so far, I've found cases where I didn't think the application writer could a priori determine the appropriate action, yet I doubt that the casual web user can either. What would it mean to build a browser or application extension that outsources security?
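One sketch of what "outsourcing" might look like: before raising a dialog, the extension asks a shared policy service how other (cautious) users answered for this extension/permission pair, and only bothers the human when the crowd genuinely disagrees. The service URL, response shape, and thresholds below are all invented for illustration:

```typescript
// Hypothetical crowd-sourced permission check.

type Verdict = "allow" | "deny" | "ask";

interface PolicyStats {
  allow: number; // users who granted this permission
  deny: number;  // users who refused it
}

async function crowdVerdict(
  extensionId: string,
  permission: string,
): Promise<Verdict> {
  const res = await fetch(
    `https://policy.example.org/stats?ext=${extensionId}&perm=${permission}`,
  );
  const stats: PolicyStats = await res.json();
  const total = stats.allow + stats.deny;
  if (total < 50) return "ask"; // too few answers yet: fall back to a dialog
  if (stats.deny / total > 0.9) return "deny";   // strong consensus: block silently
  if (stats.allow / total > 0.9) return "allow"; // strong consensus: Just Work
  return "ask"; // genuinely contested: this is where a human belongs
}
```

The interesting questions all live in the thresholds: how many answers count as consensus, whose answers count as "cautious," and what an attacker can do by stuffing the ballot.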
