Mira's defense was just as eye-opening to me today as her proposal was last year. The primary motivation for the work is that when we authenticate to perform some action, we reveal who we are, even though all that matters for the action is that we are authenticated at all. For example, the fact that you have a social security number is generally more important than what it is. Similarly, while Apple may want to know all the music you bought on iTunes so they can better target advertisements at you, there is no reason beyond this that you need a unique iTunes account to buy music. The basic solution to these issues, anonymous currency whose duplication can be detected, also scratches another itch: it provides an anonymous foundation for incentive systems. As I'll get to below, this is applicable to resource sharing, such as network bandwidth and files in BitTorrent-style systems with an added touch of anonymity, and, as Thea and I speculate, even automatic whitelist-based spam filtering. Mira has focused on getting the crypto up to snuff: tying up theoretical loose ends to build enough building blocks with useful anonymity properties and sufficient efficiency that JJ and Anna's groups are now pursuing a fleet of applications. Applications, and I wager application frameworks, are now the more pertinent questions.
The first application (after much discussion of previous theoretical work) was onion routing. Imagine playing a game of telephone where you send a message down a line of friends, each person whispering it into the ear of the next. This is unsafe: everybody knows the message intended for the last person and the path it will take. So instead, you encrypt the message n times, embedding the next destination in each layer, then transmit. However, as invariably happens in a game of telephone, not only is there no incentive to preserve the message's integrity, it's often more fun to tamper with it. As an incentive, we can use e-cash: each layer of encryption also has some money wrapped into it. That alone is not enough, however. To get a person to forward the message, the money is itself encrypted under two keys, and you also hold a key of your own. One key to decrypt your coin comes from the person before you in the chain, and the other from the person after: you trade your own key with your predecessor for theirs, do the same with your successor, and thus unlock your coin. If transactions require money, you must then play tit-for-tat: the way to earn coins is to help others. Extensions of this basic idea apply to most karma systems. What's more, after you receive a coin, nobody knows which exchange it came from; at most, the spender can be identified if the coin is ever double-spent. There's a paper on Mira's website about this.
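The two-key coin unlock can be sketched in a few lines. This is a toy illustration, not Mira's actual scheme: I'm standing in for real cryptography with a one-time pad derived from SHA-256, and the names (`lock_coin`, `key_prev`, `key_next`) are my own. The point it shows is just the mechanic: a coin masked under a pad derived from both shares is worthless until the relay has exchanged keys with both neighbors.

```python
import hashlib
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def lock_coin(coin: bytes, key_prev: bytes, key_next: bytes) -> bytes:
    # The coin is padded to a fixed width and masked with a pad derived
    # from BOTH key shares, so neither share alone reveals anything.
    pad = hashlib.sha256(key_prev + key_next).digest()
    return xor_bytes(coin.ljust(32, b"\0"), pad)

def unlock_coin(locked: bytes, key_prev: bytes, key_next: bytes) -> bytes:
    pad = hashlib.sha256(key_prev + key_next).digest()
    return xor_bytes(locked, pad).rstrip(b"\0")

# The sender routes one share toward the relay's predecessor and one
# toward its successor; the relay obtains both only by actually
# exchanging keys with both neighbors, i.e., by forwarding the message.
k_prev, k_next = os.urandom(16), os.urandom(16)
locked = lock_coin(b"coin-serial-42", k_prev, k_next)
assert unlock_coin(locked, k_prev, k_next) == b"coin-serial-42"
```

The nice property is that the incentive is structural: a relay that drops the message never completes the second key exchange and so never sees its pay.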
The second application Thea and I schemed a little about today over a cup of tea. For spam, whitelists are great: you are guaranteed not to lose messages from your friends. The difficulty is automating additions to the list: what happens when someone emails you for the first time and therefore is not on it? Enter e-cash, stage right. When you exchange emails, you exchange coins. E-cash actually supports inflation in a funny sense: in one paper, Mira set out to support the case where a customer crashes in the middle of paying. If communication protocols are written correctly, this isn't much of a concern; whatever her reason, the result is that paying with the same coin up to k times can be allowed. That means every time you get one coin, you effectively get k of them. I think it was a little more subtle than that: you get at most k, and after many exchanges with people you aren't sure exactly how much you have. I'm not 100% on it: if that's true, you have inflation; if not, each coin just has its value multiplied by k and nothing truly changes. Either way, a coin gives a message the benefit of the doubt, and a whitelist is exactly that. The first email may use a coin, and a response causes implicit placement on the whitelist. There's mobility in what checking happens where: whitelists can be private data, so all checking can happen on the server, or it can be distributed and prioritized. The more a person spams in such a system, the more catchable (e.g., via duplicated coins) they become. An interesting question is newsletters: the newsletter shouldn't spend coins on registration emails, as it would run out if attacked with fake registration requests, but if users send the coins, the newsletter becomes a money sink. It might be argued that users talk to more people than they register for newsletters, so this isn't a problem, but I'm not too satisfied.
I think e-cash is one of those technologies about to break out. Furthermore, the tools surrounding it may change how we deal with the web: no more of this Google-knows-all mentality. Finally, for a PL and SE guy like me, it opens a lot of development questions: how do we actually use this stuff? Can I build a CMS that facilitates anonymous capabilities, or a library to make such CMSs? Wikileaks is one of the few websites even starting to come close, and I suspect it's already fairly limited with respect to the general model being built.
Anyways, neat-o. I also talked to Shriram a bit about future web trends, optimistic replication (what's the real problem here?) and local storage, and again about capability-based security (it forces a re-encoding of access control all over again for anything interesting, is hell for informal debugging analysis and formal verification, and doesn't play well in multilingual and modal settings). Also, a question I've been toying with: should MAC make a comeback, perhaps as a subversive element of local/server browser-storage consistency APIs? I gave an example on the flapjax support list of wanting universal data-security semantics across applications:
I'll rehash the old idea that an environment should support a 'shared-use' access control mode that is some join point of the capabilities of all acting users. (Aside for others: there are multiple useful ways to merge users' capabilities.) Different data should be displayed and otherwise interacted with based on the context: if I wanted to show
something on my screen, I might be concerned about both the confidentiality and integrity of data in all open applications. I could close everything manually... but why not have a standard API and have everything react to the necessary extent? Wasn't that part of the point of reactive security objects for all data? My bank page in another tab or window could stay mostly open, but really private data would disappear. If I could tag particular emails as super secret, I could even safely show my Gmail without worrying about preview features on other messages!
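One concrete reading of "join point" is the meet of the present users' capability sets: only data every viewer may see stays visible. This is a minimal sketch under that assumption; the labels, the `shared_caps`/`visible` names, and modeling capabilities as plain sets are all mine, and as the aside notes, other merges (a union for integrity decisions, say) are equally possible.

```python
def shared_caps(*user_caps: set) -> set:
    # One plausible join point: the intersection (meet) of everyone's
    # capabilities, so only universally-viewable data survives.
    return set.intersection(*map(set, user_caps))

def visible(items, caps):
    # items: (label, payload) pairs; keep payloads whose label is allowed.
    return [payload for label, payload in items if label in caps]

alice = {"public", "bank", "secret-email"}
projector = {"public"}          # e.g., an untrusted shared screen

tabs = [("public", "news article"),
        ("bank", "account balance"),
        ("secret-email", "super secret thread")]

# Alone, everything shows; once the projector "joins", tagged data reacts
# by disappearing, with no manual closing of tabs.
assert visible(tabs, shared_caps(alice)) == \
    ["news article", "account balance", "super secret thread"]
assert visible(tabs, shared_caps(alice, projector)) == ["news article"]
```

The reactive part of the idea would be wiring `visible` to re-run whenever the set of present viewers changes, which is exactly the kind of dependency a dataflow framework like Flapjax is built to express.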
More paper writing tomorrow and then flying back to Berkeley!