- programming with angelic non-determinism (Berkeley bias ;-))
- coarse-grained transactions (... Brown bias ;-))
- verified JIT
- integrating typed and untyped code in a scripting language (... what happened to Arjun's paper about it? And will anything happen with the ES4 effort beyond cutting down trees?)
- monads in action
- paralocks
Only one paper looked like it had a chance of blowing my mind about how we write programs. I must be getting old.
In contrast, Wes Weimer gave a great talk today about applying random program edits to fix bugs. There's been a lot of work in the area -- my favorite was on doing it for programs with illegible type errors -- and Wes's trick of reusing existing program fragments was neat, but more interesting was the evaluation on non-toy programs. It works, and even though I suspect it only succeeds on bugs that are already simple to fix, it suggests we're getting close to realistically rethinking the triage process.
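For flavor, here's a minimal Python sketch of that style of search -- the statement-list representation and the `passes_tests` oracle are my stand-ins, not theirs (if I followed the talk, the real system mutates ASTs and biases edits toward statements on failing runs):

```python
import random

def repair(stmts, passes_tests, max_tries=10000):
    """Randomly edit a buggy program until the test suite passes.

    Edits only reuse fragments already in the program (delete, swap,
    or copy an existing statement) -- nothing is synthesized from
    scratch, which is the trick that keeps the search tractable.
    Assumes stmts is non-empty.
    """
    for _ in range(max_tries):
        candidate = list(stmts)
        i = random.randrange(len(candidate))
        j = random.randrange(len(candidate))
        op = random.choice(["delete", "swap", "copy"])
        if op == "delete":
            del candidate[i]
        elif op == "swap":
            candidate[i], candidate[j] = candidate[j], candidate[i]
        else:  # copy statement j in front of position i
            candidate.insert(i, candidate[j])
        if candidate and passes_tests(candidate):
            return candidate  # a plausible patch; still needs human triage
    return None  # no patch found within the budget
```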
Finally, there was surprise about a suggestion to farm out testing to EC2: the current examples of scaling were 4-8 core machines, not the 16-20,000-core ones I'm used to -- our group's interest in so few cores for mobiles should be the exception, not the rule. I'm seeing a lot of that in the community; e.g., there was an otherwise great talk about optimizing points-to analysis recently that I felt also missed the boat here. If there's an age-old algorithm that we need to speed up by a few orders of magnitude, chances are you can put it on EC2. Scientists have been doing it for years; now it's open for the masses.
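For concreteness, the kind of scaling I have in mind is embarrassingly parallel; a minimal sketch, with local processes standing in for EC2 instances and `run_test` as a placeholder:

```python
from concurrent.futures import ProcessPoolExecutor

def run_test(test):
    """Placeholder: execute one test case, return pass/fail."""
    return True

def run_shard(shard):
    # Stand-in for "ssh to a node and run this slice of the suite".
    return [(t, run_test(t)) for t in shard]

def run_suite(tests, workers=100):
    # One shard per worker. On EC2 the pool would be a fleet of
    # instances rather than local processes, but the shape is the same.
    shards = [tests[i::workers] for i in range(workers)]
    with ProcessPoolExecutor() as pool:
        results = pool.map(run_shard, shards)
    return [r for shard in results for r in shard]
```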
Hi Leo,
Regarding EC2 stuff, I know that at least MSFT has been doing analysis on clusters for a very long time with Prefix. Also see the Saturn project at Stanford. The issue with doing the same for points-to analysis is that no one really knows how to do the more precise flow-insensitive points-to analyses in a modular fashion. Without modular analysis, communication overhead of intermediate states can become very high (discussed briefly in Zheng and Rugina's POPL 2008 paper). I've had some thoughts on how to get around this, but I think there is still research to be done. Nevertheless, I too await the cloud session at some future PLDI :-).
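To make the overhead concrete, here's a toy Python sketch of one superstep of a partitioned Andersen-style solver -- the partitioning scheme and `owner` lookup are invented for illustration; the point is that every fact derived for a variable owned by another worker has to be shipped across:

```python
def owner(partitions, var):
    """Toy lookup: which worker owns this variable?
    Assumes every variable is owned by exactly one worker."""
    for wid, (owned, _) in partitions.items():
        if var in owned:
            return wid

def superstep(partitions, assigns):
    """One round of a partitioned points-to fixpoint (BSP style).

    partitions: worker id -> (set of owned vars, set of (var, obj) facts)
    assigns:    set of (to, from) copy edges
    Facts about variables owned elsewhere go into an outbox; that
    cross-worker traffic is the intermediate state that blows up
    when the constraint graph doesn't partition cleanly.
    """
    outbox = {wid: set() for wid in partitions}
    for wid, (owned, pts) in partitions.items():
        for (to, frm) in assigns:
            if frm not in owned:
                continue
            for o in {o for (v, o) in pts if v == frm}:
                dest = owner(partitions, to)
                if dest == wid:
                    pts.add((to, o))           # cheap local update
                else:
                    outbox[dest].add((to, o))  # must be communicated
    sent = any(outbox.values())
    for wid, msgs in outbox.items():
        partitions[wid][1].update(msgs)        # deliver the messages
    return sent  # did any cross-worker facts move this round?
```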
(This is Manu, BTW.)
I'm not sure that POPL's goal is to blow your mind about how we write programs, though the name BYMPL has some appeal.
I'm surprised you're not interested in Chlipala's fully verified compiler. (I'm not surprised that you're not interested in my contracts stuff. :))
Howdy Manu :) The points-to analysis I'm referring to is probably going to be at OOPSLA as "Strictly Declarative Specification of Sophisticated Points-to Analyses", but I may have that wrong. Their point was that, since we can phrase many of these analyses as Datalog programs, we can do database-style optimizations on them instead of using BDDs. If we're iteratively expanding these relations (so a cyclic data flow system), I'd be hopeful. I don't really follow this area beyond the talks and some initial reading, but it has struck me as odd for a while. Not having seen the workloads, it's hard to speculate, but if they're truly big, I'd expect enough isolation.
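Roughly, the game is turning rules like pts(x, o) :- assign(x, y), pts(y, o) into a fixpoint over relations; a brute-force Python sketch, with the relation names invented for illustration (a real engine would add semi-naive evaluation and indexes -- i.e., the database-style optimizations):

```python
def andersen(news, assigns):
    """Naive fixpoint over two Datalog-style rules:

        pts(x, o) :- new(x, o).
        pts(x, o) :- assign(x, y), pts(y, o).
    """
    pts = set(news)
    changed = True
    while changed:
        changed = False
        for (to, frm) in assigns:
            for (v, o) in list(pts):
                if v == frm and (to, o) not in pts:
                    pts.add((to, o))
                    changed = True
    return pts

# a = new A; b = a; c = b
print(andersen({("a", "A")}, {("b", "a"), ("c", "b")}))
# -> {('a', 'A'), ('b', 'A'), ('c', 'A')}
```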
I'm fairly sated on cloud and parallel computing :) The change in hardware realities just hasn't fully percolated to day-to-day algorithms work, so I get surprised when I read about anything that takes more than a minute of CPU and doesn't exploit parallelism.
Mike: I actually skimmed your paper earlier this summer. Belatedly, congrats :)
I'm pretty convinced that authoring programs royally sucks relative to what better languages and tools could feasibly achieve. I want a new language features conference, not a grab bag of computer-aided verification, compiler construction, etc.: there are already quality conferences for those. We always find new PL feature papers in SC etc.: why don't we see more within the actual PL conferences?
Hey, if you're interested in really thinking about this, we recently had a paper in SAS that might spark your imagination:
http://domino.watson.ibm.com/comm/research_people.nsf/pages/msridhar.index.html/$FILE/sas09-andersens.pdf
We show that the constraint graphs for real-world Java programs have a sparseness property that makes analysis cheaper than the worst case. What does this mean for parallelization? Not sure, but maybe something :-)
Ok, let's try a real link: here
Hmmm, busy with deadlines for the next few weekends, but I'll give it a look after the first batch :) I'll probably be more fired up about it after OOPSLA. To be clear -- you're not worried about the performance of flow-insensitive analyses, only flow-sensitive ones? (If I remember right, a lot of security papers say something like k = 2 or 3.)