Brinkster deleted my account (shame on them!) and I might have a copy of the old database, albeit a bit stale around the edges (shame on me!), that I can port over.
Recently, in Leo:
Volleyball has been going exceedingly well. I'm not really at the spiking phase, but I can block and play front rather well. Back row is still relatively weak for me, but then again, it is for almost everyone else too. We had spectators and beer this time, both of which helped us destroy the undergraduates. Not sure what is in store for the weekend, but hopefully some time in the city (see EunYee before she heads back to Texas? or Ilya now that he's @ Lucas Arts in the Presidio? Or finally go to Mt. Diablo and open up a bottle from Peju...).
On the more technical side:
1. I began a research project in earnest (or as earnest as I can be if I switch to parallel FRP come January). Given some dynamic data, I want to know a mixture of logical and probable truth: the dynamic nature of the analysis hints at some over-approximation, but also at some notion of likelihood. Right now I'm applying this to basic schema learning (regular tree automata, anyone?), but I also thought of a corresponding neat trick for bottleneck/contention analysis of parallel systems. If this project works out, I would like to shoehorn it onto an implementation of CouchDB, so that you can always get a likely type description of a view of your data, and onto a goal-oriented concolic tester (schema evolution and bug finding become interesting at that point as well).
For example: length: 'a list -> int, or, expanding function definitions, curriedAdd: int -> int -> int. I never really thought about how that plays with subtyping (perhaps something about covariance/contravariance?), but we'll see.
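As a toy illustration of what a "likely type description" could look like at the value level, here is a sketch of my own (not the project's actual machinery, and the `likely_type` helper and sample data are invented): summarize the runtime types observed for a field as a probability distribution, which is the probabilistic half of the logical-plus-probable story above.

```python
from collections import Counter

def likely_type(samples):
    """Summarize the observed types of a field's values as probabilities.

    A toy stand-in for a 'likely type description': each observed
    runtime type gets the fraction of samples it accounted for.
    """
    counts = Counter(type(v).__name__ for v in samples)
    total = len(samples)
    return {name: n / total for name, n in counts.items()}

# e.g. a view column that is usually an int but occasionally null
dist = likely_type([1, 2, 3, None])
# dist == {'int': 0.75, 'NoneType': 0.25}
```

A real version would fold these observations into structure (records, lists, alternation) rather than flat scalar tags, which is where the regular tree automata would come in.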
There are also two problems I got stuck on today:
1. Dynamic analysis is currently the way to go for testing parallel apps, often with static algorithms bolted on top: a valid path is found dynamically and reorderings of it are analyzed. That gives you two axes of search: along the path, and off the path. On the path is tractable, but once you're done, you must move along the other axis; my suspicion is that when you randomly seed again, you'll likely end up on an 'equivalent' path. Last week, I wrote a concolic tester with the following results (number of static branches in the source covered, out of 170 possible branch points, over the paths traversed through the program):
However, this was without reordering on a path. I'm starting to appreciate this problem more, and this incarnation seems particularly thorny.
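The re-seeding suspicion can be sketched with a toy harness (the branchy program and the seeding loop below are hypothetical, not my actual tester): hash each run down to the set of branches it took, and watch many random seeds collapse into a handful of coverage-equivalent classes.

```python
import random

def program(x, y):
    """Tiny branchy program; returns the set of branch ids it exercised."""
    taken = set()
    if x > 0:
        taken.add('x_pos')
    else:
        taken.add('x_neg')
    if x > 0 and y > x:
        taken.add('y_gt_x')
    else:
        taken.add('y_le_x')
    return frozenset(taken)

def random_seeding(trials, seed=0):
    """Run the program on random inputs and collect the distinct
    branch-coverage signatures (a crude notion of path equivalence)."""
    rng = random.Random(seed)
    classes = set()
    for _ in range(trials):
        x = rng.randint(-10, 10)
        y = rng.randint(-10, 10)
        classes.add(program(x, y))
    return classes

classes = random_seeding(1000)
# Only 3 signatures are feasible here, so a thousand random seeds
# buy you at most 3 genuinely different paths.
```

The point of the sketch is the ratio: trials grow linearly while distinct classes saturate almost immediately, which is exactly why blindly re-seeding along the off-path axis feels wasteful.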
2. AJ is submitting a neat paper on churn in OO: if there is constant heap allocation in your overly-structured OO program, you might find correlations with exit points, suggesting where you should automatically rewrite the bytecode to allocate on the stack instead of the heap, and thus garbage collect for free. Part of it raised an optimization problem: in a program tree where each node and edge has an integer value, and a path is scored by its alternating sum product (n * (e + n * (...))) divided by path length, what is the maximal or minimal path? It's trivial without the division, but I didn't see a solution with it (one thought was that if you knew bounds, you could rescale to [0-1] values and make some convergence argument).
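To make the scoring concrete, here is a brute-force sketch (the tree encoding and the particular values are mine, purely illustrative) that also shows why the division by path length is what makes the problem interesting: it can flip which path wins.

```python
def path_score(nodes, edges):
    """Score nodes n1..nk and edges e1..e(k-1) as
    n1*(e1 + n2*(e2 + ... + nk)) / k  (length-normalized alternating sum-product)."""
    acc = nodes[-1]
    for n, e in zip(reversed(nodes[:-1]), reversed(edges)):
        acc = n * (e + acc)
    return acc / len(nodes)

def extreme_paths(tree):
    """Brute-force the min and max score over all root-to-leaf paths.
    tree = (node_value, [(edge_value, child_tree), ...])"""
    scores = []
    def walk(t, nodes, edges):
        value, children = t
        nodes = nodes + [value]
        if not children:
            scores.append(path_score(nodes, edges))
        for e, child in children:
            walk(child, nodes, edges + [e])
    walk(tree, [], [])
    return min(scores), max(scores)

# Short path 2 -(1)-> 3 vs long path 2 -(1)-> 1 -(1)-> 3.
# Unnormalized the long path wins (10 vs 8); divided by length the
# short path wins (4.0 vs ~3.33), so greedy-on-raw-score breaks.
tree = (2, [(1, (3, [])),
            (1, (1, [(1, (3, []))]))])
lo, hi = extreme_paths(tree)
```

Exhaustive search is of course exponential; the sketch is just to pin down the objective, not to answer whether the normalized version admits the kind of dynamic-programming solution the unnormalized one does.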
Jono had some interesting intuitions about first-class events that felt something like polymorphic exception handling that I want to mull over (even though we all know FRP is the way to go!). Maybe I'll pick Ben's brain a bit.