QUINTESSENCE Five elements. One fantastic journey.
"The show is a spectacular feast for the senses that combines modern dancing, Russian folk music, and clownery that together create a truly unique theatrical experience."
Quintessence features the combined efforts of several renowned performing troupes:
• World famous clown-mime theater Licedei,
• Acclaimed Russian ambient folk singers Ivan Kupala,
• Pioneering folk-modern troupe Firebird Dance Theater,
• California's home-grown, infamous Siberian surf-rock band Red Elvises
I got a voicemail from Ilya (geek in residence @ LucasArts now) about this 15 minutes ago, and, as of 5 minutes ago, I'm a proud ticket holder.
As an alternative illustration of why I like my lifestyle here, here are my tentative winter break plans:
- Take another stab at dependent type theory
- Work on my guided transparency + imperative FRP paper
- New Years in NYC
- Fix my structural type-feedback JS analysis
- ParLab retreat + poster, start thinking about FRP, concurrency, asynchrony, & parallelism
- Check out the last day of POPL
- Either muck with Haskell or Erlang, attempt a JS concolic tester, or automatically find a particular class of security flaws in Firefox
- Start reading papers for prelims (reattempt π-calculus, Hoare logic, & abstract interpretation?)
I'm reviewing case studies from our compiler students, who used a simplified variant of FRP for a project - very interesting reading. I rarely see PL papers that involve user studies, so this is a novel experience. Knowing that a <pick-favorite-language>-genius can write a fully featured, verifiable, horizontally scaling web app in <small-rational>K lines of code in a language/framework they designed doesn't help me much. But what would?

Structuring a user study, recruiting users, and then extracting meaningful results is hard. If the interest is in getting feedback from 'average joe' programmers, not expert programming language enthusiasts, then, in general, I should only expect quantitative, not qualitative, statements from users. Beyond that, I must guide them in what to answer, which already implicitly biases their answers. For example, I'm interested in how many errors are due to the distinction between discrete event streams and continuously valued behaviours - yet users may misattribute their errors because they don't know better.

One difficult but effective approach I recently saw, by Dan Grossman in his type-error-finding paper, was to actually instrument the compiler to record user actions; this let him do a detailed postmortem analysis and arrive at a solid categorization. A time suck, but I believe going this extra mile is part of what separates the scientists from the mathematicians in programming language theory. At least in the conference proceedings and journals I tend to read, the latter far outnumber the former, and I see it as a field of artists with similar taste ("schools of thought"), not actual scientists.
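To make that event/behaviour distinction concrete, here's a minimal TypeScript sketch (toy signals invented for illustration, not any real FRP library's API): a behaviour can be sampled at any time, while an event stream has no value between occurrences, which is exactly the gap users trip over.

```typescript
// A Behavior models a continuously valued signal: it has a value at
// every instant, so sampling at an arbitrary time is always defined.
type Behavior<A> = (t: number) => A;

// An event stream is a discrete sequence of timed occurrences; between
// occurrences there is simply no value to sample.
// (Named Evt to avoid colliding with the DOM's built-in Event type.)
type Evt<A> = Array<[number, A]>;

// Sampling a behavior is total.
function at<A>(b: Behavior<A>, t: number): A {
  return b(t);
}

// The closest analogue for an event stream is asking which occurrences
// happened up to time t -- a (possibly empty) list, not a single value.
function upTo<A>(e: Evt<A>, t: number): Array<[number, A]> {
  return e.filter(([time]) => time <= t);
}

// Toy signals for illustration:
const mouseX: Behavior<number> = (t) => Math.sin(t); // defined everywhere
const clicks: Evt<string> = [[0.5, "click"], [2.0, "click"]];
```

A student who writes `at(clicks, 1.0)` is making a category error, not a typo - and without instrumentation, they'd likely report it as a type-system annoyance rather than a conceptual confusion.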
At this point, I feel we need far more of the former than we currently have, despite the freedom of the latter and their tendency to focus on foundations (which I still believe is important). Even in good old program analysis, where one can compare the results of one's tool against others, it seems most papers make only a token, biased effort - Lhotak's "Comparing Call Graphs" recently demonstrated to me again just how susceptible analyses are to environmental conditions, and stressed the pains one ought to take to prepare fair comparisons. The lack of such care in many papers at supposedly 'top tier' conferences embarrasses me as someone trying to enter the field. Whenever I review a paper that includes this sort of result (or neglects it), I am tempted to brush it aside because no new technique is presented; but when I step back, this information may well be more useful to others than whatever technique is actually being presented, since it helps guide future focus. </ENDRANT>