Monday, January 14, 2008

Buzz Buzz Buzz

I took a few weeks off to visit the family in RI, hang out in NYC, and make some progress on my mixed imperative/functional reactive programming paper, but then came back for some stimulating times. While most PL researchers in the Bay Area this past week were at POPL, a few slunk away to our (unfortunately) co-scheduled soup-to-nuts academia+industry retreat on parallelism in Tahoe. Beyond experimenting with elongated, controlled falls down the slopes, I heard a few interesting perspectives on the issue. I didn't sign any sort of NDA, but I suspect anonymity is the way to go - so imagine the sources as Sun-, NVidia-, Microsoft-, and Intel-level superstars, along with academics working on everything from mixed vector/threaded architectures to real-time music synthesizers to black hole simulators to auto-tuners to static race detection.

I found the following tidbits neat:
  • We're in a multicore world for the next 5 years, with infrastructure moving to manycore after that
  • New features need economic motivation (ex: hardware performance probes for non-scientific HPC uses)
  • Is parallelism the problem, or IO?
    • Corollary: it's easier to augment existing systems (ex: machine learning optimizers) than to restructure them
  • Languages
    • have ~1 devoted chief designer
    • break the generation rule for DSLs
    • have a progression from frameworks to DSLs
      • can this be automated?
      • interesting to see a percentage breakdown between developers of frameworks and developers using frameworks
        • out of these, how many can see up and down the abstraction stack?
        • ... and how far?
    • GPLs follow the generation rule
    • feature view:
      • what is the roadway for feature cross-pollination?
      • everyone wants generics/parametric polymorphism now - what parallelism primitives will be mainstream tomorrow? STM, textual barriers, maybe parallelism, ... (a small STM sketch follows this list)
  • GPGPU is out, GPU Computing is in: don't fit general constructs on the gpu, but find novel tasks appropriate for gpu constructs
    • Corollary: layers dealing with explicit parallelism are already fairly low-level, and thus potentially need this level of knowledge and control
  • In practice, auto-tuning is effective enough to provide almost an order of magnitude of performance gains
  • Layers concerned with responsiveness (music, UI) and performance (most hpc) are often separate
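Since STM keeps coming up as the primitive most likely to go mainstream, here's what it already looks like in Haskell's Control.Concurrent.STM - a minimal account-transfer sketch of my own (nothing shown at the retreat): composable atomic blocks with blocking retry, and no locks in user code.

    import Control.Concurrent.STM

    -- Move money between two accounts atomically; if the source balance is
    -- too low, 'retry' blocks the transaction until another thread changes it.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      if balance < amount
        then retry
        else do
          writeTVar from (balance - amount)
          modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 30)
      readTVarIO a >>= print   -- 70
      readTVarIO b >>= print   -- 30

The nice part for the "mainstream tomorrow" question is that transfer composes: two transfers can be glued into a single atomically block without worrying about lock ordering.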
I received primarily positive responses to my parallel browser scripting language proposal poster, and got to meet Jan Maessen of Fortress fame (I first met him when he gave a fun talk at NEPLS a while back on associativity/commutativity type declarations for improved composition and parallel scheduling). Fortress is pushing as much as it can into the libraries, including support for parallelism, so they're dealing with a lot of the problems we ran into with Flapjax, and I benefited from hearing about alternative PLT perspectives on it. Somehow Chris steered the conversation towards Kahn processes, so good times.
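To make the associativity point concrete: once a combining operator is declared associative, a library is free to evaluate a reduction as a balanced divide-and-conquer tree instead of a left-to-right fold. A rough sketch of that scheduling freedom, written with Haskell's parallel package (Control.Parallel) rather than Fortress syntax, and purely illustrative:

    import Control.Parallel (par, pseq)

    -- Divide-and-conquer sum: splitting the input and combining the halves
    -- with (+) is only valid because (+) is associative - exactly the kind of
    -- property those type-level declarations would hand to the scheduler.
    parSum :: [Int] -> Int
    parSum xs
      | length xs < 1000 = sum xs
      | otherwise        = left `par` (right `pseq` (left + right))
      where
        (ls, rs) = splitAt (length xs `div` 2) xs
        left     = parSum ls
        right    = parSum rs

    main :: IO ()
    main = print (parSum [1 .. 1000000])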

This week: paper writing, browser benchmarking & road-mapping, and movie machine learning side project.
