I found the following tidbits neat:
- We're in a multicore world for the next 5 years; infrastructure will go manycore after that
- New features need economic motivation (ex: hardware performance probes for non-scientific HPC uses)
- Is the problem parallelism, or I/O?
- Corollary: it's easier to augment (ex: machine-learning optimizers) than to restructure
- Languages
- have ~1 devoted chief designer
- break the generation rule for DSLs
- have a progression from frameworks to DSLs
- can this be automated?
- it would be interesting to see a percentage breakdown between developers of frameworks and developers using frameworks
- out of these, how many can see up and down the abstraction stack?
- ... and how far?
- GPLs follow the generation rule
- feature view:
- what is the path for feature cross-pollination?
- everyone wants generics/parametric polymorphism now; what parallelism primitives will be mainstream tomorrow? STM, textual barriers, maybe-parallelism, ... (a toy STM sketch follows this list)
- GPGPU is out, GPU computing is in: don't force general constructs onto the GPU, but find novel tasks appropriate for GPU constructs (see the kernel sketch after this list)
- Corollary: layers dealing with explicit parallelism are already fairly low-level and thus potentially need this level of knowledge and control
- In practice, auto-tuning is effective enough to provide almost an order of magnitude in performance gains (see the tuning loop after this list)
- Layers concerned with responsiveness (music, UI) and performance (most HPC) are often separate
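
On the STM bullet above: a minimal optimistic STM sketch in Python. Everything here (`TVar`, `atomically`, the version-validated commit scheme) is my own toy illustration of the primitive, not anything from the talk:

```python
import threading

_commit_lock = threading.Lock()   # serializes validation + commit

class TVar:
    """Transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Tx:
    """Per-transaction log: versions seen on read, buffered writes."""
    def __init__(self):
        self.reads = {}    # TVar -> version at first read
        self.writes = {}   # TVar -> pending new value

    def read(self, tv):
        if tv in self.writes:              # read-your-own-writes
            return self.writes[tv]
        self.reads.setdefault(tv, tv.version)
        return tv.value

    def write(self, tv, value):
        self.writes[tv] = value

def atomically(action):
    """Optimistically run action(tx); commit only if nothing we read
    changed in the meantime, otherwise discard the log and re-run."""
    while True:
        tx = Tx()
        result = action(tx)
        with _commit_lock:
            if all(tv.version == v for tv, v in tx.reads.items()):
                for tv, value in tx.writes.items():
                    tv.value = value
                    tv.version += 1
                return result
        # a concurrent commit invalidated a read: retry the transaction

# usage: an atomic transfer that can never expose a torn balance
checking, savings = TVar(100), TVar(0)

def transfer(tx):
    tx.write(checking, tx.read(checking) - 25)
    tx.write(savings, tx.read(savings) + 25)

atomically(transfer)
print(checking.value, savings.value)   # 75 25
```

The appeal over locks is composability: `transfer` stays atomic regardless of what other transactions run concurrently, with no lock-ordering discipline to get wrong.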
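On "GPU computing": the idea is to express the task as the GPU's native construct, a wide data-parallel map, rather than porting a general-purpose loop onto the device. A sketch using Numba's CUDA support (SAXPY is my own example; it assumes `numba` is installed and a CUDA-capable GPU is present):

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # one thread per element: a million-wide data-parallel map
    i = cuda.grid(1)
    if i < x.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
d_x, d_y = cuda.to_device(x), cuda.to_device(y)
d_out = cuda.device_array_like(d_x)

threads = 256
blocks = (n + threads - 1) // threads   # enough blocks to cover n
saxpy[blocks, threads](np.float32(2.0), d_x, d_y, d_out)
print(d_out.copy_to_host()[:4])
```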
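On auto-tuning: the magnitude-level gains claim is the talk's; the knob and workload below are my own illustration. A tuner is often just a timed search over a small parameter space, here the tile size of a cache-blocked transpose:

```python
import time
import numpy as np

def blocked_transpose(a, block):
    """Transpose in square tiles; `block` is the tuning knob."""
    n = a.shape[0]
    out = np.empty_like(a)
    for i in range(0, n, block):
        for j in range(0, n, block):
            out[j:j+block, i:i+block] = a[i:i+block, j:j+block].T
    return out

a = np.random.rand(4096, 4096)
best = (None, float("inf"))
for block in (16, 32, 64, 128, 256, 512):   # candidate tile sizes
    t0 = time.perf_counter()
    blocked_transpose(a, block)
    dt = time.perf_counter() - t0
    print(f"block={block:4d}: {dt:.3f}s")
    if dt < best[1]:
        best = (block, dt)
print("auto-tuned pick:", best)
```

The best tile size depends on the machine's cache hierarchy, which is exactly why measuring beats guessing.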
This week: paper writing, browser benchmarking & road-mapping, and a movie machine-learning side project.