Monday, June 29, 2009

The point of semantics

There are essentially three purposes for language semantics. First and foremost, as a smell test to guide language design: if you can't define a feature, it's probably bad. Second, to help other implementers of the language and the programmers who use it -- unfortunately, I'd claim we generally fail on this point. Third, to help structure proofs about the language.

As part of the preparation for a paper submission, I'm finishing up my formalization of a subset of CSS 2.1 (blocks, inlines, inline-blocks, and floats) from last year. My first two, more direct formalization approaches failed the smell test, so Ras and I created a more orthogonal kernel language. It's small, and since the CSS spec is a scattered hodge-podge of prose and visual examples riddled with ambiguities, we phrase the kernel as a total, deterministic attribute grammar that is easy to evaluate in parallel. Finally, to show that it can be implemented efficiently (e.g., linear in the number of elements on a page, meaning no reflows), the grammar without floats admits a syntactic proof, and the version with floats then only has to explain away some edge cases (using a single-assignment invariant, which can probably also be made syntactic).
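
To give a flavor of the "attribute grammar" shape, here is a toy sketch in Python. The Block/Inline node types and their sizing rules are made up for illustration of the width-down, height-up attribute flow; this is not CSS 2.1 and not the kernel language from the paper:

    # Toy layout pass phrased as an attribute grammar: the available
    # width is an inherited attribute flowing down, the used height is
    # a synthesized attribute flowing up, and every box is visited
    # exactly once. Block/Inline and the sizing rules are invented for
    # illustration only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Box:
        children: List["Box"] = field(default_factory=list)
        used_width: float = 0.0    # synthesized
        used_height: float = 0.0   # synthesized

    @dataclass
    class Block(Box):
        pass

    @dataclass
    class Inline(Box):
        intrinsic_width: float = 0.0
        intrinsic_height: float = 0.0

    def layout(box: Box, avail_width: float) -> None:
        if isinstance(box, Block):
            # block children stack vertically, each given the full width
            box.used_width = avail_width
            for child in box.children:
                layout(child, avail_width)
            box.used_height = sum(c.used_height for c in box.children)
        else:  # Inline
            if not box.children:           # a leaf: a word or an image
                box.used_width = box.intrinsic_width
                box.used_height = box.intrinsic_height
                return
            # naive line breaking: pack children into lines of avail_width
            x = line_h = total_h = 0.0
            for child in box.children:
                layout(child, avail_width)
                if x > 0 and x + child.used_width > avail_width:
                    total_h += line_h      # wrap to a new line
                    x = line_h = 0.0
                x += child.used_width
                line_h = max(line_h, child.used_height)
            box.used_width = avail_width
            box.used_height = total_h + line_h

Each box is touched once -- width flows down, height flows up -- which is the intuition behind the linear-time, no-reflow claim; floats break this clean down/up split, which is where the single-assignment invariant in the proof comes in.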

All of this should have happened 10 years ago. However, the academic language design community, as usual, seems to have been late to the party. Instead, we have huge, slow interpreters that don't agree with each other and a generation of abused and confused artists and designers.

9 comments:

  1. Hi Leo. I tried to publish a semantics of Subtext, but one reviewer rejected it on the grounds that the only point of a formal semantics was to prove theorems, whereas I was using it to explain and document. Not everyone agrees with you about the point of semantics.

  2. The semantics seem necessary for a solid PL, but not sufficient. In a paper setting, closely focusing on them should have a reason. Perhaps surprisingly, I think the PLDI and POPL community would be better served if they instead focused much more on usability studies -- are languages solving the higher-level problem or are they lipstick on a pig? That might not even require the semantics to show up at all in a paper!

    Unfortunately, the 'good' conferences don't balance productivity-oriented PL research against performance- or verification-driven ideas very well. Imagine reviewing the papers. A type theory paper doesn't even need an implementation, just nice proofs, while an optimization paper takes an existing system, changes something about it, and measures the performance. A whole-cloth design paper, though, should evaluate use of the language by others (e.g., to properly evaluate BitC, someone needs to write an OS in it and data-mine the bug counts!). The time required for what is nominally 'language design' is wildly unbalanced across these, yet we submit to the same conferences.

    As an industry, we're suffering -- PLs don't come out of the academic PL community anymore, and I think artifacts of the publishing process are part of the negative feedback loop that got us here. At a recent retreat, the industrial conclusion was that PL design is a top priority (the other topics were bug finding / analysis)... and the academic side added the caveat that all you need to do it is tenure.

  3. I agree there is a problem, but I disagree that usability studies are the solution. It is impossible to run a controlled study on anything but the tiniest of incremental language changes. No one has ever empirically demonstrated that OO is better than procedural programming.

    I think the problem is that PL research is trying to pretend it is some form of math or science, when it is mostly neither. It is a form of design that can only be judged subjectively. The seminal PL papers of the '70s and '80s could not be published today, because they are often just an informal description of an idea and some examples showing why it is cool.

  4. This comment has been removed by the author.

  5. Since GOTO was considered harmful (~'68) and CFA0 appeared ('88), the computing industry has grown by orders of magnitude: we're already deluged by ideas (which are cheap). The competitive nature of top conferences provides a quality filter and a spotlight. Another filter acts on the authors themselves, by adding hoops to go through: if an author doesn't feel strongly enough about their work to go through them, why should the reader? (... though this filter rewards painfully incremental but meticulous papers, as is becoming common -- the codebase or formalism becomes a boring hammer).


    Usability and case studies aren't THE solution, but for many (most?) PL papers, they'd be revealing as to what problems the papers actually solve and what still needs to be worked on. It's hard for me to take not just a lot of papers but whole *subfields* seriously because of this. The bar isn't necessarily that much higher; publishing rates might drop 2x, but I suspect publishing quality would go up much more. PLDI/POPL are moving in this direction, as noted in the first paragraph -- I just don't agree with the hoops they're putting in. Usability and case studies are both revealing and act as rate limiters.

    I'm not sure what you mean by reviewing more subjectively. Current conferences are already dishearteningly prone to trends, and I've started to get reviewer comments (good and bad, mistaken or not) that reek of favoritism and pet interests -- I don't like this sort of personal discretion and try to avoid it. What do you propose?

  6. This comment has been removed by the author.

  7. Lee, I don't really understand what you're proposing. Caching content across users, improving the interactive experience, etc.? Opera has been simplifying pages, and I know others are working on precomputing static content.

    Long-term, however, I don't think these solutions are very interesting for the typical case. Something like X does a lot of this already: I think we'll see a divergence between very thick and very thin clients, so I'd target one or the other. In-between scenarios are fine for startups and for pushing papers until then.

    As for energy and power efficiency... I always find them tricky to estimate and compare :) There are also costs associated with using proxies, especially once we add computation, and per-client computation at that.

  8. (That was rather poorly worded and poorly thought-out. Take two.)

    Most pages are created not as static one-offs but from templates, with a grid of boxes that content gets shoved into. The headers and footers within a site session, for example, tend to stay fairly consistent.

    How much do you think, a priori, it would speed things up to cache the parse tree and the stylesheet cascade for each page, and then reuse that work when encountering another, similar page? (Roughly the kind of reuse sketched at the end of this comment.)

    One of the things that surprised me, not knowing much about reflow implementations, is the behavior of NoDictionaries.com, where there's a prominent slider you can drag to reflow the page to show more or less Latin vocabulary: reflow in inline mode was 1-2 orders of magnitude slower than in block mode, and Safari was about a half-order of magnitude faster than IE, which in turn was about a half-order of magnitude faster than Firefox 3.1 at the time.

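    Concretely, the kind of reuse I'm picturing in the question above, as a toy Python sketch -- parse_and_cascade, the template fingerprint, and the stylesheet hash are all stand-ins I made up, not any engine's real API:

        # Memoize the parse + cascade work by a template fingerprint so a
        # second page from the same template only pays for layout. All of
        # the names here are invented for illustration.
        from functools import lru_cache

        @lru_cache(maxsize=128)
        def parse_and_cascade(template_fingerprint, stylesheet_hash):
            # stand-in for the expensive work: building the box-tree
            # skeleton and resolving the stylesheet cascade
            return ("skeleton for " + template_fingerprint,
                    "computed styles for " + stylesheet_hash)

        def render(template_fingerprint, stylesheet_hash, content):
            skeleton, styles = parse_and_cascade(template_fingerprint,
                                                 stylesheet_hash)
            # per-page work: pour the new content into the cached skeleton
            # and lay it out -- only this part runs on every page view
            return (skeleton, styles, content)

        # two pages from the same template: the second hits the cache
        render("news-article-v3", "site.css@abc123", "story one")
        render("news-article-v3", "site.css@abc123", "story two")
        print(parse_and_cascade.cache_info())   # hits=1, misses=1

    The open question is how coarse that fingerprint can be before the cached work stops being reusable for the new content.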