There are several historic takes. A natural language interface is one (code search engines are an interesting new spin on it), and visual programming is another (Conal Elliott's tangible functional programming is wonderful, as is all of his work). Mira Dontcheva seems to be somewhere in the semantic web camp, focusing on user-oriented programming, and Rob Ennals's current take with MashMaker seems to really deliver on Mira's general goal: inferring the program by demonstration. Traditional programming-by-demonstration (PBD) failed because the domain was too large, but these two seem able to find sweet spots in more limited domains. Conversations with Shriram suggest he sees merit in NLP/ML-based approaches, either now or soon.
However, no matter what we do, there will still be code at some level. Maybe there will indeed be a meta-circular PBD system someday, but I'm not dropping out of grad school after one semester out of fear and taking up plan B with Matt (starting a punk band in Japan). So: how do we make a language more usable?
This paper takes another common tack: coding theory. The analogy doesn't hold up precisely (imagine taking all programs of the same length and permuting the assignment from syntax to semantics), but the intuition is clear: you should be able to say what you want without struggling to encode it in the language. There are many competing cognitive-simplicity principles, and the paper offers no experimental validation for this particular choice, so I was unconvinced and left feeling the proposal was rather arbitrary. In communication, too concise is bad: redundancy helps when a word is missed. On the other hand, there's Miller's classic "seven, plus or minus two" result suggesting we can hold roughly seven ideas in mind at once. Now compare Haskell to Java to some configuration language, and things aren't so simple. Tack on abstraction capabilities, and the question of what form they take, and length correlates even less with usability. There's also the problem of learning: a DSL can be great, but the learning curve for edge cases in its semantics can be a killer. That's a popular argument against macros, which introduce new control constructs, as opposed to plain functions (crazy approaches like Flapjax aside).
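As a throwaway illustration of why program length is a poor proxy for usability (my example, not the paper's), here is the same computation written tersely and redundantly:

```python
# Two renderings of "sum the squares of the even numbers".
# The terse pipeline is shorter; the explicit loop is longer and
# repeats itself, but each step names what it does -- redundancy
# acting as error-correction for the reader.

def sum_sq_evens_terse(xs):
    return sum(x * x for x in xs if x % 2 == 0)

def sum_sq_evens_explicit(xs):
    total = 0
    for x in xs:
        if x % 2 == 0:
            total += x * x
    return total
```

Both compute the same value; a pure length metric prefers the first, but says nothing about which one a newcomer decodes faster.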
Rather than a mathematical theory, I'm curious about axes, particularly with how orthogonal they are to each other and how much of an impact they have. One breakdown could be:
- Data abstraction
- Control abstraction
- Functional abstraction
- Syntactic abstraction
- Size (APIs)
- Learnability (language levels, similarity to previous systems)
- 'Smart' defaults (I'm not even sure if this is always good)
- IDE support (code completion, version control, code generation that makes writing quick but reading lengthy)
- Tangibility (REPLs would be a weak form of this)
- Static analyses
- Dynamic analyses (mostly as a jab at the static camp)
- Search (smart, as in Prospector, or just Google)
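To make the 'smart defaults' worry concrete, here's a hypothetical sketch (names and parameters are invented for illustration): defaults shrink the common call site, but they also hide decisions the caller never sees.

```python
# Hypothetical API sketch -- not from any real library.

def connect(host, port=80, timeout=30.0, retries=3):
    # Defaults make the common case a one-argument call...
    return {"host": host, "port": port,
            "timeout": timeout, "retries": retries}

# Short at the call site:
conn = connect("example.com")

# ...but a caller who never reads the signature may be surprised
# that three retries and a 30-second timeout were chosen for them --
# the "I'm not even sure this is always good" worry above.
```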