Are we teaching the wrong languages?
Through an email list I read (appropriately named “geeks”), this talk by Rich Hickey, author of Clojure, recently came to my attention. It’s a bit long and in some places repetitive; certainly not the best example of how to give a technical talk. Verbal tics like saying “right?” all the time bug the hell out of me. But the content is legitimate, intense, gets into psychology and perception, and is challenging mostly because it asks a fundamental question that few people really examine: do today’s dominant programming languages (Java, PHP, Python, Ruby, C#, etc.) and their object-oriented conceptual model serve us well? Parentheses jokes aside, there are CS concepts that can’t be explored without a functional language and higher-order functions.
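Since higher-order functions are the crux of that claim, here’s a minimal sketch of the idea, in Python rather than a Lisp purely for familiarity: functions that take or return other functions, so that behavior itself becomes a value you can combine.

```python
from functools import reduce

def compose(f, g):
    """Return a new function that applies g first, then f."""
    return lambda x: f(g(x))

square = lambda x: x * x
increment = lambda x: x + 1

# Build a new function out of two existing ones.
square_then_increment = compose(increment, square)
print(square_then_increment(3))  # 10

# reduce treats the combining operation itself as an argument:
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)
print(total)  # 10
```

The names here are my own; the point is only that once functions are first-class values, composition and folding become ordinary programming moves rather than language features.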
Up until today, I was shelving this as another interesting but academic computer science talk, irrelevant to practical business and engineering, with the minor mental note that it’s pretty cool that someone built a functional language that runs on top of the JVM. That caught my eye, partly because I was unaware of Scala, Groovy, and JRuby until now, but mostly because making all the language features available at runtime (that is, having a working, complete eval function) is a particularly cool hack that sets Clojure apart. It’s a nice trick, and I’m not sure most working programmers even understand what it takes to accomplish such a task. I know I haven’t written a compiler myself. (Although, because of SICP, I have written a metacircular evaluator! And now you see my bias toward these sorts of languages for teaching.) Overall, I wouldn’t have much to say about this except for a recent post on the Google Code blog about running Clojure on App Engine.
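For readers who haven’t met the idea: a metacircular evaluator interprets a language using that language’s own machinery, and the real thing in SICP is written in Scheme with environments, lambdas, and special forms. Purely as a toy illustration of the eval idea, here is a few-line expression evaluator in Python; the tuple encoding and the tiny environment are my own simplification, not anything from the talk or the book.

```python
import operator

# A tiny environment mapping symbols to their values.
ENV = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(expr, env=ENV):
    """Evaluate a nested-tuple expression: ('+', 1, ('*', 2, 3)) means (+ 1 (* 2 3))."""
    if isinstance(expr, (int, float)):   # numbers are self-evaluating
        return expr
    if isinstance(expr, str):            # symbols look up their value
        return env[expr]
    op, *args = expr                     # application: evaluate operator and operands, then apply
    fn = evaluate(op, env)
    return fn(*(evaluate(a, env) for a in args))

print(evaluate(('+', 1, ('*', 2, 3))))  # 7
```

The structure (self-evaluating atoms, symbol lookup, recursive apply) is the skeleton of the SICP evaluator; everything hard, like closures and special forms, is omitted here.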
Now the picture is starting to coalesce for me: what Rich is talking about is writing a new kind of code, the kind that works naturally with the distributed, parallel, concurrent platforms we know are required for scaling. To a lesser extent, this is an issue even for single-threaded applications in a multicore environment. But the object hegemony has an answer, and it’s a pretty powerful one. The mapping between object persistence and relational databases seems, to me, fully dialed in by Ruby on Rails. And with that magic in place, we can push the work of scaling down into the database, use it as the locking transaction mechanism, distribute it across multiple servers, and then load-balance the front end and static content across other web servers. We can do that. We know how to do that. Someone will sell it to you in a cloud. But do we really want to be doing that? Do the programming metaphors of the dominant commercial languages really help us understand and take best advantage of the hardware metaphors we’re using? (It’s metaphors all the way down until you get to turtles.)
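The functional counter-pitch, as I understand it: when work is expressed as pure functions over immutable data, there is no shared state to lock, so the same code runs sequentially or in parallel unchanged. A toy Python sketch of that property (the `word_count` example is mine, not Rich’s):

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(line):
    # Pure function: no shared mutable state, so lines can be processed
    # in any order, on any thread (or machine), with no locks at all.
    return len(line.split())

lines = ["a b c", "d e", "f"]

# Sequential and parallel versions give the same answer, by construction.
sequential = list(map(word_count, lines))
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(word_count, lines))

print(sequential == parallel, sum(parallel))  # True 6
```

Contrast that with the object-persistence approach above, where correctness under concurrency is something the database enforces for you rather than something the code guarantees by its shape.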
I wonder if the last of the procedural programmers felt this way. A change is coming. It seems big and hard to understand and potentially misguided. But more than that, I wonder if the teachers of the last of the procedural programmers felt the way I feel now. Did they look out at the world and wonder if they were training minds to think in a very particular set of metaphors that would soon lose favor to more powerful abstractions? Or perhaps I’m being too dramatic about this, and we will simply see more languages, perhaps functional ones, well suited to parallel and distributed computing take a place in the software development world alongside the object-oriented application development languages and the realtime languages for lower-level work. Either way, I do wonder if I am doing my students a disservice by not teaching them the concepts available from a functional language.