People design new programming languages all the time. Each new language is awful in so many ways, but people don’t learn; the next is even worse. Here I’m going to explore how languages go wrong, and why, and a few cases where a language got something profoundly right — usually by contrast with the rest, which got it wrong. I’ll draw examples from lots of languages.

Many languages embody mistakes that are forgivable, because they were doing something for the first time, or made at a time when people didn’t understand the consequences so well. FORTRAN, LISP, COBOL, C, Algol — anything that predates 1972 — deserves a free pass. Everything was so hard, back then, that getting anything right was a triumph. Other languages get things wrong, but can’t help it. C++ adopted C’s mistakes, but had no choice about it; upward-compatible means bug-compatible, perforce. Languages that copy those mistakes, with full benefit of hindsight, have no excuse (cough Java cough). Languages that copy what earlier languages actually managed to get right, and get it wrong (cough Java cough), likewise deserve no mercy. Languages that try something interesting and new almost always get it tragically wrong. That would not be so bad if anybody would learn from it, and do it better next time. Sometimes they do. It happens rarely enough that we can afford to devote individual posts to what somebody, somehow, finally got right.

People posting blinkered defenses of their favorite language will be mocked with little more mercy than their language got, albeit with grace, humor, and style. Feel free to join in skewering idiocy, but mind Muphry’s Law: it’s hard to write about idiocy without calling attention to our own. We all have plenty, but we don’t all need to display it all the time.