Powerful languages need pointers. Some languages try to make every name a pointer, and then pretend not to have pointers. We’re not fooled. Many make everything a pointer except the most useful things, as in Lisp and Java. Languages that make it hard to tell whether something is a pointer or not deserve a whole ‘nother posting. Here I’m going to talk about dereference syntax.

Dereference syntax was invented for assembly language. It was an elegant way to express an addressing mode, to set a particular bit in an instruction word. One common notation was a prefix * (asterisk). Others used ‘@‘, or parentheses, or brackets. Anything worked fine in assembly code, because there were no expressions to speak of. Prefix dereference was easy to understand, and caused no trouble.

When actual languages came along, prefix dereference operators were familiar and conventional, so they went in without much thought. It was just the obvious way to do things. It caused trouble in precursors to C, with expressions like (*p).i, leading to an additional operator to allow p->i. Pascal, wonder of wonders, got it right, with a postfix operator, thus p^.i, but a little too late for C to learn anything from it.

The mistake is revealed when we see constructions like (*p)->i — the new operator didn’t really help. In Pascal, of course, this would be p^^.i, without parentheses, and without the superfluous operator ->. Now, as syntax embarrassments go, this is a small matter. Mistakes are usually easy for the compiler to catch, and it doesn’t make most code much harder to read. To copy C declaration syntax, as in Java, is much worse. Still, why copy a mistake, when you can just get it right? C++ could have added a postfix dereference operator any time, but it would have added complexity, not reduced it. Google’s proprietary language Go improves on C’s declaration syntax, but copies the much more easily fixed dereference mistake.

C gets a free pass. Not so every language that apes C syntax without C compatibility. For any such language, prefix pointer dereference syntax is an embarrassing mistake. Pascal got so few things right. Let us at least acknowledge and carry those forward.


People design new programming languages all the time. Each new language is awful in so many ways, but people don’t learn; the next is even worse. Here I’m going to explore how languages go wrong, and why, and a few cases where a language got something profoundly right — usually by contrast to the rest who got it wrong. I’ll draw examples from lots of languages.

Many languages embody mistakes that are forgivable, because they were doing something for the first time, or because people didn’t yet understand the consequences so well. FORTRAN, LISP, COBOL, C, Algol — anything that predates 1972 — deserves a free pass. Everything was so hard, back then, that getting anything right was a triumph. Other languages get things wrong, but can’t help it. C++ adopted C’s mistakes, but had no choice about it; upward-compatible means bug-compatible, perforce. Languages that copy those mistakes, with full benefit of hindsight, have no excuse (cough Java cough). Languages that copy what earlier languages actually managed to get right, and get them wrong (cough Java cough), likewise deserve no mercy. Languages that try something interesting and new almost always get it tragically wrong. That would not be so bad if anybody would learn from it, and do it better next time. Sometimes they do. It happens rarely enough that we can afford to devote individual posts to what somebody, somehow, finally got right.

People posting blinkered defenses of their favorite language will be mocked with little more mercy than their language got, albeit with grace, humor, and style. Feel free to join in skewering idiocy, but mind Muphry’s Law. It’s hard to write about idiocy without calling attention to our own. We all have plenty, but we don’t all need to display it all the time.