Is there a compelling reason why OO languages do not enforce immutability?

I can think of at least five reasons why making all objects immutable (in e.g. Java) would be a good thing: thread safety, class invariants only needing to be checked once, easier parallelization, no defensive copying, and the possibility of more aggressive optimizations (e.g. shortcut fusion), provided you get I/O etc. under control. There are probably more.

What I can’t see is any compelling reason why an OO language’s type system would ever allow a user to mutate an object. In fact, “best practice” in nearly every OO language seems to be to make objects either immutable, or to mutate them as little as possible.

The only reason I can think of is memory usage: creating and copying new objects all over the place, every time a set method is called, would be inefficient. However, Java’s String and the wrapper classes for primitive types are all immutable, and with autoboxing there’s a tonne of object copying and creation everywhere, yet speed remains reasonable (for all but high-performance computing). Are there obvious reasons that I’ve missed?

That’s pretty damn compelling to me. Why should I construct a whole new object if all I want to do is poke some state?

Autoboxing is dumb.

Same reason you use a string builder instead of a string sometimes.
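To spell that out with a small sketch (the class and method names here are made up for illustration):

```java
// Repeated immutable-String concatenation copies every intermediate
// string, so joining n parts costs O(n^2) character copies overall;
// StringBuilder mutates one internal buffer and stays O(n).
class JoinDemo {
    static String joinImmutable(String[] parts) {
        String s = "";
        for (String p : parts) s = s + p; // each '+' allocates a fresh String
        return s;
    }

    static String joinMutable(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p); // appends into the same buffer
        return sb.toString();
    }
}
```

Both produce the same result; the difference only shows up in allocation behavior as the input grows.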

In “heavy-lifting” objects, deep copying makes things really slow.

Plenty of reasons. Immutability makes code easier to reason about, as an object’s invariants need only be established once, not every time the object’s state is changed by calling a setter or whatever. Further, there’s only one trend in hardware at the moment, and that’s adding more and more cores to processors and GPUs: “pure” code is simply easier to parallelize and make thread safe than “impure” code. Finally, enforced immutability combined with a sophisticated type system makes optimizations possible that are simply impossible in impure code, and those mitigate the cost of creating an excess of objects.
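A minimal sketch of the “invariants checked once” point (the class, field, and invariant here are hypothetical):

```java
// The non-negative-balance invariant lives in exactly one place (the
// constructor), and every "mutation" goes back through it by building
// a new object, so it can never be silently violated later.
final class Account {
    private final long balanceCents;

    Account(long balanceCents) {
        if (balanceCents < 0)
            throw new IllegalArgumentException("balance must be non-negative");
        this.balanceCents = balanceCents;
    }

    long balance() { return balanceCents; }

    Account deposit(long cents) {
        return new Account(balanceCents + cents); // invariant re-checked here
    }
}
```

With a mutable setter, by contrast, every code path that touches the field would need to re-establish the invariant.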

To echo others, there is no way this could scale.

For example, in a pure OO language with no primitive non-object types, imagine the humble Array object. If you created an array with a hundred slots, then each time you put something in that array, the whole array would become a new object under draconian immutability rules.

Now imagine performing a sort operation on an array of millions of elements, and it’s clear that this wouldn’t work without cheating a little.
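Here’s what that draconian version looks like as a sketch (a hypothetical class, not how any real language does it):

```java
import java.util.Arrays;

// A naive "immutable array": every update copies all n slots, so a
// single-element update is O(n) and n such updates cost O(n^2) -- which
// is why sorting millions of elements this way is hopeless without
// structural sharing.
final class FrozenArray {
    private final int[] slots;

    FrozenArray(int size) { this.slots = new int[size]; }
    private FrozenArray(int[] slots) { this.slots = slots; }

    int get(int i) { return slots[i]; }

    FrozenArray with(int i, int value) {
        int[] copy = Arrays.copyOf(slots, slots.length); // full copy per update
        copy[i] = value;
        return new FrozenArray(copy);
    }
}
```

Note that the old version genuinely survives each update, which is the upside; the full copy per update is the downside.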

I imagine that you are looking for something stronger than the somewhat cavalier way OO languages like Java handle immutability, but not to the point where an array needs to be recreated every time an element is tweaked. No?

Immutability doesn’t come for free.

Immutable often goes hand-in-hand with garbage collection. (They are separate concepts so theoretically they are unrelated to each other but in practice, they are offered together in various languages.)

Garbage collection is only practical now because CPUs have increased in power. Since the 1980s, CPU hardware has progressed to the point where we can now spare the cycles to run complicated garbage-sweep algorithms without major performance penalties. Computer memory has also grown in size, so that every drop of memory doesn’t have to be immediately released; it’s OK to “waste” some memory because you have an old generation of variables/arrays still hanging around for a few seconds before the garbage collector reclaims it.

However, if you look at the earlier history of OO languages, such as C++, which was designed in the 1980s, CPU power and memory sizes were too small to make a language concept such as “immutable” practical. In 1989, a desktop PC had maybe 640K or 1 megabyte of RAM. My PC today has 6000 megabytes of RAM. If you only have 640K of RAM, it would be very painful to dedicate half of that memory footprint to a garbage collector. Not much memory would be left for non-housekeeping items such as spreadsheets, word processing, etc.

Although it should be used sparingly, mutation is a very useful tool for customizing default behaviors. The key point is that if you design a bit of code to be immutable, then it should actually be immutable. In that sense, I think immutability would be a more sensible default than mutability.

That’s true. It’s also the precise reason why you shouldn’t use an OO language, and especially not Java, for those types of problems. If you want to write pure code with no mutability so you can parallelize it, use Haskell. Or if you want to throw it across a jillion CPUs, Erlang.

This problem is well studied in the functional-language community, where it is encountered all the time. The solution is not to implement arrays like you would in Java, but to implement Okasaki-style binary random access lists (see Okasaki’s PhD thesis, “Purely Functional Data Structures”; it’s actually a pretty interesting read, even if you aren’t a functional programmer).
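The core trick behind those structures, in its simplest possible form, is structural sharing rather than copying. A persistent cons list sketch (in Java for familiarity; names are made up):

```java
// "Adding" an element shares the entire old list instead of copying
// it, so prepending is O(1) and the old version stays fully usable.
// Okasaki's binary random access lists layer a tree shape on this
// idea to get O(log n) indexed reads and updates as well.
final class PList {
    static final PList EMPTY = new PList(0, null);

    final int head;
    final PList tail;

    private PList(int head, PList tail) { this.head = head; this.tail = tail; }

    PList cons(int x) { return new PList(x, this); } // shares `this`, copies nothing
}
```

The key observation is that `two.tail` is literally the same object as `one`, not a copy, so "old versions" cost nothing extra to keep around.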

Yes, garbage collection is now the default for nearly any new programming language. Including garbage collection isn’t a drawback, but an advantage. I’m not planning to create a C-like language for writing drivers in.

OK, this sounds interesting. Can you expand?

Well, sure. But the reason you wouldn’t use a language like Java is precisely the topic of this thread. Using Haskell, you don’t get (out of the box) the advantages of dynamic dispatch, inheritance, the fact that nearly every programmer is trained how to decompose a problem into classes, etc. etc.

You can do this quite efficiently, in fact. You just need a sufficiently clever implementation, and then you can provide real immutable collections (maps, sets, vectors, and, obviously, lists) with near-constant-time updated versions for the obvious changes (and still keep the original versions around with no performance penalty).

Clojure does exactly this. Here’s the relevant intro talk by Rich Hickey on how it works.

ETA: Clojure isn’t really an OO language, even though it runs on the JVM and does use all kinds of objects under the hood, and you can create Java-compatible objects using Clojure. It’s just not all that intuitive or reasonable to do that in pure-Clojure code.

Just to plug Clojure a bit more: you do get dynamic dispatch and inheritance in pure-Clojure code. It’s just not class-based (unless you want to work with native Java objects and classes, which you can).

It’s seriously interesting, and if you’re already familiar with Java, I do suggest you check it out. See

There are times (such as the aforementioned string manipulation) when immutability is a performance drain. Google’s new language, Go, is a concurrent language built for speed and it does not enforce immutability either. Java allows you to write software with nothing but immutable objects if that’s what you want.

Something I’ve been wondering about Clojure: if the compiler targets the JVM, how do they do tail call optimization?


Immutability is a special abstracted “fiction” that helps developers write better programs.

Ultimately, however, that abstracted fiction lives on top of a finite set of memory chips where the bits, the 0s and 1s, “mutate.” Even if we enlarge the RAM from 10 gigabytes to 10 petabytes, it’s still finite RAM. Uncompromising immutability requires infinite RAM with all memory operations completing in zero nanoseconds (for non-trivial programs).

There will always be special cases where treating data structures as mutable is much less expensive (and more practical) than juggling symbols and references to keep up the appearances of “immutable.”

Ideally, the situations where you must abandon “immutable” will become more rare as time passes.

Short-term answer: since Clojure aims at as-close-as-possible compatibility with Java (i.e. function calls work the same way Java method calls do), you need to work around the JVM’s lack of tail call optimization yourself. You get loop/recur for basic call-yourself semantics, and trampoline and similar constructs for mutually recursive functions that won’t blow the stack.
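The trampoline idea itself is language-agnostic; here is a minimal sketch of it in plain Java (class and method names are made up):

```java
import java.util.function.Supplier;

// Instead of a function calling itself directly (and growing the
// stack), each step returns either a final Long value or a Supplier
// thunk describing the next step, and an ordinary loop drives the
// computation in constant stack space.
final class Bounce {
    static long countDown(long n) {
        Object step = stepOf(n);
        while (step instanceof Supplier) {
            step = ((Supplier<?>) step).get(); // one "recursive call" per loop turn
        }
        return (Long) step;
    }

    private static Object stepOf(long n) {
        if (n == 0) return 0L;                          // finished: a plain value
        return (Supplier<Object>) () -> stepOf(n - 1);  // next step: a thunk
    }
}
```

A directly recursive version of this would overflow the stack around a few thousand calls; the trampolined version runs a million steps without trouble.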

In the long run, neither is ideal. IIRC the main objection against including TCO in the JVM is security concerns. Personally I don’t give a rat’s ass about sandboxing, since I’m targeting server environments and not running all kinds of untrusted stuff anyway, and I would be fine with just having it in “trusted” environments.

But it looks like there is some push for including TCO in the JVM natively, exactly because there are now quite a lot of languages running on it that would benefit from TCO, and there appear to be possible mechanisms to deal with the security. I don’t know what time-scale we’re looking at, or if it ever gets pushed through at all - though I think it’s likely to show up at some point.

Bah. Yet another reason to hate Lisp and all its demon-spawn.

Except for Emacs. Emacs rules.


The RAM needs to be unlimited only insofar as you keep references to old versions. Unreferenced versions can be garbage collected*, and GC is faster now than many people believe: I’ve seen some reports claiming that plain concatenation of (immutable) strings in Java is currently faster than using (mutable) StringBuilder. But in general, it does mean taking a processing hit: even very smart immutable collections incur some indirection hits compared to plain “bunch of aligned bytes” structures.

  • And this does not have to mean “copy a whole shitload of stuff and then throw the old version away when I don’t care anymore.” Again, see Clojure’s collections for examples of how to do this pretty efficiently.

As a rule of thumb, I think of those cases as “I really should use C++ with pthreads”. :slight_smile:

RE: Clojure. Yes, it’s on my list of languages that I wish to check out (along with Agda 2, Scala, Qi Lisp, and a load more).

There’s always scheme, or pretty much any recent Common Lisp implementation, if you want TCO guarantees.

Or use Erlang. It’s as simple as scheme, has much better multi-core/machine semantics than anything I’ve used yet, pretty good pattern-matching stuff, and actual, horrible syntax. :slight_smile:

No argument here.