I’ve heard this a couple of times now, but I know just a little too little about programming to understand. Can you explain in layman’s language?
Pieter Lamers, immutability refers to the data or state of a program. Simple data like numbers or strings are by their nature “immutable”, meaning they can’t be changed (for example, you can’t change the value of the number 3; it will always be 3). When you join two strings together, you don’t modify either one of them. Instead you create a third string, the joined string. More complex data structures such as lists, hash tables, trees etc. are usually mutable: you can change them by adding to them, deleting from them, editing their contents and so on.
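A rough sketch of that distinction in TypeScript (the names are made up purely for illustration):

    // Strings are immutable: joining two of them creates a third string.
    const first = "im";
    const second = "mutable";
    const joined = first + second;   // "immutable"; first and second are unchanged

    // An array, by contrast, is a mutable structure: push() changes it in place.
    const numbers = [1, 2, 3];
    numbers.push(4);                 // numbers itself is now [1, 2, 3, 4]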
Object Oriented Programming is built upon the idea of mutable state. You program with “smart” objects (a combination of program code and data), and your program evolves by changing these objects’ internal states and their relationships to other objects.
Functional Programming doesn’t take this approach: program and data are not intertwined. Data is “dumb”, while the program smarts are simply functions that work on it.
These two approaches are not so different if your functions are happy to simply change data willy-nilly. In functional terminology this is called programming through “side effects”.
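For example, a small hypothetical sketch of a function that works through a side effect (the Account shape is invented for the illustration):

    interface Account { balance: number }

    // This function works through a side effect: it reaches out and changes
    // the caller's data in place instead of returning a new value.
    function deposit(account: Account, amount: number): void {
      account.balance += amount;
    }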
But Functional Programming has higher standards and seeks to minimise mutating data in an uncontrolled way. Instead of fine-grained changes to data structures, the data of the entire program should be viewed as one great big immutable lump. Every change, no matter how small, should cause the entire state to be replaced with a new state, a copy.
Hence immutability “changes everything” because every change causes a copy of the whole state.
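As a rough sketch, assuming a made-up AppState shape, every update builds a whole new state value:

    interface AppState {
      user: { name: string };
      cart: readonly string[];
    }

    // Even a tiny change (one item added to the cart) produces a brand-new state;
    // the old state is never modified, so it can still be inspected or rolled back to.
    function addToCart(state: AppState, item: string): AppState {
      return { ...state, cart: [...state.cart, item] };
    }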
You can imagine that right now a lot of programmers reading that are cringing at the idea of copying everything on even a small change, probably muttering something about “efficiency” and “scalability” under their breaths. Just take it from me that this is a solved problem: immutable data structures can be copied extremely efficiently and scalably. The trick is in creating new structures by rewiring their connections and only copying the parts that change. This means the old structure remains unaltered for as long as it’s needed, and the new structure is created with some clever rewiring and only a small amount of copying.
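Here is a tiny illustrative sketch of that rewiring trick, using a persistent singly linked list (the names are arbitrary):

    type List<T> = { readonly head: T; readonly tail: List<T> } | null;

    // Prepending allocates exactly one new node that points at the existing list,
    // so the old list is shared rather than copied.
    function prepend<T>(value: T, list: List<T>): List<T> {
      return { head: value, tail: list };
    }

    const oldList = prepend(2, prepend(3, null));  // the list 2 -> 3
    const newList = prepend(1, oldList);           // 1 -> 2 -> 3, sharing oldList
    // oldList is still exactly 2 -> 3; nothing was altered or wholesale-copied.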
Support for this is usually provided in the language itself or through a third-party library; see Immutable.js for example.
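With Immutable.js it looks roughly like this, going by its documented Map API where set() returns a new map:

    import { Map } from "immutable";

    const settings = Map({ theme: "dark", fontSize: 12 });
    const updated = settings.set("fontSize", 14);   // a new Map, structurally sharing the rest

    settings.get("fontSize");   // still 12 -- the original is untouched
    updated.get("fontSize");    // 14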
Immutable data structures have huge advantages over mutable ones in terms of clarifying the state of a program as a whole and helping people think about its evolution through time. They facilitate ease of programming, ease of comprehension, ease of coordination (for example in multi-threaded concurrency), and they eliminate whole classes of bugs.
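One example of a bug class that simply disappears is accidental aliasing, where two pieces of code unknowingly share and mutate the same object (names invented for the sketch):

    const defaults = { retries: 3, timeoutMs: 1000 };

    function makeConfig(retries: number) {
      const config = defaults;    // oops: this aliases defaults, it does not copy it
      config.retries = retries;   // silently corrupts the shared defaults as well
      return config;
    }

    // An immutable value makes the same mistake impossible:
    const frozenDefaults = Object.freeze({ retries: 3, timeoutMs: 1000 });
    // frozenDefaults.retries = 5;   // rejected: the property is read-only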
Thanks! Now I also get your pun better.
So, I spent about a month trying to make a low-level algorithm in Swift, built on top of objects, ARC, and true Unicode strings, as fast as C code that used nested structs, atomic allocs of whole rows, and raw pointers. I sped it up from 60-70 times slower down to about 7 times slower, and could probably have gotten it to three times slower, but by that point I was using raw pointers and other things that abandoned most of the safety of Swift. And it turned out that it was “fast enough” for the project a couple of levels back, when it was still only ten times as slow as C.
And programming in Swift is orders of magnitude nicer than programming in C/C++. And fast enough really is fast enough. But the performance gap between low- and high-level languages is absolutely real and not a “solved problem”.
Peter da Silva, sure, but sometimes raw speed is not the most pressing problem. If it were, there would be no scripting languages or virtual machines.
Like I said, ten times slower than C was fast enough. And Swift is already lower level than what we do most of our work in.
But there’s a difference between “fast enough” and “it’s a solved problem”. It’s not a solved problem, but the workarounds are usually good enough.
A lot of this stuff comes from Clojure, which is a Lisp mostly concerned with making multi-processor concurrency more tractable. I’m interested in it because it makes expressing business logic in web apps easier and more testable.
Solved problem may be overstating it, but I consider it a worthwhile approach. Even garbage collection is still controversial in the low-level world.
Getting rid of the reference counts for garbage collection was something I was continually fighting. They were hugely expensive, more than tripling the cost of walking a list.
I think one of the harder things for decision makers to understand is that the speed of programming will be their limiting factor more than the speed of the program. This isn’t true for all projects, of course, but there is a tendency for humans to vastly overestimate the cost of slow program execution relative to the cost of the slow programming it takes to produce the program in the first place. It’s similar to, perhaps related to, the fact that nearly everything ends up being more complicated than was expected.
Eliminating side effects helps enormously with debugging and thus with programming, but if you undervalue the importance of speed of programming, and thus overvalue the importance of speed of execution, you end up using things like Functional Programming far less than you should.
Again, there are exceptions, where program execution matters more than speed of programming, but they are not nearly as common as management decisions imply.
Another way immutability changes everything is by changing the paradigm of computer programming. It turns out immutable programming is free of a whole range of common programmer bugs.
It also turns out that disallowing mutable data structures in programs makes those programs pretty easy to optimize, to the point where they are nearly as fast as equivalent mutable programs.
So people are starting to ask: if immutable data structures don’t have the drawback of poor performance, and they eliminate that range of common programmer errors, why aren’t we using them more often?
So it is actually changing the entire software engineering landscape.
Ramin Honary, it’s a really tough sell, but I think it’s really making headway now in the front-end world. The main drivers of it seem to be coming from Facebook-related projects.
See ReasonML, which is a JavaScript-flavoured variant of OCaml: https://reasonml.github.io/
Ramin Honary, good point; difficulty in programming often leads to unclear thinking about where the bottlenecks are or how things should be done, and loses the benefit of theoretically faster execution.