I like to approach computers from the perspective of control theory. The state of a computer system can be condensed into a vector.
A computer seen as a state vector. The vector is large, but finite.
The length of this vector is not proportional to the available space in a computer, but to the significance of the stored data. For example, a NAS can have a shorter vector than a router, even if the former has orders of magnitude more storage space.
A computer's state vector has a wanted value and an actual value. The difference between these values is the (control) error. When this error is non-zero the computer is "behaving incorrectly."
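In control-theory notation this can be written as a sketch (with $r$ the wanted and $x$ the actual state vector; the post itself stays informal):

```latex
e(t) = r(t) - x(t)
\qquad \text{the computer ``behaves correctly'' exactly when } e(t) = 0
```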
When the control error increases with time the system is unstable. We can project this stability to two dimensions.
Computer stability in two dimensions. The ball is the computer.
The left computer lies in a valley and requires a lot of energy to be moved. When perturbed, the computer returns to the bottom. This computer is stable.
The right computer lies on a peak and will roll off at the slightest touch. When perturbed, the computer does not return to the peak. This computer is unstable.
Whenever a computer is first installed it tends to be stable. With time (following updates, program installations, various configurations, etc.) the system becomes less stable and less perturbation is required to make it unstable.
With time, computer stability decreases.
When stability decreases the state vector tends to grow.
The length of the computer's state vector can also be called its degrees of freedom (DoF). More DoF will make a computer unstable faster. But fewer DoF decrease the use we can get from a computer.
For example, a game console has low DoF (it only plays games) but is very stable. An office computer has higher DoF (handling email, writing documents, connecting to meetings) but is not as stable.
Ideally, we want many DoF when the system is designed (we define what we want it to do) and few DoF when the system is running.
Many DoF during design (we can place the system where we want) and few DoF during runtime (the system will stay where we put it). A significant impulse of energy is required to make the system break through (or jump over) the barricades.
Here the lecture ends and the propaganda begins.
We can achieve very high DoF by designing a system with NixOS. During design we have a set of Nix expressions that we can version control. For an initial installation we know exactly where we put our system (and we can track it). The system configuration is immutable, so the state vector is significantly reduced during runtime.
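As an illustration, the whole design lives in version-controlled expressions like the following minimal sketch (the hostname and package choices are arbitrary examples):

```nix
# configuration.nix -- a minimal, illustrative system definition.
# Every DoF we care about is fixed here, at design time.
{ config, pkgs, ... }:
{
  networking.hostName = "valley";             # example name
  services.openssh.enable = true;             # declare services, don't mutate them
  environment.systemPackages = [ pkgs.git ];  # example package set
  system.stateVersion = "24.05";
}
```

Because the built system is immutable, changing any of these values means rebuilding from the expression, which keeps the runtime state vector close to the designed one.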
With time, all systems will inevitably drift, so the barricades drawn above are difficult to realize. But we can define a subset of our filesystem (part of our state vector) that will be persistent across reboots. We do this with impermanence.
With NixOS and impermanence we have a limited subset of DoF that moves with time, which is easier to keep an eye on (databases, append-only system logs, SSH host keys, etc.). When we reboot, all other DoF reset to the designed state: the ball is moved back to within the barricades (if it ever did escape them).
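With the impermanence module this looks roughly like the sketch below (the persistent paths are examples; pick the subset that matters on your system):

```nix
# Only these paths survive a reboot; everything else resets to the design.
environment.persistence."/persist" = {
  directories = [
    "/var/log"               # append-only system logs
    "/var/lib/postgresql"    # database state
  ];
  files = [
    "/etc/ssh/ssh_host_ed25519_key"  # SSH host key
  ];
};
```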
With this approach we can keep our computers stable for longer and save ourselves the effort of convincing them to stay stable. This comes at the cost of spending more time designing the system (writing Nix expressions). From personal experience, this up-front cost is much lower than the debt that grows with an increasingly unstable computer.