Notes on Subsystem Decomposition

Systems have states, and they evolve between states over time, perhaps deterministically, perhaps not. This is the general systems model proposed by Ashby, and it applies very broadly. But systems theorists don't just want states; they want components and relations. How might one decompose a state into the states of a set of components?
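To fix notation, here is a minimal sketch (my own, in Python) of the discrete-time, deterministic version of that model: a system is nothing but a set of possible states and a transition map iterated over time. The names are mine, not Ashby's.

    from typing import Callable, TypeVar

    State = TypeVar("State")

    def trajectory(step: Callable[[State], State], x0: State, n: int) -> list[State]:
        """Iterate the transition map n times, returning the visited states."""
        xs = [x0]
        for _ in range(n):
            xs.append(step(xs[-1]))
        return xs

    # Example: a two-state system that flips back and forth each step.
    print(trajectory(lambda s: 1 - s, 0, 5))  # [0, 1, 0, 1, 0, 1]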

To be clear, components are just subsystems. I talked about how to mathematically decompose a system into subsystems in my post on Ashby's model. I didn't say anything about how to choose one's subsystems, however. Just as one can choose whatever system-environment boundary one wants, one can choose whatever subsystem decomposition one wants. Choosing subsystems is akin to laying down coordinates; oftentimes that's literally what it is. One's choice of decomposition is usually taken for granted: most use a systems model which already has the components decomposed, and Ashby just says 'sometimes states are vectors' and moves on. But one can always do a change of coordinates and describe the same system in terms of a different set of components. I've seen very little attention paid to how exactly one should go about it: why this choice of components, and not some other? I don't think it's at all trivial. It would seem that in general there is no 'canonical' way to go about it; no particular choice of decomposition is 'forced on' us.
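A toy illustration of the coordinates analogy, with made-up numbers: the same underlying state can be carved into 'components' in more than one way, and a change of coordinates translates between the two descriptions without touching the state itself.

    import math

    # One underlying state, two equally valid decompositions into "components".
    state = (3.0, 4.0)

    # Decomposition A: the components are the x- and y-coordinates.
    comp_a = {"x": state[0], "y": state[1]}

    # Decomposition B: the components are coordinates along axes rotated by 45 degrees.
    c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
    comp_b = {"u": c * state[0] + s * state[1],
              "v": -s * state[0] + c * state[1]}

    # Both describe the same state; neither decomposition is forced on us.
    recovered = (c * comp_b["u"] - s * comp_b["v"],
                 s * comp_b["u"] + c * comp_b["v"])
    assert all(abs(r - x) < 1e-9 for r, x in zip(recovered, state))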

I sit in a room, on a couch. In front of me is a mug sitting on a cupholder, beside a lamp. Of course, all of these things are just atoms. Why do I decide that this or that grouping of atoms forms 'one thing', as distinct from the rest? Why not choose the right half of my mug plus all the carbon atoms in this couch as 'one object'? It might sound obvious, but can you explain exactly why? Can you put it into words, or even better, into math? And remember that Ashby's framework doesn't even give us atoms, only states. Atoms themselves are their own choice of decomposition, which is separate from my choice of objects. This plainly shows that there may be multiple conflicting ways to decompose a system, despite each being separately reasonable. This is especially relevant when extending the systems framework to more 'abstract' cases: the government, for example, or the economy. What are the 'components' of the economy, and why?

Perhaps the most attractive thing about Varela and Maturana's theory of autopoiesis is that it at least tries to answer this question. Stafford Beer points this out in his preface:

"People who work with systems-theoretic concepts have often drawn attention to the subjective nature of 'the system'. A system is not something presented to the observer, it is something recognized by him. One of the consequences of this is that the labelling of connections between the system and its environment as either inputs or outputs is a process of arbitrary distinction. This is not very satisfatory." (Stafford Beer, in Autopoiesis and Cognition, p67)

I'm not comfortable enough with my understanding of the theory of autopoiesis to say whether they succeeded or not. (Note that what I call 'components' is what Beer here calls 'systems', and his 'inputs'/'outputs' are just 'relations'.)

I've been thinking about this issue for a while, and I've come up with some ideas. I will reiterate that this is not a matter of finding some 'correct' decomposition strategy. It's just a practical matter, I think. Which kinds of decompositions are the most effective for explanation, the most enlightening?

Independence-based decomposition

If two subsystems are fully independent, if there is zero interaction between them, then it is obviously useful to separate them. For example, a star which recently formed a billion light-years away, such that its light hasn't even reached us yet, may be considered a separate component from us with no loss whatsoever. The inside of an idealized box may be fully causally cut off from the outside, and so it is practical to consider the inside and the outside as separate subsystems.
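In the fully independent case this can be said precisely: the joint transition map factors into a product of per-component maps, so simulating the components separately loses nothing. A small sketch with invented dynamics:

    # Two components with zero interaction: the joint dynamics factor exactly.
    def step_star(s: float) -> float:        # toy internal dynamics of the distant star
        return 0.99 * s

    def step_us(u: int) -> int:              # toy internal dynamics of everything here
        return u + 1

    def step_joint(state: tuple[float, int]) -> tuple[float, int]:
        s, u = state
        # No term couples s and u: that is what "zero interaction" means here.
        return (step_star(s), step_us(u))

    # Evolving the pair jointly or each part separately gives the same result.
    s, u = 100.0, 0
    for _ in range(10):
        s, u = step_joint((s, u))
    assert u == 10 and abs(s - 100.0 * 0.99 ** 10) < 1e-9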

More broadly, a decomposition may be practical if the interactions between components are present but weak, especially if they are weak relative to the interactions fully internal to the components. My coffee mug is not really interacting with my lamp; technically, there is some minuscule gravitational interaction, and interactions mediated by light and air pressure and whatnot. On the other hand, the interactions between the parts which make up the mug, internal to it, are very strong. If I push on the right half of the mug, the left half will be pushed as well.
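For simple dynamical models, one way to make 'weak relative to internal interactions' concrete is to write each component's dynamics as a strong internal term plus a coupling term scaled by a small parameter; setting the parameter to zero recovers the fully independent case above. The specific numbers below are invented for illustration.

    EPSILON = 1e-6   # strength of the mug-lamp coupling, tiny by assumption

    def step_exact(state: tuple[float, float]) -> tuple[float, float]:
        m, l = state
        # Strong internal dynamics plus a weak cross-coupling term.
        return (0.9 * m + EPSILON * l,
                0.9 * l + EPSILON * m)

    def step_decoupled(state: tuple[float, float]) -> tuple[float, float]:
        m, l = state
        # Treat mug and lamp as fully separate components.
        return (0.9 * m, 0.9 * l)

    # The error made by ignoring the coupling is of order EPSILON per step.
    exact = step_exact((1.0, 1.0))
    approx = step_decoupled((1.0, 1.0))
    print(max(abs(a - b) for a, b in zip(exact, approx)))  # ~1e-06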

Complexity-based decomposition

One may similarly be interested in the complexity of the interactions, rather than simply their strength. A choice of components may be useful in the case where the components have complex internal interactions, but simple external interactions. For example, a heater may have some complicated circuitry inside it, but its effect on its environment may follow some simple pattern: release heat under this or that condition. Similarly, a computer has a lot going on inside, but its interactions with its environment are carefully restricted to specific input/output channels, so that the whole thing can easily be treated like a black box. This is the kind of decoupling which software and hardware designers are most interested in, I think.
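In software terms this is just the familiar idea of a narrow interface: the heater below has stand-in 'complicated' internals, but the environment only ever sees one simple input/output channel. Everything here (class name, control rule, numbers) is hypothetical.

    class Heater:
        """Complicated on the inside, simple on the outside."""

        def __init__(self, setpoint: float):
            self._setpoint = setpoint
            self._integrator = 0.0      # internal state the environment never sees
            self._duty_cycle = 0.0

        def _control_loop(self, room_temp: float) -> None:
            # Stand-in for arbitrarily complex internal circuitry.
            error = self._setpoint - room_temp
            self._integrator += 0.1 * error
            self._duty_cycle = max(0.0, min(1.0, 0.5 * error + self._integrator))

        def heat_output(self, room_temp: float) -> float:
            """The entire external interaction: watts out, given the room temperature in."""
            self._control_loop(room_temp)
            return 1500.0 * self._duty_cycle

    # The rest of the system treats the heater as a black box: one input, one output.
    heater = Heater(setpoint=21.0)
    print(heater.heat_output(room_temp=18.0))   # 1500.0: full power while the room is cold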

Emergence-based decomposition

Sometimes, a system will have some emergent, higher-level description which allows one to coarse-grain its state space, to throw out most of its information in favor of some simpler description. Examples include the emergence of classical mechanics from quantum mechanics, Newtonian gravity from general relativity, and thermodynamics and fluid mechanics from particle motion. Fluid mechanics describes fluids, but not really solids: there are subsystems to which the higher-level model applies, and other subsystems to which it does not, both of which exist in the same system at the same time. Using this fact, it might be useful to separate a fluid from its environment as its own subsystem.
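The usual formal requirement on such a coarse-graining is that it commutes with the dynamics: evolving and then coarse-graining agrees with coarse-graining and then evolving under the higher-level rule. A toy sketch, with parity standing in for the emergent description:

    # Micro-level dynamics: a counter that increases by 3 each step.
    def micro_step(n: int) -> int:
        return n + 3

    # Coarse-graining: keep only the parity, throwing away everything else.
    def coarse(n: int) -> int:
        return n % 2

    # Macro-level dynamics, defined on the coarse states alone: parity flips each step.
    def macro_step(p: int) -> int:
        return 1 - p

    # The defining condition: coarse-grain-then-evolve equals evolve-then-coarse-grain.
    for n in range(100):
        assert macro_step(coarse(n)) == coarse(micro_step(n))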

In fact, one must do this in order for 'coarse-graining' to work, as I understand it. The state space of the system-as-a-whole, including both fluids and solids, cannot be mapped to a fluid-mechanical state space, because fluid mechanics doesn't know what to do with solids. One must first divide the system into the fluid part and the solid part, and then coarse-grain them separately.

(There is a technicality regarding whether continuum mechanics can really count as a coarse-graining, since its state space is technically of infinitely higher dimension than the particle description it is meant to compress. Substitute some better example, if you'd like.)
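The need to split first can be put in the same toy style: the coarse-graining is only defined on part of the joint state, so one factors the state into the part it covers and the part it does not, and coarse-grains only the former. The 'fluid'/'solid' split below is entirely made up, just to show the shape of the move.

    # Joint micro-state: a part the coarse-graining understands ("fluid")
    # and a part it does not ("solid").
    micro_state = {
        "fluid": [2.0, 4.0, 6.0, 8.0],    # per-cell values we only care about in aggregate
        "solid": ("rigid body", 42),      # left untouched by the coarse-graining
    }

    def coarse_grain_fluid(cells: list[float]) -> float:
        # The higher-level description keeps only an average, not the full detail.
        return sum(cells) / len(cells)

    # Decompose first, then coarse-grain only the part the emergent model covers.
    macro_state = {
        "fluid": coarse_grain_fluid(micro_state["fluid"]),
        "solid": micro_state["solid"],
    }
    print(macro_state)   # {'fluid': 5.0, 'solid': ('rigid body', 42)}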

This raises the question of overlap between emergent subsystems: subsystems A and B may not be fully distinct, each referring in part to the same elements of the system-as-a-whole, and yet each may still admit its own emergent coarse-graining. This is interesting in that, I imagine, such overlap could play a special role in the coupling between the components which one might be able to derive.

Relation to time

There is another concern one might have regarding decomposition: its relation to time. Simple models of component-interaction systems do not seem to account for the creation of new components, or the annihilation of existing ones. I'm using particle-physics terminology for a reason: accounting for the process of particle creation/annihilation (an example of this issue) required overturning particle quantum mechanics in favor of quantum field theory. Accounting for a variable number of components required reworking the model at a very deep level. One might imagine that accounting for this in the general systems case could, excitingly, bear similarly revolutionary results. (I'm also curious what one would find if one tried to develop a quantum field theory with a variable number of fields.)

The variability of the number of components must be derived from the same framework I proposed above: the particular decomposition must itself be allowed to vary. Should we just apply the same strategy at each moment? That would be the simplest answer, and it ultimately requires keeping track of the regime of applicability of a particular decomposition, so that when the system exits that regime, one re-decomposes to account for it. From this, one might be able to develop a sort of adaptable decomposition theory, which accounts for regime changes without having to look down at the un-decomposed state.
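As a rough sketch of what the 'simplest answer' might look like procedurally (every name and condition here is invented, not a worked-out theory): simulate under the current decomposition, check a validity predicate at each step, and drop back to the un-decomposed state to re-decompose whenever the predicate fails. The hoped-for adaptable theory would be whatever lets one skip that fallback step.

    from typing import Any, Callable, Tuple

    def simulate_adaptively(
        full_state: Any,
        decompose: Callable[[Any], Tuple[Any, Callable[[Any], Any]]],
        still_valid: Callable[[Any], bool],
        recompose: Callable[[Any], Any],
        n_steps: int,
    ) -> Any:
        """Evolve a system, re-decomposing whenever the current decomposition's
        regime of applicability is exited."""
        components, step_components = decompose(full_state)
        for _ in range(n_steps):
            if not still_valid(components):
                # Regime change: look down at the un-decomposed state once,
                # then choose a fresh decomposition.
                full_state = recompose(components)
                components, step_components = decompose(full_state)
            components = step_components(components)
        return recompose(components)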

I will have to think about this more. And read more books; maybe others have worked this out already.