Negative Feedback

In order to go to a desired place, you need to make the distance between you and that place zero.

Negative feedback control systems (e.g. cruise control) measure deviations from the 'ideal' or 'desired' state and try to reduce them. Interpersonally, negative feedback works much the same way, only the 'ideal state' is much more loosely defined (though it can be gauged by things like how other people treat you as a result of how you are perceived).

Feedback loops and systems work by setting a goal for their behavior, measuring responses, and tweaking their process in pursuit of that goal.
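That loop can be sketched in a few lines. This is a minimal illustration (a proportional controller, with made-up numbers), not a model of any particular system: measure the error against the goal, then correct a fraction of it each step.

```python
# A minimal negative feedback loop: measure the deviation from the goal,
# then apply a correction proportional to it.

def step_toward(position, target, gain=0.5):
    """One feedback iteration: nudge position toward target."""
    error = target - position          # measure deviation from the goal
    return position + gain * error     # tweak in pursuit of the goal

position = 0.0
target = 10.0
for _ in range(20):
    position = step_toward(position, target)

# After enough iterations the remaining error is negligible:
# each pass halves the distance to the target.
```

The `gain` controls how aggressively the system corrects; too high a gain is one classic way such a loop becomes unstable instead of settling.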

With feedback loops, a change in one part of the system (be it a calculation, an input, or a setting) always results in change elsewhere. Systems that change in response to changing stimuli are dynamic systems. These systems can alter themselves in order to maintain themselves. This is pretty meta, but the point here is that you can make a much more robust 'equilibrium' if you make a system flexible enough to adapt to changes in the environment as well.

To go even crazier, you could even make a system that can adapt in response to changing goals.

The cruise control in most cars does all of this; it works downhill or uphill (changing environment), and you can change the reference speed (changing goals).
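Here is a toy version of that cruise control, a proportional-integral (PI) controller. The car model and all numbers are invented for illustration; the point is that the same loop copes with both a hill appearing (changing environment) and the driver picking a new speed (changing goal).

```python
# Toy cruise control: a PI controller tracks a reference speed despite a
# hill partway through, and follows the reference when the driver raises it.

def simulate(steps=4000, dt=0.01):
    speed = 0.0
    integral = 0.0
    kp, ki = 2.0, 0.5                            # illustrative gains
    history = []
    for i in range(steps):
        t = i * dt
        setpoint = 25.0 if t < 20.0 else 30.0    # driver raises the goal
        slope = 1.0 if 10.0 < t < 30.0 else 0.0  # uphill stretch (environment)
        error = setpoint - speed
        integral += error * dt                   # accumulated error
        throttle = kp * error + ki * integral    # PI control law
        accel = throttle - 0.1 * speed - slope   # drag and hill push back
        speed += accel * dt
        history.append(speed)
    return history

history = simulate()
# The final speed settles near the new reference of 30, even though both
# the environment and the goal changed mid-run.
```

The integral term is what lets the controller hold the setpoint exactly despite a constant disturbance like a hill; a purely proportional controller would settle slightly below it.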

Emergent Behavior in Systems

So systems can be designed to find equilibrium, or even to find a variety of equilibria in a variety of environments. In 1950, the sociologist George Homans argued that groups and societies reach equilibria as a result of social behavior, but that the equilibria themselves are not the goals: even though an equilibrium is reached, reaching it was never the aim.

I want to examine this from a control theory engineering perspective and also from a game design one.

In control theory engineering, all systems have a response, and there are (pretty much) two types: unstable responses and stable responses.

Unstable responses break irreversibly. There are too many wolves, they eat all the rabbits, and then they die of starvation because there is no more food. The satellite crashes into the moon. Mathematically, the response shoots to infinity, negative infinity, or some limit, and the system must be reset (if that is even possible). This outcome is rarely the goal, except in deliberately explosive situations like atomic bombs.

Stable responses, on the other hand, reach an equilibrium. Some systems are stable in some situations and unstable in others (like the wolf-rabbit example). Some systems are stable for all inputs. Some systems are stable for exactly one input.
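The simplest version of this distinction is a first-order system where each step multiplies the state by a constant. This is a deliberately bare sketch: with a multiplier below one the response decays to an equilibrium, and above one it runs away.

```python
# Stable vs. unstable response in the simplest possible system:
# x_next = a * x. With |a| < 1 the state settles toward equilibrium;
# with |a| > 1 it grows without bound.

def respond(a, x0=1.0, steps=50):
    x = x0
    for _ in range(steps):
        x = a * x
    return x

stable = respond(0.9)    # decays toward the equilibrium at zero
unstable = respond(1.1)  # shoots off toward infinity
```

The wolf-rabbit case is richer because the multiplier itself depends on the state (more wolves means fewer rabbits means starving wolves), which is why that system can be stable in some regimes and unstable in others.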

So is stability not the goal? It certainly can be if the system is designed with stability in mind. That said, some systems aren't. Some systems are designed to be stable without any particular requirement for what the 'steady state' should look like. Either way, the equilibria that result are a product of the system's design (purposeful or not). The equilibria are NOT the goals of specific parts of the system, and neither is stability. The economy in World of Warcraft is a great example of this. No player intends to crash the economy, nor does any player intend to keep it stable; each intends to make as much gold as possible. They usually don't crash the economy, but when they do, the developers likely need to change something about the system to make it stable again.
