Cognitive Science

Dynamical Systems Theory

Introduction

Dynamicism is, in short, the position that Dynamical Systems Theory (DST) is the best way to explain the behaviour of cognitive systems. Its proponents contend that the alternatives are needlessly complex: classicism requires some quite bulky representational structures, and connectionist networks are really just dynamical systems anyway. I will argue that DST, rather than presenting a challenge, adds another useful tool to the arsenal of the cognitive scientist but ultimately does not provide a satisfactory explanation of cognition. First, however, I set out what DST is, what it means to say that a system uses representations, what a connectionist network is, and what it means to say that a system is computational. I then proceed to a discussion of my thesis, showing what it is that the dynamicists are actually arguing for and against.

Dynamical Systems

DST is a mathematical tool employed to describe the behaviour of a coupled system in terms of how its trajectory through a state space evolves over time. Each part of the system is defined by a quantitative variable, and those variables change over time in an interdependent fashion: the change in any one variable influences the change in every other variable. This interdependence is captured by the system's rule of evolution. (Chemero 2001)

The coupling of the system lies in the close interdependence of its parts, and of the system with its environment. Each variable simultaneously determines, and is determined by, the values of the other variables. It is this complex interaction that gives rise to the state space trajectory, and it is this interaction that the rule of evolution defines. The state space trajectory is a path through an N-dimensional space, where N is the number of interdependent variables in the system. (van Gelder 1995)
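
To make the idea concrete, here is a minimal sketch in Python (my own illustration, not drawn from any of the cited papers) of a two-variable coupled system. The pair of update equations plays the role of the rule of evolution, and the sequence of (x, y) points is the trajectory through a two-dimensional state space.

```python
# A minimal coupled dynamical system: two variables whose rates of
# change each depend on the current value of the other. The particular
# update rules are arbitrary, chosen only to illustrate coupling.

def rule_of_evolution(x, y):
    """Return (dx/dt, dy/dt) for the current state."""
    dx = -0.5 * x + 1.2 * y   # x's change depends on both variables...
    dy = -1.2 * x - 0.5 * y   # ...and so does y's
    return dx, dy

def trajectory(x0, y0, dt=0.01, steps=1000):
    """Euler-integrate the system, returning its state-space trajectory."""
    states = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        dx, dy = rule_of_evolution(x, y)
        x, y = x + dx * dt, y + dy * dt
        states.append((x, y))
    return states

# Each point is a position in the two-dimensional state space; the list
# as a whole is the trajectory, which here spirals in toward the origin.
path = trajectory(1.0, 0.0)
print(path[0], path[-1])
```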

The model system introduced by van Gelder is the Watt governor. Engineers in the late 1700s needed a device to ensure constant speed in a steam engine; the problem was that as the load on the system changed, so did the engine speed. Watt devised a mechanism consisting of a spindle connected to both the engine's flywheel and a steam valve. The spindle carried two arms with weighted balls on their ends. As the flywheel spun, centrifugal force caused the arms to rise, and the arms were connected to the valve in such a way as to open or close it depending on their direction of travel. As the valve closed, less steam entered and the speed of the flywheel fell; as the valve opened, more steam entered and the speed of the flywheel increased. There are three distinct parts to the governor, each definable by a quantitative variable, and as any one variable changes, in virtue of the construction of the system, every other variable changes in accordance with the system's rule of evolution. Van Gelder thinks that cognitive systems are much more like the Watt governor than like the more traditional computational systems described in the Computation section below. (van Gelder 1995; see also Clark and Toribio 1994, Clark 1997, Bechtel 1998, Bermudez 2014, and Shapiro 2011 for additional accounts of the governor)
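
Van Gelder presents a differential equation for the arm angle; the sketch below follows the same general form, but the constants and the simple engine/throttle coupling are invented for illustration and are not calibrated to his model or to any real engine.

```python
import math

# A rough simulation of the Watt governor as a coupled dynamical system.
# The arm-angle equation follows the general form discussed by van
# Gelder (1995); the constants and the engine/throttle coupling are
# invented for illustration.

g, l, r, n = 9.8, 1.0, 2.0, 1.0   # gravity, arm length, friction, gearing

def step(theta, arm_velocity, engine_speed, load, dt=0.001):
    # Arm acceleration: the centrifugal term competes with gravity,
    # damped by friction at the hinges.
    acc = (n * engine_speed) ** 2 * math.cos(theta) * math.sin(theta) \
          - (g / l) * math.sin(theta) - r * arm_velocity
    arm_velocity += acc * dt
    theta += arm_velocity * dt
    # A higher arm angle closes the valve, reducing the steam driving
    # the flywheel, while the load constantly drags the speed down.
    throttle = min(1.0, max(0.0, 1.0 - theta / (math.pi / 2)))
    engine_speed += (5.0 * throttle - load) * dt
    return theta, arm_velocity, engine_speed

theta, arm_velocity, speed = 0.3, 0.0, 2.0
for _ in range(20000):
    theta, arm_velocity, speed = step(theta, arm_velocity, speed, load=2.0)
print(round(theta, 3), round(speed, 3))  # state after 20 simulated seconds
```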

Representation

For some object to represent another it must 'stand in' for that object in some process. The clearest and most useful definition for this context comes from Haugeland, via Clark and Toribio (1994); it defines the conditions under which a system can be said to be representational:

“(1) It must coordinate its behaviours with environmental features which are not always ‘reliably present in the system’ via some signal.

(2) It copes with such cases by having something else (other than the signal directly received from the environment) ‘stand in’ and guide behaviour in its stead.

(3) That ‘something else’ is part of a general representational scheme which allows the ‘standing in’ to occur systematically and allows for a variety of related representational states.”

Clark and Toribio go on to say that condition (2) identifies as a representation anything that stands in for something else; that is, anything that can act as, or fulfil the role of, something else in the system is a representation. Condition (3) I take to refer to something like the idea of function described by Bechtel (1998). According to Bechtel, a stand-in must also have the function of standing in; without that function there is no representation. There must be some other process that makes use of the information the stand-in carries precisely because it represents that information.

Connectionist Networks

A connectionist network, an abstracted and idealised way of talking about real neural networks, is a group of interconnected model neurons serving some specific purpose. Each neuron has a quantifiable activation level, and each of its connections has a quantifiable weight; the connection weight is the strength of the signal transmitted between neurons. The activation level of a neuron is a function of all of its weighted inputs and of some threshold, which sets the minimum input the neuron must receive before it will activate. The neurons are conceptualised as arranged in layers. The first layer is the input layer, which receives some stimulus either from another network or from some sort of sensory organ or device. The final layer is the output layer, which sends its signal either to another network or to some kind of motor control module. The layers in between are referred to as the hidden layers. (O'Brien and Opie 2006)
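
A minimal sketch of these ideas in Python (illustrative only; real connectionist models typically use smooth activation functions such as the sigmoid rather than the hard threshold used here):

```python
# A single unit: activation is a function of the weighted sum of its
# inputs and a threshold.

def unit_activation(inputs, weights, threshold):
    """Fire (1.0) if the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

def layer(inputs, weight_rows, thresholds):
    """Compute every unit in a layer from the same input vector."""
    return [unit_activation(inputs, w, t)
            for w, t in zip(weight_rows, thresholds)]

# Input layer -> one hidden layer -> output layer.
stimulus = [0.9, 0.1, 0.4]                        # e.g. from a sensory device
hidden = layer(stimulus, [[0.5, -0.2, 0.8],
                          [0.3, 0.9, -0.4]], thresholds=[0.4, 0.2])
output = layer(hidden, [[1.0, -1.0]], thresholds=[0.5])  # e.g. to a motor module
print(hidden, output)
```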

A representationalist says that these networks are analogue computational devices: they take a pattern of input, transform that pattern across one or more hidden layers, and at some point output the resulting pattern. The kind of representation involved is referred to as second-order resemblance; this resemblance is not one of physical properties but one of relations. The clearest example is a model system designed to detect the colour of light hitting a group of photoreceptors. Sixty-one units in the input layer encode a pattern of activations which then determine, as described above, the activation values of a single three-unit hidden layer. When the activation space of that hidden layer is mapped in three dimensions, there is a clear structural resemblance to the represented domain. O'Brien and Opie say: “One system structurally resembles another when the physical relations among the objects that comprise the first preserve some aspects of the relational organisation of the objects that comprise the second.” (2006, original emphasis)
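
One way to make second-order resemblance concrete is to check whether the distances between points in a hidden layer's activation space preserve the distance relations among the represented items. The toy 'colours' and encoder below are my own invention, not O'Brien and Opie's actual network:

```python
import math, itertools

# Toy illustration of second-order resemblance: what matters is whether
# RELATIONS among hidden-layer activation points mirror relations among
# the represented items, not whether the points physically resemble them.

colours = {"red": (1.0, 0.0), "orange": (0.8, 0.2),
           "yellow": (0.5, 0.5), "green": (0.0, 1.0)}

def encode(c):
    """Stand-in for a trained hidden layer: a structure-preserving map."""
    x, y = c
    return (0.5 * x + 0.1, 0.5 * y + 0.1, 0.2 * (x + y))

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# If the orderings of the two distance columns agree, the hidden layer
# structurally resembles the represented domain in the relevant sense.
for (n1, c1), (n2, c2) in itertools.combinations(colours.items(), 2):
    print(f"{n1}-{n2}: domain {dist(c1, c2):.2f}, "
          f"hidden {dist(encode(c1), encode(c2)):.2f}")
```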


Computation

Computation, from the classical cognitive science perspective, is the rule-governed manipulation of symbols. When it comes to cognition, these symbols represent information about the agent's environment in such a way that the agent can make use of that information in a semantically appropriate manner. To be semantically appropriate is to have some meaning for the interpreting agent. By itself the symbol '5' has no meaning; in answer to the question 'if I have two apples in one hand and three apples in the other, how many apples do I have?', however, the symbol means 'five apples'. The agent performing the computation need not know what an apple is or what 5 means, so long as its rules governing symbol transformations respect the fact that 2 + 3 = 5. (Rescorla 2015)
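
A toy sketch of what rule-governed symbol manipulation amounts to (my own illustration): the routine below adds numeral symbols by blindly applying successor and predecessor rules, without any grasp of apples or quantities.

```python
# Rule-governed symbol manipulation: single-digit addition performed by
# blindly rewriting symbols. Nothing here knows what '5' means or what
# an apple is; any meaning lives with the interpreting agent.

SUCCESSOR = {"0": "1", "1": "2", "2": "3", "3": "4", "4": "5",
             "5": "6", "6": "7", "7": "8", "8": "9"}
PREDECESSOR = {v: k for k, v in SUCCESSOR.items()}

def add(a: str, b: str) -> str:
    """Add numeral symbols (sums up to '9') by pure rule application."""
    while b != "0":          # rule: repeat until the right symbol is '0'
        a = SUCCESSOR[a]     # rule: replace left symbol with its successor
        b = PREDECESSOR[b]   # rule: replace right symbol with its predecessor
    return a

print(add("2", "3"))  # '5' -- the rules respect the fact that 2 + 3 = 5
```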

Van Gelder has a bit more to say about computation. He describes a computational approach to the governor problem and, based on that description (adapted below), he lists four qualities of a computational system: representation, computation, sequential and cyclic operation, and homuncularity. (van Gelder 1995)

The computational governor exhibits all four qualities, proceeding through the following steps:

  1. Measure engine speed (Sa).
  2. Compare Sa to the desired speed (Sr).
  3. If Sa = Sr, return to step 1. Otherwise:
    (a) Measure steam pressure (Ps).
    (b) Calculate the required change in Ps.
    (c) Calculate the throttle adjustment needed to effect the change in 3(b).
  4. Make the adjustment calculated in 3(c).
  5. Return to step 1.

According to the earlier account of representation, the variables Sa, Sr, and so on are standing in for parts of the system: they are digital representations of engine speed and the like, and they do their standing in as part of a larger representational scheme. By computation van Gelder means calculation such as that at 3(b) and 3(c). The operation is sequential in that it must follow the steps precisely in the order described; any step taken out of order, or not taken, results in an error. It is cyclic in that it continuously loops through the process. It is homuncular in that the system and the process are both broken down into smaller parts, each of which must achieve its task and pass the information on to the next part or to some central control unit. (van Gelder 1995)
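
Rendered as code, the procedure might look like the sketch below. This is my own rendering of the steps above; the sensor and actuator functions are hypothetical stubs standing in for real hardware, and the control gains are invented.

```python
# A sketch of the computational governor described above (my own
# rendering of van Gelder's steps).

S_R = 100.0  # the desired speed Sr, stored as a discrete representation

def measure_engine_speed():      # stub: would read the flywheel sensor
    return 97.5

def measure_steam_pressure():    # stub: would read the boiler gauge
    return 40.0

def adjust_throttle(amount):     # stub: would move the valve
    print(f"throttle adjustment: {amount:+.3f}")

def governor(cycles):
    for _ in range(cycles):                    # cyclic operation
        s_a = measure_engine_speed()           # 1. measure Sa
        if s_a == S_R:                         # 2./3. compare Sa with Sr
            continue                           #     equal: back to step 1
        p_s = measure_steam_pressure()         # 3(a). measure Ps
        delta_p = (S_R - s_a) * 0.5            # 3(b). required change in Ps
        adjustment = delta_p / p_s             # 3(c). throttle adjustment
        adjust_throttle(adjustment)            # 4. make the change
                                               # 5. the loop returns to step 1

governor(cycles=3)
```

Each line executes in strict sequence, each variable is a discrete stand-in for a physical quantity, and each stub is a little homunculus charged with its own sub-task.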

Analysis

So what does all this mean for dynamicism as regards cognition? To answer that we must consider, first, whether DST undermines the position of classical cognitive science; then, whether describing connectionist networks as computational is appropriate; and finally, what it means to say that DST tells us 'what' but not 'how'. These considerations should lead us to the conclusion outlined in the introduction. As a preliminary, though, it is useful to state exactly what the dynamicist argument is, according to Shapiro (2011):

‘1. Representations are stand-ins for actual objects.

2. An agent is in continuous contact with the objects with which it needs to interact.

3. If an agent is in continuous contact with the objects with which it needs to interact, then it does not need stand-ins for those objects.

Therefore, an agent has no need for representational states that stand in for actual objects.’

Undermining classicism is no simple task, but DST certainly seems up to the challenge. Consider the accounts of the dynamical governor and the computational governor: on the one hand, a tightly coupled system in which each part influences the behaviour of every other part and thereby the whole system; on the other, a sequential procedure of discrete measurements and calculations. Now consider that contrast in the context of the connectionist network previously described. If neuroscience is correct about how neural networks are physically set up and how they function, then DST is absolutely the better tool for understanding and describing cognition.

The issue that dynamicism takes with computational accounts is that, in order for them to function, there must be an explicit construction of the relevant components of the environment internal to the system. The larger the environment, the larger the construction, and the more it gets in the way. We can simply do away with such constructions and use DST to show that the system behaves the way it does not in virtue of how it represents and computes information about its environment, but in virtue of the way in which it is constructed and coupled, both internally and to its environment. (Brooks 1991)

So in terms of the ratio between explanatory power and explanatory weight, between what the theory can predict and how bulky the explanation is, DST seems to win this one.

Connectionist networks, according to van Gelder, are neither representational nor computational; he denies the concept of analogue computation altogether. To van Gelder a connectionist network is nothing more than a dynamical system, which is to say that the best way to describe it is using DST. When connectionists model their networks they use equations to show how the neurons interact, and those equations are all that is needed to understand what is happening; by employing the tools of DST we can model the behaviour of the system as a whole. Given the accounts above of dynamical systems and of connectionist networks, it should be reasonably clear that a connectionist network literally is a dynamical system. The neurons are very closely coupled, in that the firing rate of one neuron, weighted by connection strength, determines the firing rate of the next, and so on. The hidden layers are coupled to the environment via the input and output layers. (van Gelder 1995)

But connectionists disagree, and they do so with a quite persuasive argument, which I will call the 'lack of stimulus' argument. It distinguishes between two kinds of cognitive task and labels one of them 'representation hungry' (Clark and Toribio 1994). The distinction runs along the lines of on-line versus off-line behaviours. On-line behaviours are those where a dynamicist can easily say that the agent is coupled with the environment in such a way as to be constantly receiving the stimuli needed to generate relevant behaviour. This scenario is best illustrated by Shapiro with an example involving describing the physical characteristics of an apple. While you are holding the apple you have all the stimuli you need to fully describe its externally observable qualities, such as colour, shape, and weight. So what happens if you want to describe an apple while you are off-line, by which I mean while you are receiving no stimuli from the environment in relation to the apple? Shapiro presents another scenario here, in which the apple has been dropped on the ground behind you. The system has become decoupled: there is no stimulus you are receiving that can be said to come from the apple. Yet it is clear that we can still describe the apple, for example from our memories of apples in general and of that specific apple. This implies that we are in some way capable of representing general concepts, like apples, and specific objects, like this apple. It is this, Shapiro says, that tells us that, at the very least when there is a lack of stimulus, our minds must be representing. (Shapiro 2011)

The final piece of the puzzle is whether the Watt governor itself is representational. The argument comes down to whether the angle of the spindle arms has the function of standing in for the speed of the flywheel. Based on van Gelder's account of the operation of the system, I think that to call the angle a stand-in is the wrong kind of abstraction. There is a sense in which you can describe the system in those terms, and a sense in which you can use the information supposedly represented there to make useful predictions about the system's behaviour. But it is equally useful to apply DST and make the same behavioural predictions sans representation. The obvious function of the arms is to move the lever which actuates the valve. The dynamical account tells us that this is their function in virtue of the construction of the system, but it does not tell us exactly which physical features of the system enable this function. It tells us what, but not how. The variables that constitute the dynamical equation need to be mapped onto components of the system in order to achieve a full account. The representational account tells us the function holds in virtue of represented information; that representation, and the transformations it undergoes, provide the underlying mechanistic account.

'What but not how' refers to the way in which DST merely describes the system; it does not explain it. This conception leads to the claim that dynamicism is a return to behaviourism: it cares only that input is received and behaviour is output, and tells us nothing of what goes on in between. By abstracting the inner workings of the system to mere quantifiable variables we lose much of the explanatory power of connectionism. This is what Bechtel (1998) means when he says that DST provides a covering law, not a mechanistic explanation.

Conclusion

DST is a powerful modelling tool that has earned a rightful place in the arsenal of the cognitive scientist. It is very good at providing clear accounts of behavioural trajectories and likely behavioural patterns, and at showing how stimuli play an important role in cognition. However, given the lack of an adequate response to the representation-hungry 'lack of stimulus' argument, and given that DST ultimately does not explain how the system itself functions, I think it unlikely that DST will reign supreme.

Bibliography

Bechtel, W. (1998). "Representations and cognitive explanations: Assessing the dynamicist's challenge in cognitive science", Cognitive Science, 22, 295-318

Bermudez, J. L. (2014). Cognitive Science: An Introduction to the Science of the Mind, Ch. 13

Brooks, R. A. (1991). "Intelligence without representation", Artificial Intelligence, 47, 139-159

Chemero, A. (2001). "Dynamical explanation and mental representations", Trends in Cognitive Sciences, 5, 141-142

Clark, A. (1997). "The dynamical challenge", Cognitive Science, 21, 461-481

Clark, A. and Toribio, J. (1994). "Doing without representing", Synthese, 101, 401-431

O'Brien, G. and Opie, J. (2006). "How do connectionist networks compute?", Cognitive Processing, 7(1), 30-41

Rescorla, M. (2015). "The Computational Theory of Mind", The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Zalta, E. N. (Ed.), URL = <http://plato.stanford.edu/archives/win2015/entries/computational-mind/>

Shapiro, L. (2011). Embodied Cognition, Ch. 5

van Gelder, T. (1995). "What might cognition be, if not computation?", Journal of Philosophy, 92, 345-381
