How we use chaos theory to predict the future (and why we still get it wrong)

Do you ever shake your head and curse the weatherperson when the weather does the exact opposite of what they predicted, yet again? You planned your vacation for this weekend because it was supposed to be sunny and warm, but instead it poured rain the whole time.

Why does that keep happening? Why do they keep getting it wrong?

That’s chaos at work.

It’s not the fault of the weatherperson.

The true wonder is not how often they predict the weather wrong, but how often they get it right.

The weather is a chaotic system, and chaotic systems are unpredictable. But thanks to a fascinating thing called chaos theory, we’ve figured out how to predict them anyway.

Don’t worry, I’ll explain.

THE BUTTERFLY EFFECT

You may have heard of the ‘butterfly effect’. Yes, it’s a movie, but the concept existed long before the movie. The basic idea is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.

Sound ridiculous? Well, it is… kind of.

Of course a tornado cannot be caused only by a butterfly flapping its wings. The tiny puff of air from a pair of wings carries nowhere near the energy a tornado needs. A tornado develops due to a very complex series of events. But the idea is that if a particular butterfly had not flapped its wings at a particular time and place, the tornado would not have happened.

So a single butterfly can’t really cause a tornado, but that tiny bit of wind generated by the flapping of its wings is an important part of the process. What chaos theory is really telling us is that chaotic systems are extremely complex, and so even the tiniest miscalculation (e.g. not taking into account the single butterfly) can throw our predictions way off.

And since it’s obviously impossible to take into account every single butterfly when we create a model, we cannot expect to accurately predict a tornado!

That’s why the weatherperson is often wrong.

But then what’s the point of chaos theory if it just tells us what we can’t do?

I’m glad you asked. That’s the cool thing about chaos theory. It offers a solution to the impossible!

Let’s say there’s a big storm brewing and we want to know what it will do. If we build a computer model of the storm and run it several times, starting each run from slightly different initial conditions (because we can never measure the real ones exactly), we will get a different result each time. It’s not possible to know which result is accurate, if any.

But if we run the model many times, certain outcomes will appear more than others.

Can you already see where I’m going with this?

CONVERGENCE

If we run the model enough times, we will see similar outcomes repeating. We still won’t know exactly what the storm is going to do, but we will at least know that some outcomes are more likely than others. In other words, we can assign probabilities to them. And if we keep updating the model with new data from the storm as it moves and changes, we can keep rerunning it, and we will start to see a convergence toward certain outcomes.
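
To make this concrete, here’s a minimal sketch in Python of the ensemble idea, using the Lorenz equations as a toy ‘storm’ (they are a famous, drastically simplified weather model, and every number below is only illustrative, not a real forecasting setup): we run the model 200 times from slightly different starting measurements and simply count how the runs end up.

```python
import random

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One small Euler step of the Lorenz equations (a toy chaotic 'weather' model)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def forecast(x0, y0, z0, steps):
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x   # which 'lobe' of the attractor the run ends up on (sign of x)

random.seed(0)
measured = (1.0, 1.0, 20.0)              # our single, imperfect measurement
for steps in (500, 2000, 8000):          # short, medium and long 'forecasts'
    runs, left = 200, 0
    for _ in range(runs):
        # each ensemble member nudges the measurement by a tiny random amount
        x0, y0, z0 = (v + random.uniform(-1e-3, 1e-3) for v in measured)
        if forecast(x0, y0, z0, steps) < 0:
            left += 1
    print(f"{steps:5d} steps: {100 * left / runs:.0f}% of runs end on the left lobe")
```

If you run it, the short forecast should come out nearly unanimous, while the long forecast should split roughly down the middle, which is exactly the probability picture described above.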

So the more the storm moves, the more we know about how and where it will move. The closer a hurricane gets to Florida, the more likely that it will actually hit Florida. That part is obvious, but we also start to get a better idea of where in Florida it will hit and how hard.

Chaos theory tells us that when it comes to chaotic systems, it is never possible to be certain of our predictions. But the closer we get to a particular event, the more chance we have of predicting it accurately.

That’s why short-term weather forecasts tend to be much more accurate than longer-term ones.

Chaos theory also tells us that within chaotic systems, there are underlying principles which can be understood, and that if we could somehow know all the variables we would be able to make accurate predictions. The difficulty is in knowing all the variables.

To better understand what this means, let’s start as basic as possible and work our way up.

THE BASICS

We are talking about something called a chaotic system. What is a system? The simplest definition of a system is ‘a collection of things interacting and working together to form a more complex thing’.

If we want to make predictions about a system, we need at least two more terms: initial conditions and final conditions. Initial conditions refers to how the parts of the system (the ‘things’ that make up the more complex thing) are organized at the start of the time period we want to look at. Final conditions, as you might deduce, refers to how these parts are organized at the end of the time period we want to look at.

So the final conditions are what we want to determine, or ‘predict’, and the initial conditions are what we need to know in order to make those predictions.

We are going to take these initial conditions and feed them into a model, which we can define as a mathematical representation of a system. It consists of 3 things: an input (initial conditions), a set of equations which simulate the functioning of the system, and an output (final conditions).

For our purposes we can now define two kinds of systems: chaotic and non-chaotic. In a non-chaotic system, if we put a set of initial conditions into the model and run it, then change the initial conditions very slightly and run it again, our output will be pretty much the same. But if we do the same thing with a chaotic system, the output will be very, very different. This is what we call ‘sensitivity to initial conditions’.
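
Here’s a minimal sketch of that ‘sensitivity to initial conditions’, using the logistic map as a stand-in model (a standard textbook example, not one of the systems discussed above): change the input by one part in a million and compare the outputs.

```python
# The logistic map x -> r * x * (1 - x), iterated as a toy 'model'.
def run_model(x, r, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for r, label in ((2.9, "non-chaotic"), (3.9, "chaotic")):
    a = run_model(0.200000, r)   # initial condition
    b = run_model(0.200001, r)   # the same, changed very slightly
    print(f"r = {r} ({label}): outputs {a:.6f} and {b:.6f}")
```

In the non-chaotic case the two outputs should be practically identical; in the chaotic case they will almost certainly land nowhere near each other.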

Many non-chaotic systems are ‘solvable’.  This means that we can determine the ‘trajectory’, a line or curve describing the path the system will follow. We can also determine something called an ‘attractor’, which is a point or set of points or shape describing the equilibrium, or stable, state of the system. For example, a swinging pendulum subjected to friction will eventually end up hanging motionless in the center of its arc. The pendulum will keep changing its motion (reducing its swing height) until it stops in the center. This state of hanging motionless in the center is the stable state of the system. The center of the arc is the ‘attractor’ of the system. This is the simplest attractor, a single point.
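
If you like, here’s a quick numerical sketch of that single-point attractor: a pendulum with friction, integrated with a basic Euler method (the damping, length and timestep values are just illustrative). Wherever we start it, it ends up motionless at the center.

```python
import math

def settle(theta0, omega0=0.0, damping=0.5, g_over_l=9.81, dt=0.01, steps=5000):
    """Simulate a damped pendulum and return its final angle and angular speed."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        accel = -g_over_l * math.sin(theta) - damping * omega
        omega += accel * dt
        theta += omega * dt
    return theta, omega

for theta0 in (0.5, 2.0, -1.2):   # three different starting angles (radians)
    theta, omega = settle(theta0)
    print(f"start {theta0:+.1f} rad -> ends at angle {theta:+.4f} rad, speed {omega:+.4f} rad/s")
```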

A swinging pendulum not subjected to friction will continue to swing on the same arc forever. Its attractor cannot be a single point because the motion never stops. The entire arc represents the stable state of the system. Since the motion remains the same forever, the attractor is a closed loop called a limit cycle. It is called this because all the values of the attractor constantly repeat in a cycle. Even if the initial conditions of the pendulum are not within this loop, the pendulum will end up there after some time. We use the word ‘attractor’ because the system (in this case a pendulum) seems to be attracted to this state.

Now here’s where things get weird and complicated. In a chaotic system, the attractor is an infinitely complex geometric shape known as a fractal, which in this context is (appropriately) called a ‘strange attractor’.
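
To make ‘strange attractor’ a little more concrete, here’s a tiny sketch using the Hénon map, a classic textbook chaotic system with a fractal attractor (it has nothing to do with the weather, and a = 1.4, b = 0.3 are just the usual textbook values): the orbit never settles onto a point or a loop, yet it never leaves one bounded, intricately layered set either.

```python
def henon(x, y, a=1.4, b=0.3):
    """One step of the Hénon map, a simple chaotic system with a strange attractor."""
    return 1 - a * x * x + y, b * x

x, y = 0.0, 0.0
points = []
for i in range(10000):
    x, y = henon(x, y)
    if i >= 100:                 # skip the transient before the orbit reaches the attractor
        points.append((x, y))

xs = [p[0] for p in points]
ys = [p[1] for p in points]
print(f"{len(points)} points, all inside x in [{min(xs):.2f}, {max(xs):.2f}], "
      f"y in [{min(ys):.2f}, {max(ys):.2f}]")
```

Plot those points and you get a curved, layered shape; zoom in and the layers keep splitting into finer layers, which is exactly the kind of fractal structure the next section is about.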

WHAT IS A FRACTAL?

A fractal is an incredibly strange product of mathematical wizardry. It can be generated by a simple equation, but its existence doesn’t make any sense.

Imagine, if you will, an equilateral triangle (each side has the same length and each internal angle is identical). Now within this triangle, draw 3 lines connecting the midpoints of each side. This splits it into 4 smaller equilateral triangles; ignore the upside-down one in the middle and keep the 3 in the corners. Now in each of these 3 corner triangles, follow the same procedure, creating another 3 smaller triangles within each of them. And then do it again inside of those triangles.

And again. And again. And again. Forever.

This repeating process is called iteration, and is the process by which all fractals are created.
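
Here’s a minimal sketch of that triangle construction (the result is known as the Sierpinski triangle), just tracking how the pieces multiply with each iteration:

```python
def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def subdivide(triangle):
    """Replace a triangle with the three corner triangles made by joining midpoints."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c)]

# start with one equilateral triangle of side 1
triangles = [((0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2))]
for depth in range(6):
    print(f"iteration {depth}: {len(triangles)} triangles, each with side {1 / 2 ** depth:.4f}")
    triangles = [small for tri in triangles for small in subdivide(tri)]
```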

Let’s try another one, this time with an illustration to help us out.

Start with a straight line. Cut out the middle third of the line and replace it with two lines of the same length as the piece you cut out, which meet in the middle to form a peak (in fact an equilateral triangle with a missing side). Now do it again on each of those four lines. And again on each of those 16 lines.

And again. And again. And again. Forever.
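
And here’s the same kind of sketch for the line construction (the Koch curve), just counting segments and total length, starting from a line of length 1:

```python
# Each iteration replaces every segment with four segments, each a third as long.
segments, length = 1, 1.0
for depth in range(8):
    print(f"iteration {depth}: {segments} segments, total length {length:.3f}")
    segments *= 4
    length *= 4 / 3
```

The number of segments and the total length just keep growing; carry on forever and the length has no limit at all, which is the strange situation described next.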

No matter how much you zoom in to a fractal, you will find the same pattern in a smaller version. There are an infinite number of smaller versions. This leads to something that seems impossible: all that endlessly repeating detail means the perimeter of the shape has an infinite length, so you might expect it to occupy an infinite area. But it doesn’t. The area is finite.

The very existence of fractals is a paradox.

The thing is that fractals are not two-dimensional objects. The name fractal comes from the fact that they have fractional dimensions.

A point is a zero-dimensional object. A line is a one-dimensional object. A triangle is a two-dimensional object. A pyramid is a three-dimensional object. The fractals I just described are drawn in two-dimensional space and have more than one but less than two dimensions. Yes, really. You can also construct a fractal in three-dimensional space, and it will have more than two but less than three dimensions.
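
For the curious, here is where those fractional numbers come from: for exactly self-similar fractals like the two above, the ‘similarity dimension’ is the logarithm of how many copies the construction produces divided by the logarithm of how much each copy is shrunk.

```python
import math

def similarity_dimension(copies, shrink_factor):
    return math.log(copies) / math.log(shrink_factor)

print(f"Koch curve:          {similarity_dimension(4, 3):.3f}")   # 4 copies, each 1/3 the size
print(f"Sierpinski triangle: {similarity_dimension(3, 2):.3f}")   # 3 copies, each 1/2 the size
```

Both come out between one and two, which is exactly the ‘more than one but less than two dimensions’ claim above.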

We can create all kinds of different fractals, and they all have the same basic properties. They repeat the same shape infinitely in smaller and smaller size. And they have fractional dimensions.

If you try to picture these things in your head, you will fail because they make no sense and shouldn’t exist. How can you picture something which goes on forever and doesn’t even have the decency to have a whole number of dimensions?

Anyway, now that your mind is broken a little bit (if it’s not, go back and run through those fractal constructions again for a while and try to make sense of them), let’s return to our topic. So, chaotic systems have these ‘strange attractors’ in the form of fractals, which means that the system is headed for conditions that fall, not on a point, nor on a closed loop, but somewhere along this impossible, infinitely repeating shape.

Once a system ends up on its attractor, it will stay on the attractor. In the case of the swinging pendulum subjected to friction, it will eventually stop at the center point of its arc and stay there forever. The swinging pendulum not subjected to friction will continue to swing on its arc forever. And a chaotic system, when it reaches its strange attractor, will stay on that fractal.

And since fractals, however impossible their existence might be, clearly have an ordered structure, chaotic systems with fractal attractors must have an underlying order.

Chaos theory lets us define equations to create attractors that approximate the ordered patterns underlying chaotic systems. This in turn helps us to make predictions based on initial conditions.

However, we still run into a problem. Having an idea of the underlying pattern doesn’t change the fact that a slight error in initial conditions will grow rapidly into a much bigger error later on, so that we end up somewhere else entirely along the infinitely complex shape of the fractal.

Given the impossibility of obtaining exact initial conditions, it seems hopeless to imagine that we could ever make accurate predictions of chaotic systems over more than a short time period.

But before we give up let’s revise our understanding of chaos theory a little bit.

BUTTERFLIES DO NOT CAUSE TORNADOS

First of all, the idea of the butterfly effect came from Edward Lorenz in a 1972 talk at the American Association for the Advancement of Science. He posed the idea of the butterfly causing a tornado as a question (can it happen?) to illustrate the idea that the atmosphere behaves in an unstable way. He did not claim that a butterfly could actually contribute to the formation of a tornado.

In fact, some scientists now believe that the flap of a butterfly’s wings will not have an effect on the generation of a tornado. It’s just not powerful enough.

The success of a computer model in predicting the weather depends on the accuracy of the initial conditions that are put into the model. If they are even slightly wrong, this can have a large effect on the accuracy of the predictions. But what is the definition of slightly?

As the butterfly effect suggests, slightly could mean something really, really, really small. But now it’s thought that it could mean something slightly less small. Like the position of a cloud, or a fraction of a degree error in temperature measurement. These are still tiny things that are very difficult to take into account. But maybe not impossible.

You see, fractals have infinite complexity, and so to get our predictions right on a long time scale, we would need to define our initial conditions with infinite accuracy, and that is obviously impossible. But, at least by our current understanding of physics, fractals cannot actually exist in nature, because nature lacks infinite complexity. We certainly have many things in nature that resemble fractals, such as seashells and blood vessels and coastlines and Romanesco.

Romanesco broccoli

But none of them are actually fractals because at some level the structure stops repeating.

So if the structure underlying chaotic systems is not truly fractal, then we shouldn’t need infinite accuracy in our initial conditions. Perhaps, if we manage to describe the system with the right equations, some very high level of accuracy is enough.

CHAOTIC SYSTEMS ARE NOT COMPLETELY CHAOTIC

Second, chaos does not negate our ability to predict non-chaotic aspects of a system. A chaotic system is one in which one tiny change in initial conditions will have a massive effect later on. A non-chaotic system is one in which one tiny change in initial conditions will have little to no effect later on.  But not every aspect of a chaotic system is chaotic.

If a stone is rolling down a hill, and the hill is not perfectly smooth and the stone is not perfectly round, then chaos theory tells us we will not be able to predict where it will end up when it reaches the bottom. But we can predict other things. We know that the stone will roll down the hill instead of up, because gravity is not chaotic. We know that the stone will gain momentum, and that it will bounce and roll in certain ways according to the laws of physics, because the laws of physics are not chaotic.

We cannot predict exactly where the stone will end up, but we can predict that, unless there are any structures on the hill that could (predictably) cause the stone to stop, it will end up at the bottom of the hill.

So in fact chaos only reduces our ability to make accurate predictions, or rather it narrows down which aspects of a system we can predict accurately.

Now you understand chaos theory better than most people (which is to say, not very well)!

Chaos theory has relevance only to chaotic systems, but that entails a pretty long list. It has been applied, with varied success, to many, many things, from the movement of traffic to the stock market to social dynamics to the function of the human heart and even disorders of the brain.

Major Sources:

Chaos: A Very Short Introduction (2007) – Lenny Smith

Chaos Theory Tamed (1997) – Garnett P. Williams

Explaining Chaos (1998) – Peter Smith

Chaos: Making a New Science (1987) – James Gleick

*Note: I am listing these books in case you are interested in further reading about chaos theory. I am not endorsing or recommending them, and I will not get any money if you buy them.

