Complex or Complicated?

Ultimately, mathematics isn’t meant to be the science of step-by-step procedures but the applications of critical thinking and learning how to reason and persevere in problem-solving. See, there’s a difference between complicated and complex. Flying to the moon is complicated, but it’s really a step-by-step process that many people could eventually learn with enough training. Raising kids is complex; there’s no how-to guide that fully encompasses raising kids, and you can’t just follow a step-by-step training to learn it either. It’s an all-encompassing, deep process that requires different aspects of your whole being to work together in completing the task. Too many people think of math as complicated – and some parts of it are. But the best parts of math are complex – the intricate critical thinking, analysis, ingenuity, creativity, and inspiration that drives real-life resolutions to issues. This is the type of thinking I want you to learn in college-level mathematics, not just rote problem-solving. At its heart, the best math is complex.

This was written by Stuart Jones, who teaches math at Sterling College in Kansas. I think this idea is applicable to almost all areas of science and life, and naturally also to martial arts.

Cartoon by Hugh MacLeod of Gapingvoid.

When you watch an advanced martial artist perform a technique, it might seem complicated. When I was just starting aikido, I was sure I would never remember all these precise sequences of movements, the correct footwork, the exact position of the arms, let alone perform them fast enough. After a few years of training, muscle memory took over and the movements became almost automatic. I had mastered the complicated part of aikido – but I still didn’t get the complex part.

I struggled when I started going to more advanced classes. I saw our sensei make hardly any movement at all, yet his opponent still ended up falling. This seemed like a miracle – or like cheating. I had no idea what was going on until he and several other people patiently explained to me the skillful interplay of forces and balance behind this miracle. Complex and subtle changes in the body’s relation to the opponent produced outwardly very simple but quite powerful moves.

We can see similar dynamics in many areas of math. We’ve already talked about complex numbers. We can say they’re a bit more complicated than real numbers – basically, we’re working with pairs of real numbers instead of single numbers, with their own arithmetic operations, function definitions and the like. However, we have already seen that the theory of functions of a complex variable is not only deep and complex – it actually makes many things simpler and more intuitive than they are for functions of real variables. This is the complexity we’re looking for – the kind that allows us to uncover deeper truths.

The Logistic Map

For another example, let’s look at the logistic map. It represents a very simple population growth model. In this model, a given population grows proportionally to its current size due to reproduction, but it can’t grow too big because of limited resources, so there is some limit on the total numbers, and the rate of growth decreases as the population size approaches this limit. Let’s call \(x_1\) the ratio of the current population size to the maximum size (so \(x_1\) will be between 0 and 1). Let \(x_2\) denote the same ratio in the next generation. Then \[x_2 = rx_1(1-x_1)\] where \(r\) is some parameter between 0 and 4 – you can look at it as a characteristic of the environment. This equation captures both effects – \(x_2\) grows when \(x_1\) grows, but as \(x_1\) gets closer to 1 (the population gets close to its maximum possible size) this growth gets slower and slower. Applying the same equation to \(x_2\) to get \(x_3\) for the generation after, and so on, we get the logistic map.

This is really a very simple sequence which can be calculated with just a few arithmetic operations, but it turns out it can have wildly different behaviors after many generations, depending on the parameter \(r\).
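If you’d like to play with it yourself, here is a minimal Python sketch of the iteration (the function name and parameters are my own choices, not anything standard):

```python
def logistic_map(r, x0, n):
    """Iterate x -> r * x * (1 - x) for n generations, starting from x0."""
    x = x0
    trajectory = [x]
    for _ in range(n):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory
```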

When \(r\) is smaller than 1, there is no growth at all and the population eventually dies out (the sequence converges to zero). When \(r\) is between 1 and 3, the population size eventually stabilizes (it converges to the fixed value \((r-1)/r\) and doesn’t change between generations). But more interesting stuff happens when \(r\) is bigger than 3.

For \(r\) between 3 and approximately 3.45, the sequence eventually jumps back and forth between two different values. For example, when \(r=3.2\), the population ratio oscillates between two values, one around 0.5 and the other around 0.8. For \(r\) between approximately 3.45 and approximately 3.54, the two values suddenly become four: the sequence cycles through four different values and then returns to the first of them and repeats. Notice that the interval from 3.45 to 3.54 is smaller than the previous one, from 3 to 3.45. On the next interval for \(r\), which is even smaller, the logistic map oscillates between 8 different values, then 16, 32 and so on.
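Using the sketch above, you can watch these cycles appear (the starting value 0.5 and the cutoff of 1000 generations are arbitrary choices; we just want to skip the transient):

```python
# Print the last few generations for several growth rates.
for r in (2.5, 3.2, 3.5):
    tail = logistic_map(r, 0.5, 1000)[-8:]
    print(f"r = {r}:", ", ".join(f"{x:.4f}" for x in tail))

# r = 2.5 settles on a single value (0.6),
# r = 3.2 alternates between two values (~0.513 and ~0.799),
# r = 3.5 cycles through four values.
```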

Are you confused yet? It’s hard to understand intuitively what’s going on because we’re talking not only about the eventual behavior of the logistic map sequence, but about many such sequences at once, with different values of the parameter \(r\). Let’s look at two types of graphs that reflect this.

This one is from a great post by USC professor Geoff Boeing. It shows several possible behaviors of the population for different growth rates. You can see how some curves converge to a fixed value, while the gray one just bounces around between four different values.

Morn/Wikimedia Commons

The second graph has acquired fame under the name “bifurcation diagram”. It shows just the set of eventual values, or attractor, for many possible values of \(r\). If you draw a vertical line from a value of \(r\) on the x-axis, it will tell you the eventual behavior of the population. So the “stem” of this picture represents one eventual value, which then splits into two (bifurcates), then four, and so on. We can’t really see many of these bifurcations on the graph because they start happening very close to each other. However, when \(r\) is approximately 3.56995, the bifurcations stop, and the population descends into chaos. This doesn’t mean anarchy in the herd – it means the population numbers jump among an infinite variety of different values in a seemingly random way. However, it’s not random at all; in fact, the set of possible values has a fractal structure, that is, its structure repeats itself on different scales. An attractor with a fractal structure is aptly named a strange attractor.
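You can draw a rough version of this diagram yourself. Here is a sketch using NumPy and matplotlib (the iteration counts and resolution are arbitrary choices of mine; more points give a sharper picture):

```python
import numpy as np
import matplotlib.pyplot as plt

# For each r, iterate the map long enough to settle onto the attractor,
# then plot the values visited afterwards.
r_values = np.linspace(2.5, 4.0, 2000)
x = np.full_like(r_values, 0.5)

for _ in range(500):          # discard the transient
    x = r_values * x * (1 - x)

for _ in range(200):          # record points on the attractor
    x = r_values * x * (1 - x)
    plt.plot(r_values, x, ",k", alpha=0.25)

plt.xlabel("r")
plt.ylabel("eventual population ratio")
plt.title("Bifurcation diagram of the logistic map")
plt.show()
```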

I’m trying hard not to go on about this amazing system forever, so read Geoff Boeing’s post if you want to understand the logistic map in more detail. It shows how one very simple equation can lead to things of amazing depth and complexity. This is part of the beauty of math.

Theory of Complexity

We have just seen a complex system, one that exhibits nontrivial behavior or structure with many interesting properties. What about a complex problem? When you try for a long time and still cannot solve a problem, does this mean that it’s a complex problem or that you just have to study a bit more? Clearly, the answer would be different for a high school student and a professional mathematician, but is there such a thing as the objective complexity of a problem? Can we put a number on it?

It turns out we can, at least theoretically. The invention of computers and the need to make complex calculations in reasonable time gave birth to the theory of complexity, a field on the border of computer science and mathematics that studies the amount of resources, such as computer time and memory, necessary to solve various types of problems.

As an example, let’s say we have a long list of numbers, and we want to find the largest of them. The easiest way would be to go down the list, keeping track of the largest number seen so far and comparing it with the next one, and so on. The number of operations in this algorithm is roughly proportional to the length of the list. We write in this case that the complexity of the algorithm is \(O(n)\), meaning the time needed grows linearly with the length \(n\) of the list. If another algorithm has complexity \(O(n^2)\), its running time grows proportionally to the square of the list length, so it takes significantly more time. However, it’s still much better than an algorithm that takes exponential time, for example \(O(2^n)\), because an exponential grows much faster than any polynomial. So if we can solve a problem in polynomial time, we still consider it fairly fast. The class of all problems that can be solved in polynomial time (meaning there is an algorithm of complexity \(O(n^k)\) for some \(k\)) is called “class P”.
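Here is what that linear scan looks like in Python – just a sketch (the built-in max function does the same job):

```python
def find_largest(numbers):
    """One pass down the list: O(n) comparisons."""
    largest = numbers[0]
    for x in numbers[1:]:
        if x > largest:  # one comparison per remaining element
            largest = x
    return largest

print(find_largest([3, 41, 59, 26, 53]))  # 59
```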

Now suppose we are just told that the first number on this list is actually the largest, and we want to check this. For that we’ll need to compare the first number with all the rest of them. This also takes \(O(n)\) operations, so the hint wasn’t a lot of help after all. However, there are many problems that are quite hard to solve, but if we are given a potential solution, it’s easy to verify whether it is, in fact, a solution. For example, any mathematical proof is much easier to check than to come up with in the first place. For an example related to computer science, let’s look at factorization – finding all the factors of a large number \(N\). This is an important problem in cryptography. Obviously it’s easy to check whether a given product of several factors gives us the original number, but much harder to find those factors in an efficient way. We can certainly do it by sequentially checking every number up to \(\sqrt{N}\) to see if it’s a factor, but this takes an enormous amount of time for big \(N\) and isn’t very practical. Currently we don’t know any reasonably efficient algorithm for this problem.
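Here is a Python sketch of the contrast – trial division is the naive method just mentioned, and the function names are my own:

```python
def factorize(N):
    """Find the prime factors of N by trial division up to sqrt(N).
    Correct, but hopelessly slow for the huge N used in cryptography."""
    factors, d = [], 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:
        factors.append(N)
    return factors

def verify_factorization(N, factors):
    """Checking a proposed factorization is easy: just multiply."""
    product = 1
    for f in factors:
        product *= f
    return product == N

print(factorize(1001))                          # [7, 11, 13]
print(verify_factorization(1001, [7, 11, 13]))  # True
```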

The class of all problems whose potential solutions can be verified in polynomial time is called class NP. Clearly, if we can actually find a solution in polynomial time, we have also verified it, so every problem in P also lies in NP. It seems intuitively obvious that there are problems in NP which are not in P. However, nobody so far has been able to prove this definitively. This question is usually referred to as the “P vs NP” problem. It is one of the great unsolved problems in mathematics and theoretical computer science, and there’s a million-dollar prize waiting for anybody who can settle it one way or the other. Who wants to take a shot?

While you might not become a mathematician and solve famous problems, I hope you develop an appreciation for complexity and its intricate relation to simplicity – in math, in sports, and in society in general, which is the most complex and amazing system known to us. Let’s strive to make our lives complex and interesting, but not overcomplicated.

Learn More

Complexity – a great essay by Joseph Malkevitch

