To start reading Chapter 5 of *A New Kind of Science*, go here.

## Two Dimensions and Beyond

One of the central themes of this chapter is that adding more dimensions ultimately has little effect on the occurrence of significantly complex behavior.

**Cellular Automata** [p. 170]

2D and 3D CAs are shown to behave much like 1D CAs with respect to the complexity their evolution generates. That is, in general they don’t appear to produce a higher level of complexity, or to produce complexity more frequently.
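
For concreteness, here is a minimal Python sketch of one way a 2D cellular automaton can be set up: a two-color, five-neighbor (von Neumann) totalistic rule on a wrapped grid. The particular rule used (a cell becomes black when the neighborhood sum is 1 or 4) is an arbitrary illustrative choice, not a specific rule from the book.

```python
def step(grid, live_sums):
    """One step of a 2-color, 5-neighbor totalistic 2D CA on a wrapped grid.
    A cell becomes 1 iff the sum over itself and its 4 neighbors is in live_sums."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = (grid[i][j]
                 + grid[(i - 1) % n][j] + grid[(i + 1) % n][j]
                 + grid[i][(j - 1) % n] + grid[i][(j + 1) % n])
            new[i][j] = 1 if s in live_sums else 0
    return new

# Evolve from a single black cell; {1, 4} is an illustrative rule, not from the book.
N = 11
grid = [[0] * N for _ in range(N)]
grid[N // 2][N // 2] = 1
for _ in range(2):
    grid = step(grid, {1, 4})
# After two steps this rule leaves four black cells, two sites out along each axis.
```

The same loop structure extends directly to 3D by adding the two extra neighbors along the third axis.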

**Turing Machines** [p. 184]

Turing machines are generalized into higher dimensions by allowing the head of the Turing machine to move around on the higher-dimensional grid rather than just going back and forth on a one-dimensional tape.

For Turing machines, complexity is no more common in higher dimensions than in one.
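
As a concrete instance of the generalization, here is a sketch of Langton’s ant, a well-known two-color, four-direction 2D Turing machine (chosen for illustration; it is not a machine singled out in the book):

```python
# A 2D Turing machine: the head moves on a grid instead of a tape.
# On a white cell: turn right, flip the cell to black, move forward.
# On a black cell: turn left, flip the cell to white, move forward.

DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # up, right, down, left

def run(steps):
    black = set()              # sparse grid: positions of black cells
    pos, heading = (0, 0), 0   # start at the origin, facing "up"
    for _ in range(steps):
        if pos in black:
            heading = (heading - 1) % 4   # black: turn left
            black.remove(pos)
        else:
            heading = (heading + 1) % 4   # white: turn right
            black.add(pos)
        dx, dy = DIRS[heading]
        pos = (pos[0] + dx, pos[1] + dy)
    return black, pos

cells, head = run(5)
```

Run long enough, this machine goes through a long seemingly random phase before settling into a repetitive "highway", a mix of the behaviors the chapter describes.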

**Substitution Systems and Fractals** [p. 187]

By putting substitution systems onto a 2D grid, we begin to see fractal geometries emerge.

In order to get complexity (rather than just simple nested patterns), one needs a way for the replacement of a given element to depend on its neighbors. It isn’t obvious, however, how to generalize a sequential substitution system, in which just a single block of elements is replaced at each step, to higher dimensions: to do so one would have to set up a way to scan the elements in 2D, which would in effect reduce it to a 1D system.
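
The neighbor-independent 2D case, which yields the nested patterns, is easy to sketch: at every step each cell is replaced by a block of cells determined only by its own color. In the minimal Python version below, black splits into three blacks and a white, giving a Sierpinski-like pattern; the specific replacement rule is an illustrative choice.

```python
def substitute(grid, rule):
    """Replace every cell by the 2x2 block given by rule[color]."""
    out = []
    for row in grid:
        top, bottom = [], []
        for cell in row:
            block = rule[cell]
            top += block[0]
            bottom += block[1]
        out += [top, bottom]
    return out

# Black (1) -> three blacks and a white; white (0) -> all white.
RULE = {1: [[1, 1], [1, 0]], 0: [[0, 0], [0, 0]]}

grid = [[1]]
for _ in range(4):
    grid = substitute(grid, RULE)
# grid is now 16x16, and the count of black cells triples each step: 3**4 = 81.
```

Because each cell’s replacement ignores its neighbors, the result is always exactly nested, never more complex, which is why neighbor dependence is needed for complexity.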

**Network Systems** [p. 193]

This is a section near and dear to my heart, because my personal research interest in NKS has thus far focused on investigating evolutions of networks with CA rules mapped onto them.

A network topology allows you to break away from the grid. Though my personal work has focused on running CAs on static network topologies, this section looks at the complexity of dynamic networks: a certain number of nodes and connections, plus rules for how the connections are remade.

A *network system*, in this section, is therefore a collection of nodes with various connections, together with a set of rules for how those connections change from one step to the next.

We’re going to look at how the number of allowed connections per node relates to the complexity of the resulting network configurations. A node may connect to other nodes as well as to itself, and connections are directed.

We start with two connections per node. As the number of nodes increases, the number of possible networks grows rapidly. By laying the networks out in certain ways, we can reproduce grids in various dimensions, so there is nothing inherently one-dimensional about networks.

How does one set up the evolution of a network system? The rule must determine how the connections coming out of each node are rerouted, based on the local structure of the network around that node. In other words, we survey the surroundings and, based on what we see, form particular connections.
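
A toy version of this setup can be sketched as follows: each node has two out-connections, and at each step the rule reroutes every connection to the node reached by following a short path from the current targets. The particular rewiring rule and the starting ring are illustrative assumptions, not rules pictured in the book.

```python
def evolve(a, b):
    """One step of a toy network system. Node i's two out-connections are
    a[i] and b[i]; the rule reroutes each to the endpoint of a short path
    in the current network (an illustrative rule, not one from the book)."""
    n = len(a)
    new_a = [b[a[i]] for i in range(n)]   # follow connection 1, then connection 2
    new_b = [a[a[i]] for i in range(n)]   # follow connection 1 twice
    return new_a, new_b

# Start from a ring of 5 nodes: connection 1 goes clockwise, connection 2 counter-clockwise.
n = 5
a = [(i + 1) % n for i in range(n)]
b = [(i - 1) % n for i in range(n)]
for _ in range(3):
    a, b = evolve(a, b)
```

Since each new connection depends only on paths of length two, this is a "nearest-neighbor" rule of the kind the text says gives only simple, repetitive structures; looking further out is what opens the door to complexity.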

One can imagine that without animation of some sort available, it’s tricky to visualize this process. And in the book there’s a rather confusing convention for doing so, which I think in this day and age of **Manipulate** can be disposed of, unless one is really fond of comparing static pictures side by side.

As long as only nearest-neighbor nodes are examined, we see simple, repetitive structures. However, as soon as we look just a bit farther out, we get much more complicated behavior.

As we’ll see later, understanding these network systems may be tantamount to understanding the basic structure of space and our universe itself!

**Multiway systems** [p. 204]

Wolfram notes at the beginning of this section that all the systems set up so far in *A New Kind of Science* have simple structures with respect to time: each evolves from one state to the next.

Multiway systems instead maintain a whole collection of possible states at each step, keeping every distinct sequence that results. Depending on the structure of the possible states, fluctuations in growth can look random, though growth usually goes one of two ways: the system either dies out or grows exponentially fast. Slow growth is rare, and some amount of repetition seems always to take over in the end.

Since, by the nature of a multiway system, any particular sequence must always lead to exactly the same behavior whenever it occurs, one can represent an entire system with a short sequence, using loops to indicate repetition. One can also rewrite them in other ways, so that multiway systems in effect build up their own patterns of connections in time, and those patterns can be quite complicated.
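
The basic setup is easy to sketch with string rewriting: apply every rule at every possible position in every current string, and keep all distinct results. The rule set below (`A → AB`, `B → A`) is an arbitrary illustrative choice, not a system taken from the book.

```python
def multiway_step(states, rules):
    """One multiway step: apply each rule at every position of every string,
    collecting all distinct resulting strings."""
    out = set()
    for s in states:
        for lhs, rhs in rules:
            for i in range(len(s)):
                if s.startswith(lhs, i):
                    out.add(s[:i] + rhs + s[i + len(lhs):])
    return out

rules = [("A", "AB"), ("B", "A")]
states = {"A"}
sizes = []
for _ in range(5):
    states = multiway_step(states, rules)
    sizes.append(len(states))
# sizes is [1, 2, 3, 5, 8]: the number of distinct states follows the
# Fibonacci numbers for this rule, i.e. it grows exponentially.
```

Exponential growth like this is one of the two typical outcomes the section describes, the other being that the system dies out.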

**Systems Based On Constraints** [p. 210]

This section discusses systems that, rather than being set up with rules of step-by-step evolution, are instead just given constraints to satisfy.

It’s discovered that in one-dimensional systems there are no simple sets of constraints that force complex patterns. In fact, in two dimensions, any set of two-color, four-nearest-neighbor constraints can be satisfied by one of just 171 patterns.

With more complex neighborhoods, one can force complexity, but it’s very, very rare.
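
To make the idea of a constraint-based system concrete, here is a small Python sketch of the 1D, two-color, nearest-neighbor case: a constraint is a set of allowed (left, cell, right) triples, and we search by brute force for cyclic patterns that satisfy it everywhere. The particular constraint used, "every cell has at least one black neighbor", is a hypothetical example, not one from the book.

```python
from itertools import product

def satisfies(pattern, allowed):
    """Check whether a cyclic 1D pattern satisfies a nearest-neighbor
    constraint given as a set of allowed (left, cell, right) triples."""
    n = len(pattern)
    return all(
        (pattern[i - 1], pattern[i], pattern[(i + 1) % n]) in allowed
        for i in range(n)
    )

# Hypothetical constraint: exclude triples whose left and right are both white.
allowed = {t for t in product((0, 1), repeat=3) if t[0] == 1 or t[2] == 1}

# Brute-force search over all cyclic patterns up to length 6.
sols = [p for k in range(1, 7)
        for p in product((0, 1), repeat=k) if satisfies(p, allowed)]
```

Since only short cyclic patterns are enumerated, everything this search finds is necessarily repetitive; the section’s point is that in 1D even exhaustive searches turn up no simple constraint whose solutions are forced to be complex.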