**SUPPOSE YOU ARE** at a dinner party in a fancy French restaurant. During a lull in the conversation, the person on your right – who is a friend of a friend – turns to you and asks: “What do you do for a living?”

Now, suppose you belong to the first generation of scientists who have studied for PhDs in quantum information. This new field combines aspects of computer science, maths and physics, and you find it absolutely fascinating. But launching into an explanation of how all these things come together is not exactly fodder for charming dinner party repartee. The last time you answered that question honestly, the other guests politely endured a five-minute lecture. You can do better this time. So you offer a short, to-the-point answer: “I work in theoretical physics.”

“Really! But what do you *do*, exactly?” is the reply. Experience has taught you that the most effective answer involves travelling to conferences in exotic locations. But on this occasion, your subconscious rebels. You find that your brain is filling with concepts such as quantum cellular automata and models of computation, concepts at the core of your work. They are what get you out of bed in the morning. So you blurt out something like “models of quantum computation and the consequences for theoretical physics”. From the look on your companion’s face, you know you’ve messed up again.

**QUANTUM COMPUTATION** is a new, booming field that exploits the magic of quantum theory to the benefit of computing – from faster computation to more secure data transfer. Discussing its importance for theoretical physics at large may not be a subject that fits neatly into a casual dinner-party conversation, but it is an idea that is increasingly having its day. Some physicists argue that physics should shift away from the study of matter, particles and forces, and instead focus on information. The concept of information is already central to physics, via notions such as entropy (a quantity fundamental to thermodynamics, the study of how energy moves into and out of a system), observers and measurements (central to Einstein’s theories of relativity and to quantum mechanics), and information exchange between systems.

But there is a growing opinion that, as these ideas are embraced, physics will become not only informational, but computational. This idea dates back to the 1970s, and states that the entire universe can be considered to be a giant computer. In this universe-computer, in all places and at all times, particles are treated as patterns of information moving across a vast grid of microprocessors, rather than material bodies colliding and scattering – much as a tennis ball can be thought of as a pattern of pixels moving across your TV screen, rather than a lump of rubber ricocheting off a grassy surface during the Wimbledon final. Digital physicists, for their part, are like characters in a video game who are desperately trying to understand the rules.

A striking result to come out of this 1970s work was Robin Gandy’s argument that the universe could be simulated by a computer with unlimited memory. Gandy was a British mathematician, logician and student of the brilliant Alan Turing, whose work on computability laid the foundation for the invention of the modern computer.

Gandy began his argument by noting that all physicists agreed on a few self-evident principles. The first is that the laws of physics remain the same everywhere and at all times: if they didn’t, they wouldn’t deserve to be called laws. The second is that there are causes and there are effects; all events must have their causes in the past, and causal influence can travel at most at the speed of light. That means information, too, can travel from one system to another no faster than the speed of light.

Finally, and somewhat controversially, Gandy stated that it is reasonable for physicists to believe any finite volume of space can only contain a finite amount of information.

**YOU MIGHT BE** beginning to see why dinner conversations fail to flourish with the gentle banter of theoretical physicists. But, stay with us here, for now we are getting close to the crux of the matter.

From Gandy’s third principle of finite-density information, it follows that if space were, hypothetically, divided into cubes, each cube could be described by the finite information it contained.

Moreover, Gandy’s second principle says that the state of each cube at a particular point in time, call it t+1, is determined by the state of the neighbouring cubes at time t. In other words, the state of a cube at time t+1 is obtained by applying what information theorists call a ‘local rule’ to the state of its neighbouring cubes at time t. Finally, it follows from Gandy’s first principle that this local rule is the same everywhere and at all times. So, the state of the entire universe at time t+1 can be computed by applying some fixed local rule everywhere in space.
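The logic of a local rule is easy to make concrete in code. Here is a minimal sketch of our own, purely for illustration: a one-dimensional ‘universe’ of cells, each holding a 0 or a 1, where every cell’s next state is computed from its own state and its two neighbours by one fixed rule (Wolfram’s well-known ‘rule 110’, chosen arbitrarily).

```python
def step(cells, rule=110):
    """Apply the same local rule to every cell: the new state of a cell
    depends only on its own state and its two neighbours at time t.
    The grid wraps around, so there is no special 'edge of the universe'."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single 'alive' cell in the middle of a small circular universe.
cells = [0] * 7
cells[3] = 1
for _ in range(3):
    cells = step(cells)
```

Run repeatedly, the same few lines update the whole grid, everywhere and at every time step – exactly the uniformity Gandy’s first principle demands.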

The effect of this argument is to reduce the universe to a type of computer called a cellular automaton. You may have played with a simple cellular automaton before, in the form of the popular computer-based ‘Game of Life’, developed by British mathematician John Conway.

The Game of Life comprises a 2-D grid of cells in which each cell can be either ‘alive’ or ‘dead’. Once you have decided which cells will be alive initially, the state of any given cell at a later time will be determined by that cell’s previous state plus the states of its eight immediate neighbours, according to rules that simulate the effects of underpopulation, overcrowding and reproduction (see ‘Conway’s Game of Life’).

These rules are simple, yet it has been shown that the game is universal, meaning that it can be made to compute any known classical algorithm or set of instructions – in much the same way that the simple logic gates and wires of a standard desktop PC do.

But is there any chance that the real universe we see and experience could be reduced to such a simple game?

**THE PROBLEM WITH** Gandy’s model – and the reason why the original digital physics project was doomed to failure – boils down to one thing: quantum physics. To understand why, let us return to our dinner party at the French restaurant, where the food is getting cold.

Against your better judgement, you launch into an explanation of quantum theory using the knives and forks on the table. You hear yourself saying: “Pick a system that can be one of two things – such as an item of cutlery, which can be either a knife or a fork.” You hold up your knife. “Well, in quantum theory, this piece of cutlery does not have to be one or the other. It can be both at the same time, a state known as superposition.” As you start to explain, you sense that your audience may not be grasping the full implications. You ponder the wisdom of an alternative explanation involving salt and pepper shakers, but before you can begin, your waiter arrives with the dessert menu.

The reason we don’t encounter superpositions of knives and forks on a daily basis is that as soon as you observe a quantum system, or take a measurement, it becomes ‘classical’ again. In classical physics, when you observe something you can have only a limited set of results: up or down, knife or fork, alive or dead. Likewise, in classical computing, the smallest unit of information, known as a bit, can take on one of two values, 0 or 1. Quantum physics throws the rulebook out and allows a superposition of states, a concept described by the famous Schrödinger’s cat thought experiment (see ‘The cat paradox’).

So, our smallest unit of quantum information, called a qubit (quantum bit), can only store a single bit of classical information, a 0 or a 1. In that sense, Gandy’s principle of finite information density remains compatible with quantum theory: we cannot effectively store more than a bit of information within a qubit. However, quantum physics says that before one observes a qubit, it is allowed to be in a superposition of states – and the ‘weights’ of that superposition are continuous quantities, so writing them down exactly would take an unlimited number of classical bits. Hence, quantum physics no longer allows for the case where each cube of space can be fully described by the finite information stored in it, and this is where Gandy’s argument falls down.
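The gap between a qubit’s continuous description and the single classical bit it yields on measurement can be illustrated numerically. This is a hand-rolled sketch of our own, not the API of any real quantum library:

```python
import math
import random

# A qubit is a pair of complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# The amplitudes are continuous -- describing them exactly would take
# unboundedly many classical bits -- yet a measurement yields just one
# bit, 0 or 1, with probabilities |a|^2 and |b|^2.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))   # equal superposition of 0 and 1

def measure(qubit, rng=random):
    a, b = qubit
    return 0 if rng.random() < abs(a) ** 2 else 1

# Each observation gives a single classical bit; the superposition itself
# shows up only in the statistics over many repeated runs.
bits = [measure(plus) for _ in range(10_000)]
```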

The idea of modelling the universe as a computer was resurrected in some form in the early 1980s by American theoretical physicist Richard Feynman. His idea was born out of frustration at seeing classical computers take weeks to simulate quantum physics experiments that happen faster than a blink of an eye. Intuitively, he felt that the job of simulating quantum systems could be done better by a computer that was itself a quantum system.

Like their classical counterparts, quantum computers consist of circuits. To construct quantum circuitry you need quantum wires, which are analogues of real wires carrying conventional bits (as voltages), except that they carry qubits. Classical computers are made of logic gates, which take in information (bits) and output new information (new bits). Quantum gates process qubits instead.
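To make ‘gates processing qubits’ concrete: a quantum gate is just a matrix multiplying a qubit’s pair of amplitudes. The sketch below (our own, for illustration) applies the Hadamard gate, a standard one-qubit gate that turns a definite 0 into an equal superposition – and, applied twice, undoes itself:

```python
import math

# The Hadamard gate, a 2x2 unitary matrix acting on amplitude vectors.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix into a two-amplitude qubit state."""
    a, b = state
    return (gate[0][0] * a + gate[0][1] * b,
            gate[1][0] * a + gate[1][1] * b)

zero = (1.0, 0.0)                 # the classical bit 0, written as a qubit
superposed = apply_gate(H, zero)  # equal amplitudes: both 0 and 1 at once
back = apply_gate(H, superposed)  # applying H twice returns to a definite 0
```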

**OVER THE PAST** decade or so, experimentalists in many groups around the world have successfully implemented quantum wires and simple qubit gates. The true difficulties lie in making more complex qubit gates precise, and in protecting many wires from the environment – remember, if the environment ‘observes’ the quantum wires, they become classical again.

One of us (Pablo Arrighi), along with colleagues, developed a version of Gandy’s hypothesis that accounts for the complexities of quantum mechanics. Instead of the third principle, which says that a finite volume of space contains a finite amount of information, we state that a finite volume of space can only hold a finite number of qubits.

Considering the implications of the three updated principles, we were again able to reduce the universe to a computer, a quantum version of the cellular automaton discussed earlier. A quantum cellular automaton is very much like a classical cellular automaton, except that now the cells of the grid contain qubits (see ‘How to explain the universe’). The evolution from time t to t + 1 involves applying a quantum gate operation to neighbourhoods of cells repeatedly, across space. But there are, alas, some subtleties to quantum cellular automata that cannot be explained quite so easily in a picture. For example, the cells can now be in a superposition of states, and they can also be ‘spookily’ entangled with any other cell – the state of a cell doesn’t solely depend on those next to it.
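The picture can be made concrete with a toy simulation. The sketch below is our own minimal construction, not any standard library: four qubits on a ring, with the same two-qubit gate applied first to even pairs and then to odd pairs (a ‘brickwork’ pattern). For simplicity the gate here is a plain SWAP, which merely shuttles information along the ring; a genuinely quantum gate in its place would create the superpositions and entanglement just described.

```python
# Toy 'brickwork' quantum cellular automaton on a ring of n qubits.
# The joint state of n qubits is a vector of 2**n complex amplitudes.

def apply_two_qubit_gate(state, gate, i, j, n):
    """Apply a 4x4 unitary `gate` to qubits i and j of an n-qubit state."""
    new = [0j] * (1 << n)
    for idx, amp in enumerate(state):
        if amp == 0:
            continue
        col = ((idx >> i) & 1) * 2 + ((idx >> j) & 1)
        base = idx & ~(1 << i) & ~(1 << j)
        for row in range(4):
            out = base | (((row >> 1) & 1) << i) | ((row & 1) << j)
            new[out] += gate[row][col] * amp
    return new

def qca_step(state, gate, n):
    """One step: the same local gate everywhere -- even pairs, then odd pairs."""
    for i in range(0, n, 2):                      # pairs (0,1), (2,3), ...
        state = apply_two_qubit_gate(state, gate, i, (i + 1) % n, n)
    for i in range(1, n, 2):                      # pairs (1,2), (3,0), ...
        state = apply_two_qubit_gate(state, gate, i, (i + 1) % n, n)
    return state

# SWAP exchanges the two qubits it acts on (basis order 00, 01, 10, 11).
SWAP = [[1, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]

n = 4
state = [0j] * (1 << n)
state[1 << 0] = 1 + 0j            # a single 'excitation' on qubit 0
state = qca_step(state, SWAP, n)  # the excitation hops two cells along the ring
```

Substituting any other 4×4 unitary for `SWAP` gives a different quantum cellular automaton, with the same fixed-local-rule structure.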

There is, of course, a big gap between constructing a ‘toy-model’ quantum cellular automaton and applying the lessons learned from it to the real universe. But if the updated versions of Gandy’s hypotheses hold true, and we can indeed describe the universe as a gigantic quantum cellular automaton, then studying physics becomes a game of attempting to deduce the ‘program’ of the vast quantum computer that we live in.

The conventional approach to deducing the universe’s ‘program’ is, of course, not to use cellular automata or anything like them, but to probe the ‘rules of the game’ with increasingly refined physics experiments, such as those performed using the Large Hadron Collider at the CERN particle physics lab. But perhaps there is an alternative computer science-orientated method, one that attempts to find the rules deductively.

**SO, WE COME BACK** to what quantum information theorists in physics really do for a living. We can begin this deductive process by discarding rules that are too simple, on the grounds that we live in a complex universe.

Next, we note that all sufficiently complex rules can be made to simulate each other. In other words, if the rule of a particular quantum cellular automaton is complex enough, then it can simulate all other quantum cellular automata, even when the other automata have rules that are horrendously complicated. Analogous to the universal nature of Conway’s Game of Life, a quantum cellular automaton that can perform such a simulation is said to have intrinsic universality. If we can find the simplest, intrinsically universal rule for a quantum cellular automaton, we can use it to find the simplest and most ‘natural’ (closest to what we observe in nature) way of implementing or simulating physical phenomena – like the particles and forces that make up the universe.

Of course, it remains to be seen whether all physical phenomena can be ‘encoded’ using the concepts developed here. Many difficulties lie ahead for those of us who are trying to answer the question of how nature computes itself.

The concepts of quantum cellular automata and intrinsic universality are likely to prove key in finding simple, minimal and universal ‘toy models’ to work with in attempting to answer this question. From a computer-science point of view, reaching this goal will amount to a better understanding of physics.

Yet we are obliged to conclude with a word of caution: these ideas may not be all that helpful in a restaurant conversation. Attempting to explain them may end with the other diners deciding that you are the best person to call the next time their (classical) computer breaks down. But on a more positive note, if we can find the rules, everyone will be a winner in this game of life.

*Pablo Arrighi and Jonathan Grattage are quantum-information scientists affiliated with the University of Grenoble and ENS de Lyon, France.*