You can go on: an even more energetic and advanced civilisation could change the nature of the galaxy in which it lives, and you could imagine going on almost to what I call Class Omega, a type of civilisation that has the ability to somehow manipulate the nature of the expansion of the universe or something equally grand.

It is more interesting, I think, to do the opposite of what Kardashev did. The nature of the progress we have seen in recent years is not, as Kardashev suggested, to manipulate bigger and bigger things.


It is completely the opposite: it is to manipulate smaller and smaller things. So we can think of a measure of progress which looks at the nature of small-scale manipulators. Our predecessors, running around on the African savannah, could manipulate things about the size of their own bodies. They could fell trees, they could move rocks, they could build shelters. Before you have sophisticated tools, the scale of things that you can move around is limited to things of about your own weight, or the weight of several of your friends and yourself, and that requires a certain characteristic amount of energy.

As you move on to much more modern times, you can see we have gone through stages of being able to move very large amounts of material, to make buildings, to make bridges and so on; but, more recently, we have seen a move towards smaller and smaller scales: the ability to carry out surgery, such as transplant surgery, on organs in our own bodies; and then, smaller still, to start to study and manipulate the genes that form part of our genetic makeup. In the world of the physicist, we can go smaller still.

We can manipulate, and start to do engineering, on the scale of molecules and of single atoms. This so-called nanotechnological revolution, which we are just entering, really shows you what progress may look like in the future, as people try to think of ways in which engineering might move to even smaller constituents of nature than atoms: perhaps to protons, perhaps to electrons, perhaps to other elementary particles.

We can envisage that progress is miniaturisation, and very, very advanced civilisations may well be extraordinarily small. It really rather pays to become small: you generate very little pollution, you use very little energy, you produce very little waste heat. Ultimately, going smaller and smaller in this direction, your final capability would be to try to engineer the structure of space and time itself. In the last lecture, we saw something of what that might mean in the sense of time travel and exploiting the way in which the flow of time can be changed.

Cognitive limits on our capability are difficult to evaluate, but we can appreciate them in certain ways. We know that we are the outcome of an evolutionary process which has selected for certain attributes and abilities which may no longer be relevant in quite the same way that they were when they were first selected. Half a million years ago, for example, our predecessors might have required certain abilities for their survival and multiplication which are no longer quite so relevant today.

You can think of how it might be that we have an appreciation for certain aesthetic qualities, or for music, for example, which are just by-products of abilities that really did have a survival value long ago. So we like symmetry, whether we are physicists or wallpaper designers, but why did our propensity and liking for symmetry evolve? One possibility is that what we had at the beginning was really a sensitivity for recognising living things in a crowded field containing both living and non-living things. Living things can be distinguished from non-living things in a very crude way, by the fact that living things are generally symmetrical from right to left, whereas non-living things are not and need not be.

That is an example of how our very distant evolutionary history may bias the way we perceive the world and the things that we like, so that we tend to do science and acquire knowledge in particular ways. Our propensity for both language and mathematics is unusual. Human linguistic skills are extraordinary. You might meet people who cannot do mathematics, who cannot play a musical instrument, who cannot write, who cannot read, but you will go a very long way before you find an able-bodied person who cannot speak any type of language.

What we do when we speak is extraordinarily sophisticated, far more complicated and far more difficult than any of those other skills that people claim to find so difficult, like playing the piano or solving differential equations.


So we have certain very complex linguistic skills, and perhaps many of our other ways of analysing the world, doing mathematics and so forth, are piggy-backing on that language structure that we have within us. We might worry that there are certain conceptual barriers that we have. We are used to thinking of the world in very particular ways: cause and effect, that everything has a cause, that the effects of causes just occur locally, and it is very difficult for physicists to think of the world other than in this rather traditional local interaction way.

The last way of looking at the problem here involves simply logic and conceptual barriers, and I want to give you a simple example: that it could be that the way the brain works encounters certain paradoxes which prevent it using logic in quite the way you might imagine. Suppose that we have a situation of voting, and I talk about voting because there is one interesting theory of the way the mind works by Marvin Minsky, which he calls the Society of Mind, where he looks upon the mind rather like a Parliament in a sense, that different parts vote to act and there is a winner and then the action takes place.

You may be interested to know that in some parts of the space programme, such as the launch of the Space Shuttle, there is obviously very sophisticated computer control; several computers control the launch, and at the end they vote on whether or not to launch, and the vote by the computers may be not to launch. Suppose that you have three people, whom I will call Alex, Bill and Chris, and they have got to decide something. Perhaps they are going to buy something together, a car say, and there is an Audi, a BMW and a Cortina to decide between.

They are pooling their money, so they decide they will vote on their preferences. Suppose Alex prefers the Audi to the BMW and the BMW to the Cortina; Bill prefers the BMW to the Cortina and the Cortina to the Audi; and Chris prefers the Cortina to the Audi and the Audi to the BMW. If we then add up the votes, what happens? Two out of three prefer the Audi to the BMW, and two out of three prefer the BMW to the Cortina, so surely the Audi must beat the Cortina? Not so: two out of three prefer the Cortina to the Audi, and the preferences run in a circle. If you have these simple preference voting structures, you can produce this paradoxical outcome. So it could be that you had a computational structure, a mental structure, which ran into this type of paradox, and you would have to have a sort of randomiser which broke the deadlock in some way.
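The deadlock can be demonstrated in a few lines of Python. The preference orderings below are the classic Condorcet cycle, used here purely for illustration:

```python
# Three voters, three cars; each list ranks the options best-first.
preferences = {
    "Alex":  ["Audi", "BMW", "Cortina"],
    "Bill":  ["BMW", "Cortina", "Audi"],
    "Chris": ["Cortina", "Audi", "BMW"],
}

def majority_prefers(a, b):
    """True if a strict majority of voters rank option a above option b."""
    votes = sum(1 for ranking in preferences.values()
                if ranking.index(a) < ranking.index(b))
    return votes > len(preferences) / 2

# The pairwise majorities form a cycle, so no option beats every other:
print(majority_prefers("Audi", "BMW"))      # True: 2 of 3 prefer the Audi
print(majority_prefers("BMW", "Cortina"))   # True: 2 of 3 prefer the BMW
print(majority_prefers("Cortina", "Audi"))  # True: 2 of 3 prefer the Cortina
```

Whichever car the group settles on, a majority would rather have one of the others, which is why some deadlock-breaking rule, the "randomiser" of the lecture, is needed.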

Well, so much for those practical and cognitive limits. I want to talk now about what I think are the most interesting aspects of this issue, and that is what I call intrinsic limits. These are limits on being able to compute, on being able to decide the truth or falsity of statements, on being able practically to perform computations, and then the limits imposed by quantum uncertainty and causality. I will start with quantum uncertainty, because most people have heard about it, and it is a good example of the pattern I mentioned at the beginning, where you have a very successful theory which then turns out not to be able to predict something.

It predicts that it cannot predict something. So it does not just fail to do it because of incompetence, or because it has not been worked out fully enough; it predicts quite definitely that something cannot be done. The first interesting example became known as the Uncertainty Principle of Werner Heisenberg. It says that the product of the uncertainty in something's position and the uncertainty in its momentum is always bigger than some finite number: Planck's constant divided by 4π. This tells you that you cannot know where something is and how it is moving with perfect accuracy, even if you have perfect instruments.

There is a conventional way of explaining this in a popular way, which is worth giving, and so I will give it. When you start to encounter things that are very, very small, what do you do when you measure where they are located? For example, when I look at this projector here, what I mean by seeing it is that some photons of light have bounced off it and entered my eye, and because the projector is a large object with a large mass, the impact and rebound of the photons into my eye has essentially no discernible effect upon the location or the movement of the projector.

But if the thing I am looking at is made smaller and smaller, eventually the energy and wavelength of the photons that hit it start to disrupt it by their rebound into my eye or my detector. So the very act of measuring the location of something moves it, and it is not where you thought it was after the measuring interaction has taken place. You can think of this inequality as a measure of that disruption of things by the measuring process. That gives you an idea as to why it is not surprising that there might be a limitation of this sort.

What it is telling you is that position and momentum, concepts familiar to us in large-scale everyday life, can only co-exist as properties of something very small to the extent that this inequality allows. So there is a limit, given by the Uncertainty Principle, to the extent that we can simultaneously talk about the position and the momentum of something that is very small.
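To get a feel for the scale of the limit, here is a minimal numerical sketch of the inequality Δx·Δp ≥ ℏ/2, with the constants typed in by hand and the confinement length chosen, for illustration, as roughly an atomic diameter:

```python
# A numerical feel for dx * dp >= hbar / 2 (Heisenberg's inequality).
# Constants are CODATA values typed in by hand, not drawn from a library.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg

dx = 1e-10                # confine an electron to about an atom's width, m
dp_min = hbar / (2 * dx)  # smallest momentum uncertainty the principle allows
dv_min = dp_min / m_e     # corresponding spread in velocity

print(f"minimum dp: {dp_min:.2e} kg m/s")
print(f"minimum dv: {dv_min:.2e} m/s")   # around 5.8e5 m/s: not negligible
```

For an electron squeezed into an atom-sized box, the unavoidable spread in velocity is hundreds of kilometres per second, which is why the effect dominates the atomic world while remaining invisible for projectors and planets.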

This is a good example of how a theory which does a lot of very successful predicting ultimately predicts its own limitations, the limitations of some of the concepts that it is using when you enter a particular domain. If we now move to the ability to calculate and compute, it is useful to think of, for example, all the truths of mathematics: all the formulae, all the theorems that there could be. Some of them might be ones that you are using in physics or chemistry or biology, and others might be just ones that you like because you are a mathematician.

Here, right in the background, if you like, is the whole of those mathematical truths. You can think of mathematics as all the possible patterns that could exist. The whole world of mathematics is infinite in extent; the mathematics that we know about is finite. In the past, it was limited by our ability to calculate with pencil and paper. Even if you calculated your entire life, like those people who want to calculate by hand the decimal expansion of pi to millions upon millions of places, there is a finite limit to the amount of that infinite mine of mathematical truth that you could bring out.

Even if you use the fastest computers, the limit remains finite. Nowadays in mathematics it has become commonplace for computers to do things much more sophisticated than simply adding up columns of numbers. There are computer programmes that carry out logical operations, that do algebra, that can do integration, that manipulate formulae with hundreds of thousands of terms, in which you would always make a mistake if you tried to move the terms around by hand; you could never be sure it was correct. So there is a realm of what we might call practically computable truths: things that you could discover in that great sea of mathematics by using our fastest computers for very long periods of time.

Beyond that, there is another frontier, another boundary, and this contains what we might call all computable truths. Alan Turing predicted, and then proved rigorously, that there are mathematical statements which cannot be resolved — shown to be either true or false — by running any computer programme for a finite time, so these are called uncomputable problems.
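Turing's argument can be sketched in Python-flavoured pseudocode. The oracle `halts` is hypothetical; no such function can actually be written, which is exactly the point, so this sketch is not runnable:

```python
# Suppose, for contradiction, that a total, always-correct oracle existed:
#   halts(program, data) -> True if program(data) eventually stops, else False.
# (No such function can exist; `halts` here is purely hypothetical.)

def paradox(program):
    if halts(program, program):   # the oracle says "it halts"...
        while True:               # ...so do the opposite: loop forever
            pass
    else:
        return                    # the oracle says "it loops", so halt at once

# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) is True, then paradox(paradox) loops forever.
# If it is False, then paradox(paradox) halts immediately.
# Either answer contradicts the oracle, so no such `halts` can be written.
```

The self-reference in `paradox(paradox)` is the diagonal trick at the heart of Turing's proof.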

We know many of them. These problems have a complexity that cannot be got at just by doing the same thing over and over again with a little change, more and more rapidly; at each step, you need a new, novel idea, you have to take a different type of step. The problems are by no means esoteric or unusual: being able to show, for example, as Roger Penrose did, that it is possible to tile the whole of an infinite plane with just two jigsaw-puzzle pieces of different shapes, without the pattern ever repeating periodically.

This is something that no computer could solve, so this is a problem which is uncomputable in character. So there is something called mathematical truth which is much larger than the realm of decidable truth: things which we are able to prove true or give counter-examples and show they are false. We are going to have a look at some of these little realms of uncertainty in a bit more detail.

The first interesting thing about using computers to do things like mathematics is that they are not the panacea most people fondly imagine: computers do not enable you to calculate and do anything that you want to do. They very quickly run into problems of tractability, and the reason is that the world of mathematical and scientific problems can be divided into two parts, the same two parts you met when you were at school: the easy problems and the hard problems!

Someone had written in saying he could not understand what all the fuss was about, standards must be falling, because when he was at school, all the problems were un-do-able on the maths examination paper! Easy problems have a rather particular definition. They are problems which, if they have, say, n parts, then as you add extra parts, the calculation that you need just grows in proportion to the number of parts. This is like doing your tax return. If you have n sources of income, then you will generally find that your tax return takes about n times longer to fill in than if you have one source of income, and if a further source of income suddenly appeared, the time that you take to add that new source to your tax return will just grow in proportion to the number.

So these problems are under control: as they grow in size, the computational time does not do anything very dramatic.


However, there is a different type of problem, characterised by the adjective hard, where each time you add an extra part to the problem, the calculation time doubles. This is a very rapid growth, and very quickly, even with a relatively small number of parts, the computation time becomes unrealistically long. Such problems are called intractable.
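The difference between the two growth laws is easy to see numerically. A minimal sketch, counting abstract "steps" rather than timing real problems:

```python
# "Easy": the work grows in proportion to the number of parts n.
def linear_steps(n):
    return n

# "Hard": the work doubles with each extra part.
def exponential_steps(n):
    return 2 ** n

for n in (10, 20, 30, 40):
    print(n, linear_steps(n), exponential_steps(n))

# At a billion steps per second, 2**60 steps take about 37 years,
# while 60 linear steps are over in a flash.
years = exponential_steps(60) / 1e9 / 3.15e7   # 3.15e7 s in a year, roughly
print(f"{years:.0f} years")
```

Adding just one more part to the hard problem doubles that figure again, which is why brute computing power never catches up with exponential growth.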

They are not necessarily very exotic. Here is a simple example. When I was very young, I can remember the sorts of Christmas presents I did not particularly want to receive. There was the one that you most did not want, the so-called sensible present: things like new underwear and socks; and then there were strange little puzzles that ageing relatives had no doubt played with when they were children and thought that you would still enjoy, and one of these was something called a monkey puzzle.

You may remember these. It is really a two-dimensional Rubik's cube. You have, say, nine pieces, and on each piece there are four parts of dismembered monkeys, shaded in different ways, and you have got to join up the tops of monkeys to the bottoms of monkeys of the same colour. Generally, you managed to do this, and you produced a set of correct joins. But suppose you were a computer: you knew nothing about monkeys, you knew nothing about monkey puzzles, you just knew about matching, and you had to set about solving this problem by search. What sort of challenge faces you?

Well, you soon find out that this is one of those hard problems. You have got nine cards to match up. After you put the first one down, you have eight choices for the next one, then seven, then six, and so on down to one: 362,880 possible orderings in all, before you even consider the different orientations in which each card can be placed. Each extra card multiplies the search again, and a blind search very quickly becomes unrealistically long.
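The size of that blind search can be counted directly. A short sketch, assuming the standard nine-card puzzle in which each card can also sit in any of four rotations:

```python
from math import factorial

# Orderings of nine distinct cards laid out one after another:
orderings = factorial(9)            # 9 * 8 * 7 * ... * 1

# Each card can additionally be placed in any of four rotations:
arrangements = orderings * 4 ** 9

print(orderings)      # 362880
print(arrangements)   # 95126814720
```

At a million candidate layouts checked per second, exhausting those roughly 9.5 × 10^10 arrangements would take more than a day, and a 5 × 5 version of the puzzle is already utterly hopeless.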
