Co-authored with Ajita Kamal
Editor’s Note: This article has been cited by P.Z. Myers at Pharyngula and Steven Novella at Neurologica, and has been reposted at RichardDawkins.net.

“It is almost irresistible for humans to believe that we have some special relation to the universe, that human life is not just a more-or-less farcical outcome of a chain of accidents reaching back to the first three minutes, but that we were somehow built in from the beginning.”

-Steven Weinberg

“You are here to enable the divine purpose of the universe to unfold. That is how important you are.”

-Eckhart Tolle

1. Introduction
The impulse to see human life as central to the existence of the universe is manifested in the mystical traditions of practically all cultures. It is so fundamental to the way pre-scientific people viewed reality that it may be, to a certain extent, ingrained in the way our psyche has evolved, like the need for meaning and the idea of a supernatural God. As science and reason dismantle the idea of the centrality of human life in the functioning of the objective universe, the emotional impulse has been to resort to finer and finer misinterpretations of the science involved. Mystical thinkers use these misrepresentations of science to paint over the gaps in our scientific understanding of the universe, belittling, in the process, science and its greatest heroes.
In their recent article in The Huffington Post, biologist Robert Lanza and mystic Deepak Chopra put forward their idea that the universe is itself a product of our consciousness, and not the other way around as scientists have been telling us. In essence, these authors are re-inventing idealism, an ancient philosophical concept that fell out of favour with the advent of the scientific revolution. According to the idealists, the mind creates all of reality. Many ancient Eastern and Western philosophical schools subscribe to this idealistic notion of the nature of reality. In the modern context, idealism has been supplemented with a brand of quantum mysticism and relabeled as biocentrism. According to Chopra and Lanza, this idea makes Darwin’s theory of the biological evolution and diversification of life insignificant. Both these men, although they come from different backgrounds, have independently expressed these ideas before with some popular success. In the article under discussion their different styles converge to present a uniquely mystical and bizarre worldview, which we wish to debunk here.
2. Biocentrism Misinterprets Several Scientifically Testable Truths
The scientific background to the biocentrism idea is described in Robert Lanza’s book Biocentrism: How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe, in which Lanza proposes that biology and not physics is the key to understanding the universe. Vital to his proposal is the idea that the universe does not really exist unless it is being observed by a conscious observer. To support this idea, Lanza makes a series of claims:
(a) Lanza questions the conventional idea that space and time exist as objective properties of the universe. In doing this, he argues that space and time are products of human consciousness and do not exist outside of the observer. Indeed, Lanza concludes that everything we perceive is created by the act of perception.
The intent behind this argument is to help consolidate the view that subjective experience is all there is. However, if you dig into what Lanza says, it becomes clear that he is positioning the relativistic nature of reality to make it seem incongruous with its objective existence. His reasoning relies on a subtle muddling of the concepts of subjectivity and objectivity. Take, for example, his argument here:

“Consider the color and brightness of everything you see ‘out there.’ On its own, light doesn’t have any color or brightness at all. The unquestionable reality is that nothing remotely resembling what you see could be present without your consciousness. Consider the weather: We step outside and see a blue sky – but the cells in our brain could easily be changed so we ‘see’ red or green instead. We think it feels hot and humid, but to a tropical frog it would feel cold and dry. In any case, you get the point. This logic applies to virtually everything.”

There is only some partial truth to Lanza’s claims. Color is an experiential truth – that is, it is a descriptive phenomenon that lies outside of objective reality. No physicist will deny this. However, the physical properties of light that are responsible for color are characteristics of the natural universe. Therefore, the sensory experience of color is subjective, but the properties of light responsible for that sensory experience are objectively true. The mind does not create the natural phenomenon itself; it creates a subjective experience or a representation of the phenomenon.
Similarly, temperature perception may vary from species to species, since it is a subjective experience, but the property of matter that causes this subjective experience is objectively real; temperature is determined by the average kinetic energy of the molecules of matter, and there is nothing subjective about that. Give a thermometer to a human and to an ass: they would both record the same value for the temperature at a chosen spot of measurement.
The idea that ‘color’ is a fact of the natural universe has been described by G. E. Moore as a naturalistic fallacy. Also, the idea that color is created by an intelligent creator is a supernaturalistic fallacy. It can be said that the idea that color is created objectively in the universe by the subjective consciousness of the observer is an anthropic fallacy. The correct view is that ‘color’ is the subjective sensory perception by the observer of a certain property of the universe that the observer is a part of.
Time and space receive treatment similar to that of color and heat in Lanza’s biocentrism. Lanza reaches the conclusion that time does not exist outside the observer by conflating absolute time (which does not exist) with objective time (which does). In 2007 Lanza made his argument using an ancient mathematical riddle known as Zeno’s Arrow paradox. In essence, Zeno’s Arrow paradox involves motion in space-time. Lanza says:

“Even time itself is not exempted from biocentrism. Our sense of the forward motion of time is really the result of an infinite number of decisions that only seem to be a smooth continuous path. At each moment we are at the edge of a paradox known as The Arrow, first described 2,500 years ago by the philosopher Zeno of Elea. Starting logically with the premise that nothing can be in two places at once, he reasoned that an arrow is only in one place during any given instance of its flight. But if it is in only one place, it must be at rest. The arrow must then be at rest at every moment of its flight. Logically, motion is impossible. But is motion impossible? Or rather, is this analogy proof that the forward motion of time is not a feature of the external world but a projection of something within us? Time is not an absolute reality but an aspect of our consciousness.”

In a more recent article Lanza brings up the implications of special relativity for Zeno’s Arrow paradox. He writes:

“Consider a film of an archery tournament. An archer shoots an arrow and the camera follows its trajectory. Suddenly the projector stops on a single frame — you stare at the image of an arrow in mid-flight. The pause enables you to know the position of the arrow with great accuracy, but it’s going nowhere; its velocity is no longer known. This is the fuzziness described by the uncertainty principle: sharpness in one parameter induces blurriness in the other. All of this makes perfect sense from a biocentric perspective. Everything we perceive is actively being reconstructed inside our heads. Time is simply the summation of the ‘frames’ occurring inside the mind. But change doesn’t mean there is an actual invisible matrix called “time” in which changes occur. That is just our own way of making sense of things.”

In the first case Lanza seems to state that motion is logically impossible (which is a pre-relativistic view of the paradox), and in the next case he mentions that uncertainty is present in the system (a post-relativistic model of motion). In both cases, however, Lanza’s conclusion is the same – biocentrism is true for time. No matter what the facts about the nature of time are, Lanza concludes that time is not real. His model is unfalsifiable and therefore cannot be a part of science. What Lanza doesn’t let on is that Einstein’s special-relativity theory removes the possibility of absolute time, not of time itself. Zeno’s Arrow paradox is resolved by replacing the idea of absolute time with Einstein’s relativistic coupling of space and time. Space-time has an uncertainty in quantum mechanics, but it is not nonexistent. The idea of time as a series of sequential events that we perceive and put together in our heads is an experiential version of time. This is the way we have evolved to perceive time. This experiential version of time seems absolute, because we evolved to perceive it that way. However, in reality time is relative. This is a fundamental fact of modern physics. Time does exist outside of the observer, but allows us only a narrow perception of its true nature.
Space is the other property of the universe that Lanza attempts to describe as purely a product of consciousness. He says, “Wave your hand through the air. If you take everything away, what’s left? The answer is nothing. So why do we pretend space is a thing?” Again, Einstein’s general theory of relativity provides us with objective predictions that we can test, such as the bending of space-time near massive bodies. Such effects have been observed and verified multiple times. Space is a ‘thing’ as far as the objective universe is concerned.
Lanza says “Space and time are simply the mind’s tools for putting everything together.” This is true, but there is a difference between being the ‘mind’s tools’ and being created by the mind itself. In the first instance the conscious perception of space and time is an experiential trick that the mind uses to make sense of the objective universe; in the second, space and time are actual physical manifestations of the mind. The former is tested and true, while the latter is an idealistic notion that is not supported by science. The experiential conception of space and time is different from the objective space and time that comprise the universe. This difference is similar to the way color differs from photon frequency: the former is subjective while the latter is objective.
Can Lanza deny all the evidence that, whereas we humans emerged on the scene very recently, our Earth and the solar system and the universe at large have been there all along? What about all the objective evidence that life forms have emerged and evolved to greater and greater complexity, resulting in the emergence of humans at a certain stage in the evolutionary history of the Earth? What about all the fossil evidence for how biological and other forms of complexity have been evolving? How can humans arrogate to themselves the power to create objective reality?
Much of Lanza’s idealism arises from a distrust, or incomprehension, of mathematics. He writes:

“In order to account for why space and time were relative to the observer, Einstein assigned tortuous mathematical properties to an invisible, intangible entity that cannot be seen or touched. This folly continues with the advent of quantum mechanics.”

Why should the laws of Nature ‘bother’ about whether you can touch something or not? The laws of Nature were there long before Lanza appeared on the scene. Since he cannot visualize how the mathematics describes an objective universe outside of experience, Lanza announces that reality itself does not exist unless created by the act of observation. Some cheek!
(b) Lanza claims that without an external observer, objects remain in a quantum probabilistic state. He conflates this observer with consciousness (which he admits is “subjective experience”). Therefore, he claims, without consciousness any possible universe would exist only as probabilities. The misunderstanding of quantum theory that Lanza is promoting is addressed further in Section 4 below.
(c) The central argument from Lanza is a hard version of the anthropic principle. Lanza says:

“Why, for instance, are the laws of nature exactly balanced for life to exist? There are over 200 physical parameters within the solar system and universe so exact that it strains credulity to propose that they are random — even if that is exactly what contemporary physics baldly suggests. These fundamental constants (like the strength of gravity) are not predicted by any theory — all seem to be carefully chosen, often with great precision, to allow for existence of life. Tweak any of them and you never existed.”

This reveals a total lack of understanding of what the anthropic principle really says. So let us take a good, detailed look at this principle.
3. The Planetary Anthropic Principle

“And the beauty of the anthropic principle is that it tells us, against all intuition, that a chemical model need only predict that life will arise on one planet in a billion billion to give us a good and entirely satisfying explanation for the presence of life here.”

-Richard Dawkins, The God Delusion (2007)

The anthropic principle was first enunciated by the mathematician Brandon Carter in 1974. Further elaboration and consolidation came in 1986 in the form of the book The Anthropic Cosmological Principle by Barrow and Tipler. There are quite a few versions of the principle doing the rounds. The scientifically acceptable version, also called the ‘weak’ (or planetary) version, states that: The particular universe in which we find ourselves possesses the characteristics necessary for our planet to exist and for life, including human life, to flourish here.
In particle physics and cosmology, we humans have had to introduce ‘best fit’ parameters (fundamental constants) to explain the universe as we see it. Slightly different values for some of the critical parameters would have led to entirely different histories of the cosmos. Why do these parameters have the values they have? According to a differently worded form of the weak version of the anthropic principle stated above: the parameters and the laws of physics can be taken as fixed; it is simply that we humans have appeared in the universe to ask such questions at a time when the conditions were just right for our life.
This version suffices to explain quite a few ‘coincidences’ related to the fact that the conditions for our evolution and existence on the planet Earth happen to be ‘just right’ for that purpose. Life as we know it exists only on planet Earth. Here is a list of favourable necessary conditions for its existence, courtesy Dawkins (2007):
What we have listed above are just some of the necessary conditions. They are by no means sufficient conditions. With all the above conditions available on Earth, another highly improbable set of phenomena occurred, namely the actual origin of life. This was a highly improbable (but not impossible) sequence of chemical events, leading to the emergence of a mechanism for heredity. This mechanism came in the form of some kind of genetic molecule, such as RNA. This was a highly improbable thing to happen, but our existence implies that such an event, or sequence of events, did indeed take place. Once life had originated, Darwinian evolution of complexity through natural selection (which is not a highly improbable set of events) did the rest, and here we are, discussing such questions.
Like the origin of life, another extremely improbable event (or set of events) was the emergence of the sophisticated eukaryotic cell (on which the life of us humans is based). We invoke the anthropic principle again to say that, no matter how improbable such an event was statistically, it did indeed happen; otherwise we humans would not be here. The occurrence of all such one-off, highly improbable events can be explained by the anthropic principle.
Before we discuss the cosmological or ‘strong’ version of the anthropic principle, it is helpful to recapitulate the basics of quantum theory.
4. Quantum Theory
In conventional quantum mechanics we use wave functions, ψ, to represent quantum states. The wave function plays a role somewhat similar to that of trajectories in classical mechanics. The Schrödinger equation describes how the wave function of a quantum system evolves with time. This equation predicts a smooth and deterministic time-evolution of the wave function, with no discontinuities or randomness. Just as trajectories in classical mechanics describe the evolution of a system in phase space from one time step to the next, the Schrödinger equation transforms the wave function ψ(t0) at time t0 into its value ψ(t) at another time t. The physical interpretation of the wave function is that |ψ|² gives the probability of finding the system in a given configuration.
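For reference, here are the standard textbook statements of the two ideas just described, the deterministic evolution law and the probability interpretation (our notation; the Hamiltonian shown is the simplest one-dimensional case):

\[ i\hbar\,\frac{\partial \psi(x,t)}{\partial t} = \hat{H}\,\psi(x,t), \qquad \hat{H} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}}{\partial x^{2}} + V(x) \]

\[ P\big(x \in [a,b],\ t\big) = \int_{a}^{b} |\psi(x,t)|^{2}\,dx, \qquad \int_{-\infty}^{\infty} |\psi(x,t)|^{2}\,dx = 1 \]

The first equation is smooth and deterministic; all the probabilistic content enters only through the second, the Born rule.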
An elementary particle can exist as a superposition of two or more alternative quantum states. Suppose its energy can take two values, E1 and E2. Let u1 and u2 denote the corresponding wave functions, so that the state of the system is a linear combination ψ = c1u1 + c2u2. The quantum interpretation is that the system exists in both states, with |c1|² and |c2|² as the respective probabilities. Thus we move from a pure state to a mixture or ensemble of states. What is more, something striking happens when we humans observe such a system, say an electron, with an instrument. At the moment of observation, the wave function appears to collapse into only one of the possible alternative states, the superposition of which was described by the wave function before the event of measurement. That is, a quantum state becomes decoherent when measured or monitored by the environment. This amounts to the introduction of a discontinuity in the smooth evolution of the wave function with time.
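As a worked illustration (our example, not one used by Lanza or elsewhere in this article), take an equal superposition of the two energy states:

\[ \psi = \tfrac{1}{\sqrt{2}}\,(u_{1} + u_{2}), \qquad P(E_{1}) = \Big|\tfrac{1}{\sqrt{2}}\Big|^{2} = \tfrac{1}{2} = P(E_{2}) \]

If a measurement then yields E1, the Copenhagen prescription replaces ψ by u1 in a single discontinuous step; this is the ‘collapse’ referred to above.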
This apparent collapse of the wave function does not follow from the mathematics of the Schrödinger equation, and was, in the early stages of the history of quantum mechanics, introduced ‘by hand’ as an additional postulate. That is, one chose to introduce the interpretation that there is a collapse of the wave function to the state actually detected by the measurement in the ‘real’ world, to the exclusion of other states represented in the original wave function. This (unsatisfactory) dualistic interpretation of quantum mechanics for dealing with the measurement problem was developed mainly by Bohr and Heisenberg in Copenhagen in the mid-1920s, and is known as the Copenhagen interpretation.
Another basic notion in standard quantum mechanics is that of time asymmetry. In classical mechanics we make the reasonable-looking assumption that, once we have formulated the Newtonian (or equivalent) equations of motion for a system, the future states are determined by the initial conditions. In fact, we can not only calculate the future states from the initial conditions, we can even calculate the initial conditions if the future states are known. This is time symmetry. In quantum mechanics, the uncertainty principle destroys this time symmetry. There can now be a one-to-many relationship between initial and final conditions: two identical particles, in identical initial conditions, need not be observed to be in the same final conditions at a later time.
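The quantitative statement behind this loss of determinism is Heisenberg’s uncertainty relation, quoted here in its standard form for reference:

\[ \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} \]

Because position and momentum cannot both be specified exactly at the initial time, two particles prepared as identically as nature allows can still be found in different final states.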
Multiple universes
Hugh Everett, during the mid-1950s, expressed total dissatisfaction with the Copenhagen interpretation: ‘The Copenhagen Interpretation is hopelessly incomplete because of its a priori reliance on classical physics … as well as a philosophic monstrosity with a “reality” concept for the macroscopic world and denial of the same for the microcosm.’ The Copenhagen interpretation implied that equations of quantum mechanics apply only to the microscopic world, and cease to be relevant in the macroscopic or ‘real’ world.
Everett offered a new interpretation, which presaged the modern ideas of quantum decoherence. Everett’s ‘many worlds’ interpretation of quantum mechanics is now taken more seriously, although not entirely in its original form. He simply let the mathematics of the quantum theory show the way for understanding logically the interface between the microscopic world and the macroscopic world. He made the observer an integral part of the system being observed, and introduced a universal wave function that applies comprehensively to the totality of the system being observed and the observer. This means that even macroscopic objects exist as quantum superpositions of all allowed quantum states. There is thus no need for the discontinuity of a wave-function collapse when a measurement is made on the microscopic quantum system in a macroscopic world.
Everett examined the question: What would things be like if no contributing quantum states to a superposition of states are banished artificially after seeing the results of an observation? He proved that the wave function of the observer would then bifurcate at each interaction of the observer with the system being observed. Suppose an electron can have two possible quantum states A and B, and its wave function is a linear superposition of these two. The evolution of the composite or universal wave function describing the electron and the observer would then contain two branches corresponding to each of the states A and B. Each branch has a copy of the observer, one which sees state A as a result of the measurement, and the other which sees state B. In accordance with the all-important principle of linear superposition in quantum mechanics, the branches do not influence each other, and each embarks on a different future (or a different ‘universe’), independent of the other. The copy of the observer in each universe is oblivious to the existence of other copies of itself and other universes, although the ‘full reality’ is that each possibility has actually happened. This reasoning can be made more abstract and general by removing the distinction between the observer and the observed, and stating that, at each interaction among the components of the composite system, the total or universal wave function would bifurcate as described above, giving rise to multiple universes or many worlds.
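Schematically (our notation, intended only as an illustration of Everett’s point), if the electron starts in a superposition and the observer starts in a ‘ready’ state, the linear Schrödinger evolution of the composite system yields an entangled sum of branches rather than a single outcome:

\[ \big(c_{A}\,|A\rangle + c_{B}\,|B\rangle\big)\otimes|\mathrm{ready}\rangle \;\longrightarrow\; c_{A}\,|A\rangle\otimes|\mathrm{sees}\ A\rangle \;+\; c_{B}\,|B\rangle\otimes|\mathrm{sees}\ B\rangle \]

Each term is one ‘branch’; linearity guarantees that the two branches evolve without influencing each other.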
A modern and somewhat different version of this interpretation of quantum mechanics introduces the term quantum decoherence to rationalise how the branches become independent, and how each turns out to represent our classical or macroscopic reality. Quantum computing is now a reality, and it is based on this understanding of quantum mechanics.
Parallel histories
Richard Feynman formulated a different version of the many-worlds idea, and spoke in terms of multiple or parallel histories of the universe (rather than multiple worlds or universes). This work, done after World War II, fetched him the Nobel Prize in 1965. Feynman, whose path integrals are well known in quantum mechanics, suggested that, when a particle goes from a point P to a point Q in space-time, it does not have just a single unique trajectory or history. [It should be noted that, although we normally associate the word 'history' only with past events, history in the present context can refer to both the past and the future. A history is merely a narrative of a time sequence of events, whether past, present, or future.] Feynman proposed that every possible path or trajectory from P to Q in space-time is a candidate history, with an associated probability amplitude. The wave function for every such trajectory has an amplitude and a phase. The path integral for going from P to Q is obtained as the weighted vector sum, or integration, over all such individual paths or histories. Feynman’s rules for assigning the amplitudes and phases happen to be such that, for a macroscopic object, the effects of all histories except the one actually observed get cancelled out. For sub-microscopic particles, of course, the cancellation is far from complete, and there are indeed competing histories or parallel universes.
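In symbols, Feynman’s prescription reads as follows (standard textbook form, with S the classical action; the notation is ours):

\[ K(Q, t_{Q};\, P, t_{P}) \;=\; \int \mathcal{D}[x(t)]\; e^{\,i\,S[x(t)]/\hbar}, \qquad S[x(t)] = \int_{t_{P}}^{t_{Q}} L(x, \dot{x})\, dt \]

Every path contributes with the same magnitude but a different phase; for a macroscopic object the phases of neighbouring paths oscillate so rapidly that almost everything cancels, leaving the single classical trajectory dominant.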
Quantum Darwinism
A different resolution to the problem of interfacing the microscopic quantum description of reality with macroscopic classical reality is offered by what has been called ‘quantum Darwinism.’ This formalism does not require the existence of an observer as a witness of what occurs in the universe. Instead, the environment is the witness. A selective witness at that, rather like natural selection in Darwin’s theory of evolution. The environment determines which quantum properties are the fittest to survive (and be observed, for example, by humans). Many copies of the fitter quantum property get created in the entire environment (‘redundancy’). When humans make a measurement, there is a much greater chance that they would all observe and measure the fittest solution of the Schrödinger equation, to the exclusion (or near exclusion) of other possible outcomes of the measurement experiment.
In a computer experiment, Blume-Kohout and Zurek (2007) demonstrated quantum Darwinism (http://www.arxiv.org/abs/0704.3615) in zero-temperature quantum Brownian motion (QBM). A harmonic oscillator system (S) is made to evolve in contact with a bath (ε) of harmonic oscillators. The question asked is: How much information about S can an observer extract from the bath ε? The bath ε consists of subenvironments εi, i = 1, 2, 3, …, and each observer has exclusive access to a fragment F consisting of m subenvironments. The information available to such an observer is quantified by the quantum mutual information between S and F.
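The quantity being computed is the standard quantum mutual information, built from von Neumann entropies (given here for reference; our notation, not taken from the paper’s text):

\[ I(S\!:\!F) \;=\; H(S) + H(F) - H(S,F), \qquad H(\rho) = -\operatorname{Tr}\big(\rho \log \rho\big) \]

Redundancy means that even a small fragment F already carries nearly all of the classically available information about S, i.e. I(S:F) is close to H(S) for many disjoint fragments of the environment.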
An important result of this approach is that substantial redundancy appears in the QBM model; i.e., multiple redundant records get made in the environment. As the authors state, this redundancy accounts for the objectivity and the classicality; the environment is a witness, holding many copies of the evidence. When humans make a measurement, it is most likely that they would all interact with one of the stable recorded copies, rather than directly with the actual quantum system, and thus observe and measure the classical value, to the exclusion of other possible outcomes of the measurement experiments.
Gell-Mann’s coarse-graining interpretation of quantum mechanics
For this interpretation, let us first understand the difference between fine-grained and coarse-grained histories of the universe. Completely fine-grained histories of the universe are histories that give as complete a description as possible of the entire universe at every moment of time. Consider a simplified universe in which elementary particles have no attributes other than positions and momenta, and in which the indistinguishability among particles of a given type is ignored. Then one kind of fine-grained history of this simplified universe would be one in which the positions of all the particles are known at all times. Unlike classical mechanics, which is deterministic, quantum mechanics is probabilistic. One might think that we can write down the probability for each possible fine-grained history. But this is not so. It turns out that the ‘interference’ terms between fine-grained histories do not usually cancel out, and so we cannot assign probabilities to the fine-grained histories. One has to resort to coarse-graining to be able to assign probabilities to the histories. Murray Gell-Mann and coworkers applied this approach to a description of the quantum-mechanical histories of the universe. It was shown that the interference terms get cancelled out on coarse-graining. Thus we can work directly with probabilities, rather than having to work with probability amplitudes, and then there is no problem interfacing the microscopic description with the macroscopic world of measurements.
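In the Gell-Mann–Hartle formulation this condition is usually written in terms of a decoherence functional; a sketch of the standard expression follows (our notation, taken from the general literature rather than from anything quoted in this article):

\[ D(\alpha, \alpha') \;=\; \operatorname{Tr}\!\big[\, C_{\alpha}\, \rho\, C_{\alpha'}^{\dagger} \,\big], \qquad C_{\alpha} = P^{(n)}_{\alpha_{n}}(t_{n}) \cdots P^{(1)}_{\alpha_{1}}(t_{1}) \]

Here ρ is the initial state and C_α is a time-ordered chain of projectors defining one coarse-grained history α. The numbers p(α) = D(α, α) can be treated as genuine probabilities only when the off-diagonal interference terms D(α, α′), α ≠ α′, are negligible, which is precisely what sufficient coarse-graining achieves.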
Gell-Mann also emphasized the point that the term ‘many worlds or universes’ should be substituted by ‘many alternative histories of the universe’, with the further proviso that the many histories are not ‘equally real’; rather they have different probabilities of occurrence.
5. The Cosmological Anthropic Principle
The chemical elements needed for life were forged in stars, and then flung far into space through supernova explosions. This required a certain amount of time. Therefore the universe cannot be younger than the lifetime of stars. The universe cannot be too old either, because then all the stars would be ‘dead’. Thus, life can exist only when the universe has just the age that we humans measure it to be, and has just the physical constants that we measure them to be.
It has been calculated that if the laws and fundamental constants of our universe had been even slightly different from what they are, life as we know it would not have been possible. Rees (1999), in the book Just Six Numbers, listed six fundamental constants which together determine the universe as we see it. Their fine-tuned mutual values are such that even a slightly different set of these six numbers would have been inimical to our emergence and existence. Consideration of just one of these constants, namely the strength of the strong interaction (which determines the binding energies of nuclei), is enough to make the point. It is defined as that fraction of the mass of an atom of hydrogen which is released as energy when hydrogen atoms fuse to form an atom of helium. Its value is 0.007, which is just right (give or take a small acceptable range) for any known chemistry to exist, and no chemistry means no life.

Our chemistry is based on reactions among the 90-odd elements. Hydrogen is the simplest among them, and the first to occur in the periodic table. All the other elements in our universe got synthesised by fusion of hydrogen atoms. This nuclear fusion depends on the strength of the strong or nuclear interaction, and also on the ability of a system to overcome the intense Coulomb repulsion between the fusing nuclei. The creation of intense temperatures is one way of overcoming the Coulomb repulsion. A small star like our Sun has a temperature high enough for the production of only helium from hydrogen. The other elements in the periodic table must have been made in the much hotter interiors of stars larger than our Sun. These big stars may explode as supernovas, sending their contents as stellar dust clouds, which eventually condense, creating new stars and planets, including our own Earth. That is how our Earth came to have the 90-odd elements so crucial to the chemistry of our life.

The value 0.007 for the strong interaction determined the upper limit on the mass number of the elements we have here on Earth and elsewhere in our universe. A value of, say, 0.006 would mean that the universe would contain nothing but hydrogen, making impossible any chemistry whatsoever. And if it were too large, say 0.008, all the hydrogen would have disappeared by fusing into heavier elements. No hydrogen would mean no life as we know it; in particular there would be no water without hydrogen.
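The 0.007 figure can be checked with a quick back-of-the-envelope calculation using standard atomic masses (our worked example, not taken from Rees’s book or from the article):

\[ 4\,m_{\mathrm{H}} \approx 4 \times 1.0079\ \mathrm{u} = 4.0316\ \mathrm{u}, \qquad m_{\mathrm{He}} \approx 4.0026\ \mathrm{u}, \qquad \frac{4.0316 - 4.0026}{4.0316} \approx 0.007 \]

About 0.7 per cent of the rest mass disappears when hydrogen fuses into helium, and it is released as energy via E = Δm c².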
The same holds for the other finely-tuned fundamental constants of our universe. The existence of humans has become possible because the values of the fundamental constants are what they are; had they been different, we would not exist. That is how the anthropic principle (planetary or cosmological, weak or strong) should be stated, and the weak version is the only valid version of the principle.
But why does the universe have these values for the fundamental constants, and not some other set of values? Different physicists and cosmologists have tried to answer this question in different ways, and the investigations go on. One possibility is that there are multiple universes, and we are in one just right for our existence. Another idea is based on string theory.
6. String Theory and the Anthropic Principle
A ‘string’ is a fundamental one-dimensional object, postulated to replace the concept of structureless elementary particles. Different vibrational modes of a string give rise to the various elementary particles (including the graviton). String theory aims to unite quantum mechanics and the general theory of relativity, and is thus expected to be a unified ‘theory of everything.’ If this theory makes sufficient headway, the six fundamental constants identified by Rees may turn out to be inter-related, and not free to take arbitrary values. But this still leaves open the question asked above: Why this particular set of fundamental constants, and not another? Hawking (1988) asked an even deeper question: ‘Even if there is only one possible unified theory, it is just a set of rules and equations. What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to all the bother of existing?’
Our universe is believed to have started at the big bang, shown by Hawking and Penrose in the 1970s to be a singularity point in space-time (some physicists disagree with the singularity idea). The evidence for this seems to be that the universe has been expanding ever since then. It so happens that we have no knowledge of the set of initial boundary conditions at the moment of the big bang. Moreover, as Hawking and Hertog said in 2006, things could be a little simpler ‘if one knew that the universe was set going in a particular way in either the finite or infinite past.’ Therefore Hawking and coworkers argued that it is not possible to adopt the bottom-up approach to cosmology, wherein one starts at the beginning of time, applies the laws of physics, calculates how the universe would evolve with time, and then just hopes that it would turn out to be something like the universe we live in. Consequently a top-down approach has been advocated by them (remember, this is just a model), wherein we start with the present and work our way backwards into the past. According to Hawking and Hertog (2006), there are many possible histories (corresponding to successive unpredictable bifurcations in phase space), and the universe has lived them all. Not only that, there is also an anthropic angle to this scenario, as explained below.
As mentioned above, Stephen Hawking and Roger Penrose had proved that the moment of the big bang was a singularity, i.e. a point where gravity must have been so strong as to curve space and time in an unimaginably strong way. Under such extreme conditions our present formulation of general relativity would be inadequate. A proper quantum theory of gravity is still an elusive proposition. But, as suggested by Hawking and Hertog in 2006, because of the small size of the universe at and just after the big bang, quantum effects must have been very important. The origin of the universe must have been a quantum event. This statement has several weird-looking consequences. The basic idea is to incorporate the consequences of Heisenberg’s uncertainty principle when considering the evolution of the (very small) early universe, and combine it with Feynman’s sum-over-histories approach. This means that, starting from configuration A, the early universe could go not only to B, but also to other configurations B’, B”, etc. (as permitted by the quantum-mechanical uncertainty principle), and one has to do a sum-over-histories for each of the possibilities AB, AB’, AB”, … And each such branch corresponds to a different evolution of the universe (with different cosmological and other fundamental constants), only one or a few of them corresponding to a universe in which we humans could evolve and survive. This provides a satisfactory answer to the question: ‘why does the universe have these values for the fundamental constants, and not some other set of values?’.
The statement ‘humans exist in a universe in which their existence is possible’ is practically a tautology. How can humans exist in a universe which has values of fundamental constants which are not compatible with their existence?! Stop joking, Dr. Lanza.
The other possible universes (or histories) also exist, each with a specific probability. Our observations of the world determine the history that we see. The fact that we are here, making observations, assigns to us a particular history.
Let A denote the beginning of time (if there is any), and B denote now. The state of the universe at point B can be broadly specified by recognizing the important aspects of the world around us: There are three large dimensions in space, the geometry of space is almost flat, the universe is expanding, etc. The problem is that we have no way of specifying point A. So how do we perform the various sums over histories? An interesting point of the quantum mechanical sums-over-histories theory is that the answers come out right when we work with imaginary (or complex) time, rather than real time. The work of Hawking and Hertog (2006) has shown that the imaginary-time approach is crucial for understanding the origin of the universe. When the histories of the universe are added up in imaginary time, time gets transformed into space. It follows from this work that when the universe was very small, it had four spatial dimensions, and none for time. In terms of the history of the universe, it means that there is no point A, and that the universe has no definable starting point or initial boundary conditions. In this no-boundary scheme of things, we can only start from point B and work our way backwards (the top-down approach).
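The ‘imaginary time’ device referred to here is the standard Wick rotation; schematically (our notation):

\[ t \;\to\; -\,i\,\tau: \qquad ds^{2} = -\,c^{2}\,dt^{2} + dx^{2} + dy^{2} + dz^{2} \;\;\longrightarrow\;\; ds^{2} = c^{2}\,d\tau^{2} + dx^{2} + dy^{2} + dz^{2} \]

After the rotation the time direction enters the geometry on exactly the same footing as the three space directions, which is the sense in which ‘time gets transformed into space’ in the no-boundary proposal.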
This approach also solves the fine-tuning problem of cosmology. Why does the universe have a particular inflation history? Why does the cosmological constant (which determines the rate of inflation) have the value it has? Why did the early universe have a particular ‘fine-tuned’ initial configuration and a specific (fast) initial rate of inflation? In the no-boundary scenario there is no need to define an initial state, and there is no need for any fine tuning. What is more, the very fact of inflation, as against no inflation, follows from the theory as the most probable scenario.
String theory defines a near-infinity of multiple universes. This goes well with the anthropic-principle idea that, out of the multiple choices for the fundamental constants (including the cosmological constant) for each such universe, we live in the universe that makes our existence possible. In the language of string theory, there are multiple ‘pocket’ universes that branch off from one another, each branch having a different set of fundamental constants. Naturally, we are living in one with just the right fundamental constants for our existence.
While many physicists feel uncomfortable with this unconfirmed world view, Hawking and Hertog (2006) have pointed out that the picture of a never-ending proliferation of pocket universes is meaningful only from the point of view of an observer outside a universe, and that situation (observer outside a universe) is impossible. This means that parallel pocket universes can have no effect on an actual observer inside a particular pocket.
Hawking’s work has several other implications as well. For example, in his scheme of things the string theory ‘landscape’ is populated by the set of all possible histories. All possible versions of a universe exist in a state of quantum superposition. When we humans choose to make a measurement, a subset of histories that share the specific property measured gets selected. Our version of the history of the universe is determined by that subset of histories. No wonder the cosmological anthropic principle holds. How can any rational person use the anthropic principle to justify biocentrism?
Hawking and Hertog’s theory can be tested by experiment, although that is not going to be easy. Its invocation of Heisenberg’s uncertainty principle during the early moments of the universe, and the consequent quantum fluctuations, leads to a prediction of specific fluctuations in the cosmic microwave background, and in the early spectrum of gravitational waves. These predicted fluctuations arise because there is an uncertainty in the exact shape of the early universe, which is influenced, among other things, by other histories with similar geometries. Unprecedented precision will be required for testing these predictions. In any case, gravitational waves have not even been detected yet.
Meanwhile, good scientists are having a serious debate about the correct interpretation of the data available about life and the universe. While this goes on, non-scientists and charlatans cannot be permitted to twist facts to satisfy the human hunger for the feel-good or feel-important factor. The scientific method is such that scientists feel good when they are doing good science.
7. Wolfram’s Universe
Stephen Wolfram has emphasized the role of computational irreducibility when it comes to trying to understand our universe. The notion of probability (as opposed to certainty) is inherent in our worldview if quantum theory is a valid theory. Wolfram argues that this may not be a correct worldview. He does not rule out the possibility that there really is just a single, definite, rule for our universe which, in a sense, deterministically specifies how everything in our universe happens. Things only look probabilistic because of the high degree of complexity involved, particularly regarding the very structure and connectivity of space and time. It is computational irreducibility that sometimes makes certain things look incomprehensible or probabilistic, rather than deterministic. Since we are restricted to doing the computational work within the universe, we cannot expect to ‘outrun’ the universe, and derive knowledge any faster than just by watching what the universe actually does.
Wolfram points out that there is relief from this tyranny of computational irreducibility only in the patches or islands of computational reducibility. It is in those patches that essentially all of our current physics lies. In natural science we usually have to be content with making models that are approximations. Of course, we have to try to make sure that we have managed to capture all the features that are essential for some particular purpose. But when it comes to finding an ultimate model for the universe, we must find a precise and exact representation of the universe, with no approximations. This would amount to reducing all physics to mathematics. But even if we could do that and know the ultimate rule, we are still going to be confronted with the problem of computational irreducibility. So, at some level, to know what will happen, we just have to watch and see history unfold.
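Wolfram’s favourite illustration of computational irreducibility is the elementary cellular automaton known as Rule 30: a trivially simple deterministic update rule whose output nevertheless looks random, and which, as far as anyone knows, cannot be predicted any faster than by actually running it. A minimal sketch in Python (our illustration; the ring size and number of steps are arbitrary choices):

# Rule 30: each cell's next value depends only on itself and its two neighbours.
# The update is new_cell = left XOR (centre OR right), applied on a ring of cells.

def step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

row = [0] * 63   # start from a single 'on' cell in the middle of the ring
row[31] = 1

for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)

# Although every step is fully deterministic, the central column behaves like a
# random bit stream; to know what row N looks like, one effectively has to
# compute all N rows. That is computational irreducibility in miniature.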
8. The Nature of Consciousness
One criticism of biocentrism comes from the philosopher Daniel Dennett, who says “It looks like an opposite of a theory, because he doesn’t explain how consciousness happens at all. He’s stopping where the fun begins.”
The logic behind this criticism is obvious. Without a descriptive explanation for consciousness and how it ‘creates’ the universe, biocentrism is not useful. In essence, Lanza calls for the abandonment of modern theoretical physics and its replacement with a magical solution. Here are a few questions that one might ask of the idea:
Daniel Dennett’s criticism of biocentrism centres on Lanza’s non-explanation of the nature of consciousness. In fact, even from a biological perspective Lanza’s conception of consciousness is unclear. For example, he consistently equates consciousness with subjective experience while stressing its independence from the objective universe (see Lanza’s quote below). This is an appeal to the widespread but erroneous intuition towards Cartesian Dualism. In this view, consciousness (subjective experience) belongs to a different plane of reality than the one on which the material universe is constructed. Lanza requires this general definition of consciousness to construct his theory of biocentrism. He uses it in the same way that Descartes used it: as a semantic tool to deconstruct reality. In fact, Lanza’s theory of biocentrism is a sophisticated non-explanation for the ‘brain in a vat’ problem that has plagued philosophers for centuries. However, instead of subscribing to Cartesian Dualism, he attempts a Cartesian Monism by invoking quantum mechanics. To be exact, his view is Monistic Idealism (the idea that consciousness is everything), but the Cartesian bias is an essential element in his arguments.
In a dualistic or idealistic context, Lanza’s definition of consciousness as subjective experience may be acceptable. However, Lanza’s definition is incomplete from a scientific perspective. The truth is that there are difficulties in analysing consciousness empirically. In scientific terms, consciousness is a ‘hard problem’, meaning that its completely subjective nature places it beyond direct objective study. Lanza exploits this difficulty to deny science any understanding of consciousness.
Lanza trivializes the current debate in the scientific community about the nature of consciousness when he says:
There is no need to view consciousness as such a mystery. There are some contemporary models of consciousness that are quite explanatory, presenting promising avenues for studying how the brain works. Daniel Dennett’s Multiple Drafts Model is one. According to Dennett, there is nothing mystical about consciousness. It is an illusion created by tricks in the brain. The biological machinery behind the tricks that create the illusion of consciousness is the product of successive evolutionary processes, beginning with the development of primitive physiological reactions to external stimuli. In the context of modern humans, consciousness consists of a highly dynamic process of information exchange in the brain. Multiple sets of sensory information, memories and emotional cues are competing with each other at all times, but at any one instant only one set of these factors dominates the brain; at the next instant, a slightly different set is dominant. This creates the illusion of a continuous stream of thoughts and experiences, leading to the intuition that consciousness comprises the entirety of the voluntary mental function of the individual. There are other materialist models, such as Marvin Minsky’s view of the brain as an emotion machine, that provide us with ways of approaching the problem from a scientific perspective without resorting to mysticism.
Consciousness is not something that requires a restructuring of objective reality. It is a subjective illusion on one level, and the mechanistic outcome of evolutionary processes on another.
Deepak Chopra, Lanza’s coauthor in the article, is known for making bold claims about the nature of the universe. He peddles a form of new-age Hinduism. Chopra’s ideas about a conscious universe are derived from an interpretation of Vedic teachings. He supplements this new-age Hinduism with ideas from a minority view among physicists that the Copenhagen Interpretation implies a conscious universe. This view is expounded by Amit Goswami in his book The Self-Aware Universe. In turn, Goswami and his peers were influenced by Fritjof Capra’s book The Tao of Physics, in which the author attempts to reconcile reductionist science with Eastern mystical philosophies. Much of modern quantum mysticism in the popular culture can be traced back to Capra. Chopra’s philosophy is essentially a distillation of Capra’s work combined with a popular marketing strategy to sell all kinds of pseudoscientific garbage.
Considering Chopra’s reputation in the scientific community for making absurd quack claims about every subject under the sun, one must wonder about the strange pairing between the two writers. With Lanza’s experience in biomedical research, he could not possibly be in agreement with Chopra’s brand of holistic healing and quantum mysticism. Rather, it seems likely that this is an arrangement of convenience. If you look at what drives the two men, a mutually reinforced disenchantment with Darwin’s ideas emerges as a strong motive behind the pairing. Both Chopra and Lanza are disillusioned with a certain perceived implication of Darwinian evolution on human existence – that the meaning of life is inconsequential to the universe. Evolutionary biology upholds the materialist view of modern science that consciousness is a product of purely inanimate matter assembling in highly complex states. Such a view is disillusioning to anyone who craves a more central role for the human ego in determining one’s reality. The view that human life is central to existence is found in most philosophical and religious traditions. This view is so fundamental to our nature that we can say it is an intuitive reaction to the very condition of being conscious. It has traditionally been the powerful driving force behind philosophers, poets, priests, mystics and scholars of history. Darwin dismantled the idea in one clean stroke. Therefore, Darwin became the enemy. The entire theory of biocentrism is an attempt to ingrain the idea of human destiny into popular science.
The title of Chopra and Lanza’s article is “Evolution Reigns, but Darwin Outmoded”. This may mislead you into thinking that the article is about new discoveries in biological evolution. On reading the article, however, it becomes apparent that the authors are not talking about biological evolution at all. It is relevant to note that not once in their article do they say how Darwin has been outmoded.
Towards the end of their article, Chopra and Lanza say:
Interestingly, Chopra has demonstrated his dislike and ignorance of biological evolution multiple times. Here are some prize quotations from the woo-master himself (skip these if you feel an aneurysm coming):
Chopra’s brand of mysticism gets its claimed legitimacy from science and its virulence from discrediting science’s core principles. He continues this practice through his association with Robert Lanza. Both Chopra and Lanza seem to be disillusioned by the perceived emptiness of a non-directional evolutionary reality. Chopra has invested much time and effort in promoting the idea that consciousness is a property of the universe itself. He finds in Lanza a keen mind with an inclination towards a similar dislike for a perceived lack of anthropocentric meaning in the nature of biological life as described by Darwin’s theory of evolution by natural selection.
10. Conclusions
Let us recapitulate the main points:
(a) Space and time exist, even though they are relative and not absolute.
(b) Modern quantum theory, long after the now-discredited Copenhagen interpretation, is consistent with the idea of an objective universe that exists without a conscious observer.
(c) Lanza and Chopra misunderstand and misuse the anthropic principle.
(d) The biocentrism approach does not provide any new information about the nature of consciousness, and relies on ignoring recent advances in understanding consciousness from a scientific perspective.
(e) Both authors show thinly-veiled disdain for Darwin, while not actually addressing his science in the article. Chopra has demonstrated his utter ignorance of evolution multiple times.
Modern physics is a vast and multi-layered web that stretches across the entire edifice of the natural sciences. All the natural sciences, and all truths that exist in the material world, are interrelated, held together by the mathematical reality of physics. Fundamental theories in physics are supported by multiple lines of evidence from many different scientific disciplines, developed and tested over decades. Clearly, those who propose new theories that purport to redefine fundamental assumptions or paradigms in physics have their work cut out for them. Our contention is that the theory of biocentrism, if analysed properly, does not hold up to scrutiny. It is not the paradigm change that it claims to be. It is also our view that one can find much meaning, beauty and purpose in a naturalistic view of the universe, without having to resort to mystical notions of reality.
Dr. Vinod Kumar Wadhawan is a Raja Ramanna Fellow at the Bhabha Atomic Research Centre, Mumbai, and an Associate Editor of the journal Phase Transitions.
Editor’s Note: This article has been cited by P.Z. Myers at Pharyngula and Steven Novella at Neurologica, and has been reposted at RichardDawkins.net..
“It is almost irresistible for humans to believe that we have some special relation to the universe, that human life is not just a more-or-less farcical outcome of a chain of accidents reaching back to the first three minutes, but that we were somehow built in from the beginning.”
-Steven Weinberg
“You are here to enable the divine purpose of the universe to unfold. That is how important you are.”1. Introduction
-Eckhart Tolle
The impulse to see human life as central to the existence of the universe is manifested in the mystical traditions of practically all cultures. It is so fundamental to the way pre-scientific people viewed reality that it may be, to a certain extent, ingrained in the way our psyche has evolved, like the need for meaning and the idea of a supernatural God. As science and reason dismantle the idea of the centrality of human life in the functioning of the objective universe, the emotional impulse has been to resort to finer and finer misinterpretations of the science involved. Mystical thinkers use these misrepresentations of science to paint over the gaps in our scientific understanding of the universe, belittling, in the process, science and its greatest heroes.
In their recent article in The Huffington Post, biologist Robert Lanza and mystic Deepak Chopra put forward their idea that the universe is itself a product of our consciousness, and not the other way around as scientists have been telling us. In essence, these authors are re-inventing idealism, an ancient philosophical concept that fell out of favour with the advent of the scientific revolution. According to the idealists, the mind creates all of reality. Many ancient Eastern and Western philosophical schools subscribe to this idealistic notion of the nature of reality. In the modern context, idealism has been supplemented with a brand of quantum mysticism and relabeled as biocentrism. According to Chopra and Lanza, this idea makes Darwin’s theory of the biological evolution and diversification of life insignificant. Both these men, although they come from different backgrounds, have independently expressed these ideas before with some popular success. In the article under discussion their different styles converge to present a uniquely mystical and bizarre worldview, which we wish to debunk here.
2. Biocentrism Misinterprets Several Scientifically Testable Truths
The scientific background to the biocentrism idea is described in Robert Lanza’s book Biocentrism: How Life and Consciousness Are the Keys to Understanding the True Nature of the Universe, in which Lanza proposes that biology and not physics is the key to understanding the universe. Vital to his proposal is the idea that the universe does not really exist unless it is being observed by a conscious observer. To support this idea, Lanza makes a series of claims:
(a) Lanza questions the conventional idea that space and time exist as objective properties of the universe. In doing this, he argues that space and time are products of human consciousness and do not exist outside of the observer. Indeed, Lanza concludes that everything we perceive is created by the act of perception.
The intent behind this argument is to help consolidate the view that subjective experience is all there is. However, if you dig into what Lanza says it becomes clear that he is positioning the relativistic nature of reality to make it seem incongruous with its objective existence. His reasoning relies on a subtle muddling of the concepts of subjectivity and objectivity. Take, for example, his argument here:
“Consider the color and brightness of everything you see ‘out there.’ On its own, light doesn’t have any color or brightness at all. The unquestionable reality is that nothing remotely resembling what you see could be present without your consciousness. Consider the weather: We step outside and see a blue sky – but the cells in our brain could easily be changed so we ‘see’ red or green instead. We think it feels hot and humid, but to a tropical frog it would feel cold and dry. In any case, you get the point. This logic applies to virtually everything.“There is only some partial truth to Lanza’s claims. Color is an experiential truth – that is, it is a descriptive phenomenon that lies outside of objective reality. No physicist will deny this. However, the physical properties of light that are responsible for color are characteristics of the natural universe. Therefore, the sensory experience of color is subjective, but the properties of light responsible for that sensory experience are objectively true. The mind does not create the natural phenomenon itself; it creates a subjective experience or a representation of the phenomenon.
Similarly, temperature perception may vary from species to species, since it is a subjective experience, but the property of matter that causes this subjective experience is objectively real; temperature is determined by the average kinetic energy of the molecules of matter, and there is nothing subjective about that. Give a thermometer to a human and to an ass: they would both record the same value for the temperature at a chosen spot of measurement.
The idea that ‘color’ is a fact of the natural universe has been described by G. E. Moore as a naturalistic fallacy. Also, the idea that color is created by an intelligent creator is a supernaturalistic fallacy. It can be said that the idea that color is created objectively in the universe by the subjective consciousness of the observer is an anthropic fallacy. The correct view is that ‘color’ is the subjective sensory perception by the observer of a certain property of the universe that the observer is a part of.
Time and space receive similar treatment as color and heat in Lanza’s biocentrism. Lanza reaches the conclusion that time does not exist outside the observer by conflating absolute time (which does not exist) with objective time (which does). In 2007 Lanza made his argument using an ancient mathematical riddle known as Zeno’s Arrow paradox. In essence, Zeno’s Arrow paradox involves motion in space-time. Lanza says:
“Even time itself is not exempted from biocentrism. Our sense of the forward motion of time is really the result of an infinite number of decisions that only seem to be a smooth continuous path. At each moment we are at the edge of a paradox known as The Arrow, first described 2,500 years ago by the philosopher Zeno of Elea. Starting logically with the premise that nothing can be in two places at once, he reasoned that an arrow is only in one place during any given instance of its flight. But if it is in only one place, it must be at rest. The arrow must then be at rest at every moment of its flight. Logically, motion is impossible. But is motion impossible? Or rather, is this analogy proof that the forward motion of time is not a feature of the external world but a projection of something within us? Time is not an absolute reality but an aspect of our consciousness.”In a more recent article Lanza brings up the implications of special relativity on Zeno’s Arrow paradox. He writes:
“Consider a film of an archery tournament. An archer shoots an arrow and the camera follows its trajectory. Suddenly the projector stops on a single frame — you stare at the image of an arrow in mid-flight. The pause enables you to know the position of the arrow with great accuracy, but it’s going nowhere; its velocity is no longer known. This is the fuzziness described by in the uncertainty principle: sharpness in one parameter induces blurriness in the other. All of this makes perfect sense from a biocentric perspective. Everything we perceive is actively being reconstructed inside our heads. Time is simply the summation of the ‘frames’ occurring inside the mind. But change doesn’t mean there is an actual invisible matrix called “time” in which changes occur. That is just our own way of making sense of things.”In the first case Lanza seems to state that motion is logically impossible (which is a pre-relativistic view of the paradox) and in the next case he mentions that uncertainty is present in the system (a post-relativistic model of motion). In both cases, however, Lanza’s conclusion is the same – biocentrism is true for time. No matter what the facts about the nature of time, Lanza concludes that time is not real. His model is unfalsifiable and therefore cannot be a part of science. What Lanza doesn’t let on is that Einstein’s special-relativity theory removes the possibility of absolute time, not of time itself. Zeno’s Arrow paradox is resolved by replacing the idea of absolute time with Einstein’s relativistic coupling of space and time. Space-time has an uncertainty in quantum mechanics, but it is not nonexistent. The idea of time as a series of sequential events that we perceive and put together in our heads is an experiential version of time. This is the way we have evolved to perceive time. This experiential version of time seems absolute, because we evolved to perceive it that way. However, in reality time is relative. This is a fundamental fact of modern physics. Time does exist outside of the observer, but allows us only a narrow perception of its true nature.
Space is the other property of the universe that Lanza attempts to describe as purely a product of consciousness. He says “Wave your hand through the air. If you take everything away, what’s left? The answer is nothing. So why do we pretend space is a thing”. Again, Einstein’s theory of special relativity provides us with objective predictions that we can look for, such as the bending of space-time. Such events have been observed and verified multiple times. Space is a ‘thing’ as far as the objective universe is concerned.
Lanza says “Space and time are simply the mind’s tools for putting everything together.” This is true , but there is a difference between being the ‘mind’s tools’ and being created by the mind itself. In the first instance the conscious perception of space and time is an experiential trick that the mind uses to make sense of the objective universe, and in the other space and time are actual physical manifestations of the mind. The former is tested and true while the latter is an idealistic notion that is not supported by science. The experiential conception of space and time is different from objective space and time that comprise the universe. This difference is similar to how color is different from photon frequency. The former is subjective while the latter is objective.
Can Lanza deny all the evidence that, whereas we humans emerged on the scene very recently, our Earth and the solar system and the universe at large have been there all along? What about all the objective evidence that life forms have emerged and evolved to greater and greater complexity, resulting in the emergence of humans at a certain stage in the evolutionary history of the Earth? What about all the fossil evidence for how biological and other forms of complexity have been evolving? How can humans arrogate to themselves the power to create objective reality?
Much of Lanza’s idealism arises from a distrust/incomprehension of mathematics. He writes:
“In order to account for why space and time were relative to the observer, Einstein assigned tortuous mathematical properties to an invisible, intangible entity that cannot be seen or touched. This folly continues with the advent of quantum mechanics.”

Why should the laws of Nature ‘bother’ about whether you can touch something or not? The laws of Nature were there long before Lanza appeared on the scene. Since he cannot visualize how the mathematics describes an objective universe outside of experience, Lanza announces that reality itself does not exist unless created by the act of observation. Some cheek!
(b) Lanza claims that without an external observer, objects remain in a quantum probabilistic state. He conflates this observer with consciousness (which he admits to being “subjective experience”). Therefore, he claims, without consciousness any possible universe will only exist as probabilities. The misunderstanding of quantum theory that Lanza is promoting is addressed further in the article in the section on quantum theory (Section 4.).
(c) The central argument from Lanza is a hard version of the anthropic principle. Lanza says:
“Why, for instance, are the laws of nature exactly balanced for life to exist? There are over 200 physical parameters within the solar system and universe so exact that it strains credulity to propose that they are random — even if that is exactly what contemporary physics baldly suggests. These fundamental constants (like the strength of gravity) are not predicted by any theory — all seem to be carefully chosen, often with great precision, to allow for existence of life. Tweak any of them and you never existed.”

This reveals a total lack of understanding of what the anthropic principle really says. So let us take a good, detailed look at this principle.
3. The Planetary Anthropic Principle
“And the beauty of the anthropic principle is that it tells us, against all intuition, that a chemical model need only predict that life will arise on one planet in a billion billion to give us a good and entirely satisfying explanation for the presence of life here.”

Richard Dawkins, The God Delusion (2007)

The anthropic principle was first enunciated by the mathematician Brandon Carter in 1974. Further elaboration and consolidation came in 1986 in the form of the book The Anthropic Cosmological Principle by Barrow and Tipler. There are quite a few versions of the principle doing the rounds. The scientifically acceptable version, also called the ‘weak’ (or planetary) version, states that the particular universe in which we find ourselves possesses the characteristics necessary for our planet to exist and for life, including human life, to flourish here.
In particle physics and cosmology, we humans have had to introduce ‘best fit’ parameters (fundamental constants) to explain the universe as we see it. Slightly different values for some of the critical parameters would have led to entirely different histories of the cosmos. Why do these parameters have the values they have? According to a differently worded form of the weak version of the anthropic principle stated above: the parameters and the laws of physics can be taken as fixed; it is simply that we humans have appeared in the universe to ask such questions at a time when the conditions were just right for our life.
This version suffices to explain quite a few ‘coincidences’ related to the fact that the conditions for our evolution and existence on the planet Earth happen to be ‘just right’ for that purpose. Life as we know it exists only on planet Earth. Here is a list of favourable necessary conditions for its existence, courtesy Dawkins (2007):
- Availability of liquid water is one of the preconditions for our kind of life. Around a typical star like our Sun, there is an optimum zone (popularly called the ‘Goldilocks zone’), neither so hot that water would evaporate, nor so cold that water would freeze, such that planets orbiting in that zone can sustain liquid water. Our Earth is one such planet. (A rough back-of-the-envelope estimate of this zone is sketched after this list.)
- This optimum orbital zone should be circular or nearly circular. Once again, our Earth fulfils that requirement. A highly elliptical orbit would take the planet sometimes too close to the Sun, and sometimes too far, during its cycle. That would result in periods when water either evaporates or freezes. Life needs liquid water all the time.
- The location of the planet Jupiter in our Solar system is such that it acts like a ‘massive gravitational vacuum cleaner,’ intercepting asteroids that would have been otherwise lethal to our survival.
- Planet Earth has a single relatively large Moon, which serves to stabilize its axis of rotation.
- Our Sun is not a binary star. Binary stars can have planets, but their orbits can get messed up in all sorts of ways, entailing unstable or varying conditions, inimical for life to evolve and survive.
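For readers who want a feel for how the ‘Goldilocks zone’ mentioned above is estimated, here is a rough sketch. It uses only the inverse-square fall-off of starlight; the inner and outer limits (about 0.95 and 1.4 AU for a Sun-like star) are illustrative round numbers of our own choosing, not precise climate-model boundaries or figures from Dawkins:

```python
import math

# Rough habitable-zone estimate: a planet receiving roughly the same stellar
# flux as Earth sits at d = sqrt(L / L_sun) astronomical units, because the
# flux falls off as L / (4 * pi * d^2).  The 0.95 and 1.4 AU limits below are
# illustrative round numbers for a Sun-like star.
def habitable_zone_au(luminosity_in_solar_units, inner_au=0.95, outer_au=1.4):
    scale = math.sqrt(luminosity_in_solar_units)
    return inner_au * scale, outer_au * scale

print(habitable_zone_au(1.0))    # Sun-like star: roughly (0.95, 1.4) AU; Earth at 1 AU qualifies
print(habitable_zone_au(0.04))   # a dim red dwarf: the zone shrinks to roughly (0.19, 0.28) AU
```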
What we have listed above are just some necessary conditions. They are by no means sufficient conditions as well. With all the above conditions available on Earth, another highly improbable set of phenomena occurred, namely the actual origin of life. This origin was a highly improbable (but not impossible) sequence of chemical events, leading to the emergence of a mechanism for heredity. That mechanism came in the form of some kind of genetic molecule, such as RNA. However improbable this was, our existence implies that such an event, or sequence of events, did indeed take place. Once life had originated, Darwinian evolution of complexity through natural selection (which is not a highly improbable set of events) did the rest, and here we are, discussing such questions.
Like the origin of life, another extremely improbable event (or set of events) was the emergence of the sophisticated eukaryotic cell (on which human life is based). We invoke the anthropic principle again to say that, no matter how improbable such an event was statistically, it did indeed happen; otherwise we humans would not be here. The occurrence of all such one-off, highly improbable events can be explained by the anthropic principle.
Before we discuss the cosmological or ‘strong’ version of the anthropic principle, it is helpful to recapitulate the basics of quantum theory.
4. Quantum Theory
In conventional quantum mechanics we use wave functions, ψ, to represent quantum states. The wave function plays a role somewhat similar to that of trajectories in classical mechanics. The Schrödinger equation describes how the wave function of a quantum system evolves with time. This equation predicts a smooth and deterministic time-evolution of the wave function, with no discontinuities or randomness. Just as trajectories in classical mechanics carry a system from one time step to the next, the Schrödinger equation transforms the wave function ψ(t0) at an initial time t0 into its value ψ(t) at a later time t. The physical interpretation of the wave function is that |ψ|² gives the probability (strictly, the probability density) of finding the system at a given point.
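The smooth, deterministic character of this evolution is easy to see numerically. The following minimal sketch propagates a free-particle Gaussian wave packet with the split-step Fourier method; the grid, packet width and momentum are illustrative choices of ours (units with ħ = m = 1), not anything specific to the argument above:

```python
import numpy as np

# Minimal sketch: smooth, deterministic evolution of a free-particle wave
# function psi(x, t) under the Schrodinger equation, via the split-step
# Fourier method.  Units with hbar = m = 1; all parameters are illustrative.
N, L = 1024, 200.0
dx = L / N
x = np.arange(N) * dx - L / 2
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Initial state: a Gaussian packet centred at x = -20 with mean momentum 1.
psi = np.exp(-((x + 20.0) ** 2) / 10.0 + 1j * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)      # normalise: integral of |psi|^2 = 1

dt, steps = 0.05, 400
phase = np.exp(-1j * (k ** 2) * dt / 2.0)          # exact free evolution in momentum space
for _ in range(steps):
    psi = np.fft.ifft(phase * np.fft.fft(psi))

# The evolution is unitary and continuous: total probability stays 1, and the
# packet has simply drifted (and spread); no collapse happens anywhere.
print(np.sum(np.abs(psi) ** 2) * dx)               # ~1.0
print(x[np.argmax(np.abs(psi) ** 2)])              # packet centre has drifted to near x = 0
```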
An elementary particle can exist as a superposition of two or more alternative quantum states. Suppose its energy can take two values, E1 and E2, and let u1 and u2 denote the corresponding wave functions. The general state is then a superposition ψ = c1u1 + c2u2, and the quantum interpretation is that a measurement of the energy will yield E1 or E2 with probabilities |c1|² and |c2|² respectively. Thus we move from a pure state to a mixture or ensemble of possible outcomes. What is more, something striking happens when we humans observe such a system, say an electron, with an instrument. At the moment of observation, the wave function appears to collapse into only one of the possible alternative states, the superposition of which was described by the wave function before the event of measurement. That is, a quantum state becomes decoherent when measured or monitored by the environment. This amounts to the introduction of a discontinuity in the smooth evolution of the wave function with time.
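The probabilistic content of such a superposition can be made concrete with a toy calculation: pick some amplitudes c1 and c2, and sample repeated energy measurements with the Born-rule probabilities |c1|² and |c2|². The amplitudes below are arbitrary illustrative numbers, not anything from Lanza or from the literature discussed here:

```python
import numpy as np

# Toy illustration of measurement statistics for psi = c1*u1 + c2*u2.
c1 = np.sqrt(0.3)                          # illustrative amplitude for state u1
c2 = np.sqrt(0.7) * np.exp(1j * 0.5)       # illustrative amplitude (arbitrary phase) for u2
probs = [abs(c1) ** 2, abs(c2) ** 2]       # Born rule: probabilities, not amplitudes

rng = np.random.default_rng(0)
outcomes = rng.choice(["E1", "E2"], size=100_000, p=probs)
print((outcomes == "E1").mean(), (outcomes == "E2").mean())   # ~0.3 and ~0.7
```

Each individual run yields one definite outcome; the superposition shows up only in the statistics over many runs.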
This apparent collapse of the wave function does not follow from the mathematics of the Schrödinger equation, and was, in the early stages of the history of quantum mechanics, introduced ‘by hand’ as an additional postulate. That is, one chose to introduce the interpretation that there is a collapse of the wave function to the state actually detected by the measurement in the ‘real’ world, to the exclusion of other states represented in the original wave function. This (unsatisfactory) dualistic way of dealing with the measurement problem was developed by Bohr and Heisenberg in Copenhagen around 1927, and is known as the Copenhagen interpretation.
Another basic notion in standard quantum mechanics is that of time asymmetry. In classical mechanics we make the reasonable-looking assumption that, once we have formulated the Newtonian (or equivalent) equations of motion for a system, the future states are determined by the initial conditions. In fact, we can not only calculate the future states from the initial conditions, we can even calculate the initial conditions if the future states are known. This is time symmetry. In quantum mechanics, the uncertainty principle destroys this time symmetry. There can now be a one-to-many relationship between initial and final conditions: two identical particles, in identical initial conditions, need not be observed to be in the same final conditions at a later time.
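The classical half of this statement is easy to demonstrate: integrate a system forward with a time-reversible scheme, then run the same equations backwards, and the initial conditions come back. A minimal sketch with a harmonic oscillator (all parameters illustrative):

```python
# Classical time symmetry: run a harmonic oscillator forward with a
# time-reversible (leapfrog) integrator, then run the same law backwards
# and recover the initial state.  All numbers are illustrative.
def leapfrog(x, v, dt, steps, omega=1.0):
    for _ in range(steps):
        v += -0.5 * dt * omega ** 2 * x
        x += dt * v
        v += -0.5 * dt * omega ** 2 * x
    return x, v

x0, v0 = 1.0, 0.0
xf, vf = leapfrog(x0, v0, dt=0.01, steps=1000)     # forward in time
xb, vb = leapfrog(xf, vf, dt=-0.01, steps=1000)    # the same equations, run backwards
print(xf, vf)    # some later state
print(xb, vb)    # ~ (1.0, 0.0): the initial conditions are recovered
```

No analogous backward reconstruction is available for the outcome of an individual quantum measurement.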
Multiple universes
Hugh Everett, during the mid-1950s, expressed total dissatisfaction with the Copenhagen interpretation: ‘The Copenhagen Interpretation is hopelessly incomplete because of its a priori reliance on classical physics … as well as a philosophic monstrosity with a “reality” concept for the macroscopic world and denial of the same for the microcosm.’ The Copenhagen interpretation implied that equations of quantum mechanics apply only to the microscopic world, and cease to be relevant in the macroscopic or ‘real’ world.
Everett offered a new interpretation, which presaged the modern ideas of quantum decoherence. Everett’s ‘many worlds’ interpretation of quantum mechanics is now taken more seriously, although not entirely in its original form. He simply let the mathematics of the quantum theory show the way for understanding logically the interface between the microscopic world and the macroscopic world. He made the observer an integral part of the system being observed, and introduced a universal wave function that applies comprehensively to the totality of the system being observed and the observer. This means that even macroscopic objects exist as quantum superpositions of all allowed quantum states. There is thus no need for the discontinuity of a wave-function collapse when a measurement is made on the microscopic quantum system in a macroscopic world.
Everett examined the question: What would things be like if no contributing quantum states to a superposition of states are banished artificially after seeing the results of an observation? He proved that the wave function of the observer would then bifurcate at each interaction of the observer with the system being observed. Suppose an electron can have two possible quantum states A and B, and its wave function is a linear superposition of these two. The evolution of the composite or universal wave function describing the electron and the observer would then contain two branches corresponding to each of the states A and B. Each branch has a copy of the observer, one which sees state A as a result of the measurement, and the other which sees state B. In accordance with the all-important principle of linear superposition in quantum mechanics, the branches do not influence each other, and each embarks on a different future (or a different ‘universe’), independent of the other. The copy of the observer in each universe is oblivious to the existence of other copies of itself and other universes, although the ‘full reality’ is that each possibility has actually happened. This reasoning can be made more abstract and general by removing the distinction between the observer and the observed, and stating that, at each interaction among the components of the composite system, the total or universal wave function would bifurcate as described above, giving rise to multiple universes or many worlds.
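A toy calculation shows the essential point: once the observer’s state is entangled with the electron’s, the two branches no longer interfere. The sketch below is only a caricature of Everett’s universal wave function, with a two-state electron, a three-state ‘observer’, and states we have chosen purely for illustration:

```python
import numpy as np

# Toy Everett branching: an electron in (|A> + |B>)/sqrt(2) interacts with an
# 'observer' that starts in |ready>.  After the interaction the composite wave
# function is a sum of two branches that no longer interfere.
A, B = np.array([1, 0]), np.array([0, 1])
ready, sawA, sawB = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])

# Universal wave function after the measurement-like interaction:
# (|A>|saw A> + |B>|saw B>) / sqrt(2)
psi = (np.kron(A, sawA) + np.kron(B, sawB)) / np.sqrt(2)
rho = np.outer(psi, psi.conj())                    # density matrix of the whole system

# Reduced state of the electron alone: trace out the observer.
rho_elec = rho.reshape(2, 3, 2, 3).trace(axis1=1, axis2=3)
print(np.round(rho_elec, 3))
# -> diag(0.5, 0.5) with zero off-diagonal terms: the two branches behave as
#    independent 'worlds'; no cross terms survive to produce interference.
```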
A modern and somewhat different version of this interpretation of quantum mechanics introduces the term quantum decoherence to rationalise how the branches become independent, and how each turns out to represent our classical or macroscopic reality. Quantum computing is now a reality, and it is based on such understanding of quantum mechanics.
Parallel histories
Richard Feynman formulated a different version of the many-worlds idea, and spoke in terms of multiple or parallel histories of the universe (rather than multiple worlds or universes). This work, done after World War II, fetched him the Nobel Prize in 1965. Feynman, whose path integrals are well known in quantum mechanics, suggested that, when a particle goes from a point P to a point Q, it does not have just a single unique trajectory or history. [It should be noted that, although we normally associate the word 'history' only with past events, history in the present context can refer to both the past and the future. A history is merely a narrative of a time sequence of events - past, present, or future.] Feynman proposed that every possible path or trajectory from P to Q in space-time is a candidate history, with an associated probability amplitude. Every such trajectory contributes an amplitude and a phase. The path integral for going from P to Q is obtained as the weighted vector sum, or integration, over all such individual paths or histories. Feynman’s rules for assigning the amplitudes and phases happen to be such that the effects of all paths except the one actually observed for a macroscopic object get cancelled out. For sub-microscopic particles, of course, the cancellation is far from complete, and there are indeed competing histories or parallel universes.
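The flavour of this cancellation can be caricatured numerically: add up phases exp(iS/ħ) over randomly wiggled paths between two fixed points. For a large mass-to-ħ ratio (the macroscopic case) the detours cancel almost completely; for a small ratio they do not. This is only a crude illustration of the stationary-phase idea, not Feynman’s actual rules, and every number in it is an arbitrary choice of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_phase(mass_over_hbar, n_paths=2000, n_steps=20, wiggle=0.05):
    """Average of exp(i*S/hbar) over randomly wiggled paths from (t=0, x=0) to (t=1, x=1).

    For a free particle the action of a discretised path is
    S = sum over segments of (m/2) * (dx/dt)^2 * dt.
    'mass_over_hbar' plays the role of m/hbar; all other numbers are illustrative.
    """
    dt = 1.0 / n_steps
    t = np.linspace(0, 1, n_steps + 1)
    total = 0.0 + 0.0j
    for _ in range(n_paths):
        x = t.copy()                                           # classical straight-line path
        x[1:-1] += wiggle * rng.standard_normal(n_steps - 1)   # random detour
        action = 0.5 * mass_over_hbar * np.sum(np.diff(x) ** 2) / dt
        total += np.exp(1j * action)
    return abs(total) / n_paths

print(mean_phase(mass_over_hbar=1.0))     # close to 1: detours still add up coherently
print(mean_phase(mass_over_hbar=1e6))     # close to 0: the detours' phases cancel out
```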
Quantum Darwinism
A different resolution to the problem of interfacing the microscopic quantum description of reality with macroscopic classical reality is offered by what has been called ‘quantum Darwinism.’ This formalism does not require the existence of an observer as a witness of what occurs in the universe. Instead, the environment is the witness. A selective witness at that, rather like natural selection in Darwin’s theory of evolution. The environment determines which quantum properties are the fittest to survive (and be observed, for example, by humans). Many copies of the fitter quantum property get created in the entire environment (‘redundancy’). When humans make a measurement, there is a much greater chance that they would all observe and measure the fittest solution of the Schrödinger equation, to the exclusion (or near exclusion) of other possible outcomes of the measurement experiment.
In a computer experiment, Blume-Kohout and Zurek (2007) demonstrated quantum Darwinism (http://www.arxiv.org/abs/0704.3615) in zero-temperature quantum Brownian motion (QBM). A harmonic oscillator system (S) is made to evolve in contact with a bath (ε) of harmonic oscillators. The question asked is: How much information about S can an observer extract from the bath ε? ε consists of subenvironments εi; i = 1, 2, 3, … Each observer has exclusive access to a fragment F consisting of m subenvironments. The so-called ‘mutual information entropy’ is calculated from the quantum mutual information between S and F.
An important result of this approach is that substantial redundancy appears in the QBM model; i.e., multiple redundant records get made in the environment. As the authors state, this redundancy accounts for the objectivity and the classicality; the environment is a witness, holding many copies of the evidence. When humans make a measurement, it is most likely that they would all interact with one of the stable recorded copies, rather than directly with the actual quantum system, and thus observe and measure the classical value, to the exclusion of other possible outcomes of the measurement experiments.
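The quantity at the heart of such studies, the quantum mutual information I(S:F) = H(S) + H(F) - H(S,F), can be illustrated on a toy state in a few lines. The example below uses a small GHZ-like state in which one ‘system’ qubit is redundantly recorded by three ‘environment’ qubits; it is our own toy example, not the Blume-Kohout and Zurek QBM model:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Entropy of a density matrix, in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep, dims):
    """Trace out all subsystems except those listed in 'keep' (dims = local dimensions)."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    traced = 0
    for i in range(n):
        if i not in keep:
            rho = np.trace(rho, axis1=i - traced, axis2=i - traced + (n - traced))
            traced += 1
    d = int(np.prod([dims[j] for j in keep]))
    return rho.reshape(d, d)

# Toy 'redundant record': one system qubit S plus three environment qubits,
# all correlated as (|0000> + |1111>)/sqrt(2).  Every fragment of the
# environment then carries a full (classical) copy of S's value.
n_qubits = 4
psi = np.zeros(2 ** n_qubits)
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
dims = [2] * n_qubits

rho_S = partial_trace(rho, [0], dims)
rho_F = partial_trace(rho, [1], dims)        # a one-qubit fragment of the environment
rho_SF = partial_trace(rho, [0, 1], dims)
mutual_info = (von_neumann_entropy(rho_S) + von_neumann_entropy(rho_F)
               - von_neumann_entropy(rho_SF))
print(mutual_info)   # ~1 bit: even a small fragment already 'knows' the system's state
```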
Gell-Mann’s coarse-graining interpretation of quantum mechanics
For this interpretation, let us first understand the difference between fine-grained and coarse-grained histories of the universe. Completely fine-grained histories of the universe are histories that give as complete a description as possible of the entire universe at every moment of time. Consider a simplified universe in which elementary particles have no attributes other than positions and momenta, and in which the indistinguishability among particles of a given type is ignored. Then one kind of fine-grained history of this simplified universe would be one in which the positions of all the particles are known at all times. Unlike classical mechanics, which is deterministic, quantum mechanics is probabilistic. One might think that we can write down the probability for each possible fine-grained history. But this is not so. It turns out that the ‘interference’ terms between fine-grained histories do not usually cancel out, and we cannot assign probabilities to the fine-grained histories. One has to resort to coarse-graining to be able to assign probabilities to the histories. Murray Gell-Mann and coworkers applied this approach to a description of the quantum-mechanical histories of the universe, and showed that the interference terms get cancelled out on coarse-graining. We can then work directly with probabilities, rather than having to keep track of interfering wave-function amplitudes, and there is no problem interfacing the microscopic description with the macroscopic world of measurements.
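A two-slit toy example shows why fine-grained histories resist probability assignments, and how decoherence (a physical form of coarse-graining) restores them. All the amplitudes below are illustrative choices of ours:

```python
import numpy as np

# Toy two-slit version of the 'interference between histories' problem.
# Screen positions and slit phases are illustrative numbers.
x = np.linspace(-5, 5, 11)
a1 = np.exp(1j * 2.0 * x) / np.sqrt(2)    # amplitude for the history 'went through slit 1'
a2 = np.exp(-1j * 2.0 * x) / np.sqrt(2)   # amplitude for the history 'went through slit 2'

p_fine_sum = np.abs(a1) ** 2 + np.abs(a2) ** 2     # naive 'probability per history'
p_actual = np.abs(a1 + a2) ** 2                    # what quantum mechanics actually gives

# The two disagree: interference (cross) terms spoil any assignment of
# probabilities to the fine-grained 'which slit' histories.
print(np.max(np.abs(p_actual - p_fine_sum)))       # ~1: far from zero

# Coarse-graining (here: an environment records the slit, making the branches
# orthogonal) kills the cross terms, and the probabilities add up again.
env1, env2 = np.array([1, 0]), np.array([0, 1])    # orthogonal environment records
p_decohered = (np.abs(np.outer(a1, env1) + np.outer(a2, env2)) ** 2).sum(axis=1)
print(np.max(np.abs(p_decohered - p_fine_sum)))    # ~0: the histories now decohere
```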
Gell-Mann also emphasized the point that the term ‘many worlds or universes’ should be substituted by ‘many alternative histories of the universe’, with the further proviso that the many histories are not ‘equally real’; rather they have different probabilities of occurrence.
5. The Cosmological Anthropic Principle
“Some quantum cosmologists like to talk about a so-called anthropic principle that requires conditions in the universe to be compatible with the existence of human beings. A weak form of the principle states merely that the particular branch history on which we find ourselves possesses the characteristics necessary for our planet to exist and for life, including human life, to flourish here. In that form, the anthropic principle is obvious. In its strongest form, however, such a principle will supposedly apply to the dynamics of the elementary particles and the initial conditions of the universe, somehow shaping those fundamental laws so as to produce human beings. That idea seems to me so ridiculous as to merit no further discussion.”

Murray Gell-Mann, The Quark and the Jaguar

Much confusion and uncalled-for debate has been engendered by the (scientifically unsound) ‘strong’ or cosmological version of the anthropic principle, which is sometimes stated as follows: since the universe is compatible with the existence of human beings, the dynamics of the elementary particles and the initial conditions of the universe must have been such that they shaped the fundamental laws so as to produce human beings. This is clearly untenable. There are no grounds for the existence of a ‘principle’ like this. A scientifically untenable principle is no principle at all. No wonder the Nobel laureate Gell-Mann, as quoted above, described it as ‘so ridiculous as to merit no further discussion.’
The chemical elements needed for life were forged in stars, and then flung far into space through supernova explosions. This required a certain amount of time. Therefore the universe cannot be younger than the time needed for stars to form, forge the heavier elements, and explode. The universe cannot be too old either, because then all the stars would be ‘dead’. Thus, life can exist only in a universe that has roughly the age we measure it to have, and the physical constants we measure them to have.
It has been calculated that if the laws and fundamental constants of our universe had been even slightly different from what they are, life as we know it would not have been possible. Rees (1999), in the book Just Six Numbers, listed six fundamental constants which together determine the universe as we see it. Their finely-tuned mutual values are such that even a slightly different set of these six numbers would have been inimical to our emergence and existence. Consideration of just one of these constants, namely the strength of the strong interaction (which determines the binding energies of nuclei), is enough to make the point. It is defined as the fraction of the mass of an atom of hydrogen which is released as energy when hydrogen atoms fuse to form an atom of helium. Its value is 0.007, which is just right (give or take a small acceptable range) for any known chemistry to exist, and no chemistry means no life.

Our chemistry is based on reactions among the 90-odd elements. Hydrogen is the simplest among them, and the first in the periodic table. All the other elements in our universe were synthesised, starting from hydrogen, by nuclear fusion in stars. This fusion depends on the strength of the strong (nuclear) interaction, and also on the ability of a system to overcome the intense Coulomb repulsion between the fusing nuclei; intense temperatures are one way of overcoming that repulsion. A small star like our Sun is hot enough only for the production of helium from hydrogen. The other elements in the periodic table must have been made in the much hotter interiors of stars larger than our Sun. These big stars may explode as supernovas, sending their contents out as stellar dust clouds, which eventually condense, creating new stars and planets, including our own Earth. That is how our Earth came to have the 90-odd elements so crucial to the chemistry of our life.

The value 0.007 for the strong interaction determined the upper limit on the mass number of the elements we have here on Earth and elsewhere in our universe. A value of, say, 0.006 would mean that the universe would contain nothing but hydrogen, making any chemistry whatsoever impossible. And if it were too large, say 0.008, all the hydrogen would long since have fused into heavier elements. No hydrogen would mean no life as we know it; in particular, there would be no water without hydrogen.
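The 0.007 figure itself is a one-line calculation from standard atomic masses: it is the fraction of the mass of four hydrogen atoms that is released when they fuse into one helium-4 atom.

```python
# Back-of-the-envelope check of the '0.007' quoted from Rees.
# Atomic masses in unified atomic mass units (standard reference values).
m_hydrogen = 1.007825   # 1H atom
m_helium4 = 4.002602    # 4He atom

efficiency = (4 * m_hydrogen - m_helium4) / (4 * m_hydrogen)
print(round(efficiency, 4))   # ~0.0071, i.e. roughly 0.007
```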
The same holds for the other finely-tuned fundamental constants of our universe. The existence of humans has become possible because the values of the fundamental constants are what they are; had they been different, we would not exist. That is how the anthropic principle (planetary or cosmological, weak or strong) should be stated, and the weak version is the only valid version of the principle.
But why does the universe have these values for the fundamental constants, and not some other set of values? Different physicists and cosmologists have tried to answer this question in different ways, and the investigations go on. One possibility is that there are multiple universes, and we are in one just right for our existence. Another idea is based on string theory.
6. String Theory and the Anthropic Principle
A ‘string’ is a fundamental 1-dimensional object, postulated to replace the concept of structureless elementary particles. Different vibrational modes of a string give rise to the various elementary particles (including the graviton). String theory aims to unite quantum mechanics and the general theory of relativity, and is thus expected to be a unified ‘theory of everything.’ If this theory makes sufficient headway, the six fundamental constants identified by Rees may turn out to be inter-related, and not free to take arbitrary values. But this still leaves open the question asked above: why this particular set of fundamental constants, and not another? Hawking (1988) asked an even deeper question: ‘Even if there is only one possible unified theory, it is just a set of rules and equations. What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to all the bother of existing?’
Our universe is believed to have started at the big bang, shown by Hawking and Penrose in the 1970s to be a singularity in space-time (some physicists disagree with the singularity idea). The evidence for this seems to be that the universe has been expanding ever since. It so happens that we have no knowledge of the set of initial boundary conditions at the moment of the big bang. Moreover, as Hawking and Hertog said in 2006, things could be a little simpler ‘if one knew that the universe was set going in a particular way in either the finite or infinite past.’ Therefore Hawking and coworkers argued that it is not possible to adopt the bottom-up approach to cosmology, wherein one starts at the beginning of time, applies the laws of physics, calculates how the universe would evolve with time, and then just hopes that it turns out to be something like the universe we live in. Consequently a top-down approach has been advocated by them (remember, this is just a model), wherein we start with the present and work our way backwards into the past. According to Hawking and Hertog (2006), there are many possible histories (corresponding to successive unpredictable bifurcations in phase space), and the universe has lived them all. Not only that, there is also an anthropic angle to this scenario:
As mentioned above, Stephen Hawking and Roger Penrose had proved that the moment of the big bang was a singularity, i.e. a point where gravity must have been so strong as to curve space and time in an unimaginably strong way. Under such extreme conditions our present formulation of general relativity would be inadequate. A proper quantum theory of gravity is still an elusive proposition. But, as suggested by Hawking and Hertog in 2006, because of the small size of the universe at and just after the big bang, quantum effects must have been very important. The origin of the universe must have been a quantum event. This statement has several weird-looking consequences. The basic idea is to incorporate the consequences of Heisenberg’s uncertainty principle when considering the evolution of the (very small) early universe, and combine it with Feynman’s sum-over-histories approach. This means that, starting from configuration A, the early universe could go not only to B, but also to other configurations B’, B”, etc. (as permitted by the quantum-mechanical uncertainty principle), and one has to do a sum-over-histories for each of the possibilities AB, AB’, AB”, … And each such branch corresponds to a different evolution of the universe (with different cosmological and other fundamental constants), only one or a few of them corresponding to a universe in which we humans could evolve and survive. This provides a satisfactory answer to the question: ‘why does the universe have these values for the fundamental constants, and not some other set of values?’.
The statement ‘humans exist in a universe in which their existence is possible’ is practically a tautology. How can humans exist in a universe which has values of fundamental constants which are not compatible with their existence?! Stop joking, Dr. Lanza.
The other possible universes (or histories) also exist, each with a specific probability. Our observations of the world are determining the history that we see. The fact that we are there and making observations assigns to ourselves a particular history.
Let A denote the beginning of time (if there is any), and B denote now. The state of the universe at point B can be broadly specified by recognizing the important aspects of the world around us: There are three large dimensions in space, the geometry of space is almost flat, the universe is expanding, etc. The problem is that we have no way of specifying point A. So how do we perform the various sums over histories? An interesting point of the quantum mechanical sums-over-histories theory is that the answers come out right when we work with imaginary (or complex) time, rather than real time. The work of Hawking and Hertog (2006) has shown that the imaginary-time approach is crucial for understanding the origin of the universe. When the histories of the universe are added up in imaginary time, time gets transformed into space. It follows from this work that when the universe was very small, it had four spatial dimensions, and none for time. In terms of the history of the universe, it means that there is no point A, and that the universe has no definable starting point or initial boundary conditions. In this no-boundary scheme of things, we can only start from point B and work our way backwards (the top-down approach).
This approach also solves the fine-tuning problem of cosmology. Why does the universe have a particular inflation history? Why does the cosmological constant (which determines the rate of inflation) have the value it has? Why did the early universe have a particular ‘fine-tuned’ initial configuration and a specific (fast) initial rate of inflation? In the no-boundary scenario there is no need to define an initial state, and there is no need for any fine tuning. What is more, the very fact of inflation, as against no inflation, follows from the theory as the most probable scenario.
String theory defines a near-infinity of multiple universes. This goes well with the anthropic-principle idea that, out of the multiple choices for the fundamental constants (including the cosmological constant) for each such universe, we live in the universe that makes our existence possible. In the language of string theory, there are multiple ‘pocket’ universes that branch off from one another, each branch having a different set of fundamental constants. Naturally, we are living in one with just the right fundamental constants for our existence.
While many physicists feel uncomfortable with this unconfirmed world view, Hawking and Hertog (2006) have pointed out that the picture of a never-ending proliferation of pocket universes is meaningful only from the point of view of an observer outside a universe, and that situation (observer outside a universe) is impossible. This means that parallel pocket universes can have no effect on an actual observer inside a particular pocket.
Hawking’s work has several other implications as well. For example, in his scheme of things the string theory ‘landscape’ is populated by the set of all possible histories. All possible versions of a universe exist in a state of quantum superposition. When we humans choose to make a measurement, a subset of histories that share the specific property measured gets selected. Our version of the history of the universe is determined by that subset of histories. No wonder the cosmological anthropic principle holds. How can any rational person use the anthropic principle to justify biocentrism?
Hawking and Hertog’s theory can be tested by experiment, although that is not going to be easy. Its invocation of Heisenberg’s uncertainty principle during the early moments of the universe, and the consequent quantum fluctuations, leads to a prediction of specific fluctuations in the cosmic microwave background, and in the early spectrum of gravitational waves. These predicted fluctuations arise because there is an uncertainty in the exact shape of the early universe, which is influenced, among other things, by other histories with similar geometries. Unprecedented precision will be required for testing these predictions. In any case, gravitational waves have not even been detected yet.
Meanwhile, good scientists are having a serious debate about the correct interpretation of the data available about life and the universe. While this goes on, non-scientists and charlatans cannot be permitted to twist facts to satisfy the human hunger for the feel-good or feel-important factor. The scientific method is such that scientists feel good when they are doing good science.
7. Wolfram’s Universe
Stephen Wolfram has emphasized the role of computational irreducibility when it comes to trying to understand our universe. The notion of probability (as opposed to certainty) is inherent in our worldview if quantum theory is a valid theory. Wolfram argues that this may not be a correct worldview. He does not rule out the possibility that there really is just a single, definite, rule for our universe which, in a sense, deterministically specifies how everything in our universe happens. Things only look probabilistic because of the high degree of complexity involved, particularly regarding the very structure and connectivity of space and time. It is computational irreducibility that sometimes makes certain things look incomprehensible or probabilistic, rather than deterministic. Since we are restricted to doing the computational work within the universe, we cannot expect to ‘outrun’ the universe, and derive knowledge any faster than just by watching what the universe actually does.
Wolfram points out that there is relief from this tyranny of computational irreducibility only in the patches or islands of computational reducibility. It is in those patches that essentially all of our current physics lies. In natural science we usually have to be content with making models that are approximations. Of course, we have to try to make sure that we have managed to capture all the features that are essential for some particular purpose. But when it comes to finding an ultimate model for the universe, we must find a precise and exact representation of the universe, with no approximations. This would amount to reducing all physics to mathematics. But even if we could do that and know the ultimate rule, we are still going to be confronted with the problem of computational irreducibility. So, at some level, to know what will happen, we just have to watch and see history unfold.
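A standard example that captures the flavour of this argument is an elementary cellular automaton such as Rule 30: the rule is a single deterministic line, yet its central column looks statistically random, and as far as anyone knows there is no shortcut to predicting it other than running it. (The grid size and step count below are arbitrary illustrative choices.)

```python
import numpy as np

# Rule 30: a deterministic elementary cellular automaton whose centre column
# looks statistically random.  Knowing the rule does not let you shortcut the
# computation; you have to run it, which is the flavour of Wolfram's
# 'computational irreducibility'.
def rule30(width=201, steps=100):
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1                        # a single black cell in the middle
    centre_column = []
    for _ in range(steps):
        centre_column.append(int(row[width // 2]))
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = left ^ (row | right)             # the Rule 30 update in one line
    return centre_column

print("".join(map(str, rule30())))             # an irregular-looking 0/1 sequence
```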
8. The Nature of Consciousness
One criticism of biocentrism comes from the philosopher Daniel Dennett, who says “It looks like an opposite of a theory, because he doesn’t explain how consciousness happens at all. He’s stopping where the fun begins.”
The logic behind this criticism is obvious. Without a descriptive explanation for consciousness and how it ‘creates’ the universe, biocentrism is not useful. In essence, Lanza calls for the abandonment of modern theoretical physics and its replacement with a magical solution. Here are a few questions that one might ask of the idea:
- What is this consciousness?
- Why does this consciousness exist?
- What is the nature of the interaction between this consciousness and the universe?
- Is the problem of infinite regression applicable to consciousness itself?
- Even if Lanza’s interpretation of the anthropic principle is a valid argument against modern theoretical physics, does the biocentric model of consciousness create a bigger ontological problem than the one it attempts to solve?
“Consciousness cannot exist without a living, biological creature to embody its perceptive powers of creation.”

How can consciousness create the universe if it doesn’t exist? How can the “living, biological creature” exist if the universe has not been created yet? It becomes apparent that Lanza is muddling the meaning of the word ‘consciousness.’ In one sense he equates it to subjective experience that is tied to a physical brain. In another, he assigns to consciousness a spatio-temporal logic that exists outside of physical manifestation. In this case, the above questions become: 1. What is this spatio-temporal logic?; 2. Why does this spatio-temporal logic exist? and so on…
Daniel Dennett’s criticism of biocentrism centres on Lanza’s non-explanation of the nature of consciousness. In fact, even from a biological perspective Lanza’s conception of consciousness is unclear. For example, he consistently equates consciousness with subjective experience while stressing its independence from the objective universe (see Lanza’s quote below). This is an appeal to the widespread but erroneous intuition towards Cartesian Dualism. In this view, consciousness (subjective experience) belongs to a different plane of reality than the one on which the material universe is constructed. Lanza requires this general definition of consciousness to construct his theory of biocentrism. He uses it in the same way that Descartes used it: as a semantic tool to deconstruct reality. In fact, Lanza’s theory of biocentrism is a sophisticated non-explanation for the ‘brain in a vat’ problem that has plagued philosophers for centuries. However, instead of subscribing to Cartesian Dualism, he attempts a Cartesian Monism by invoking quantum mechanics. To be exact, his view is Monistic Idealism (the idea that consciousness is everything), but the Cartesian bias is an essential element in his arguments.
In a dualistic or idealistic context, Lanza’s definition of consciousness as subjective experience may be acceptable. However, it is incomplete from a scientific perspective. The truth is that there are difficulties in analysing consciousness empirically. In scientific terms, consciousness is a ‘hard problem’, meaning that its wholly subjective nature places it beyond direct objective study. Lanza exploits this difficulty to deny science any understanding of consciousness.
Lanza trivializes the current debate in the scientific community about the nature of consciousness when he says:
“Neuroscientists have developed theories that might help to explain how separate pieces of information are integrated in the brain and thus succeed in elucidating how different attributes of a single perceived object, such as the shape, colour, and smell of a flower, are merged into a coherent whole. These theories reflect some of the important work that is occurring in the fields of neuroscience and psychology, but they are theories of structure and function. They tell us nothing about how the performance of these functions is accompanied by a conscious experience; and yet the difficulty in understanding consciousness lies precisely here, in this gap in our understanding of how a subjective experience emerges from a physical process.”

This criticism of the lack of a scientific consensus on the nature of consciousness is empty, considering that Lanza himself proposes no actual mechanism for consciousness, but still places it at the centre of his theory of the universe.
There is no need to view consciousness as such a mystery. There are some contemporary models of consciousness that are quite explanatory, presenting promising avenues for studying how the brain works. Daniel Dennett’s Multiple Drafts Model is one. According to Dennett, there is nothing mystical about consciousness. It is an illusion created by tricks in the brain. The biological machinery behind the tricks that create the illusion of consciousness is the product of successive evolutionary processes, beginning with the development of primitive physiological reactions to external stimuli. In modern humans, consciousness consists of a highly dynamic process of information exchange in the brain. Multiple sets of sensory information, memories and emotional cues are competing with each other in the brain at all times, but at any one instant only one set of these factors dominates. At the next instant, a slightly different set is dominant. This creates the illusion of a continuous stream of thoughts and experiences, leading to the intuition that consciousness comprises the entirety of the voluntary mental function of the individual. There are other materialist models, such as Marvin Minsky’s view of the brain as an emotion machine, that provide us with ways of approaching the problem from a scientific perspective without resorting to mysticism.
Consciousness is not something that requires a restructuring of objective reality. It is a subjective illusion on one level, and the mechanistic outcome of evolutionary processes on another.
“A human being is a part of a whole, called by us ‘universe’, a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest… a kind of optical delusion of his consciousness.”

Albert Einstein

9. Deepak Chopra Finds an Ally for Hijacking and Distorting Scientific Truths
Deepak Chopra, Lanza’s coauthor in the article, is known for making bold claims about the nature of the universe. He peddles a form of new-age Hinduism. Chopra’s ideas about a conscious universe are derived from an interpretation of Vedic teachings. He supplements this new-age Hinduism with ideas from a minority view among physicists that the Copenhagen Interpretation implies a conscious universe. This view is expounded by Amit Goswami in his book The Self-Aware Universe. In turn, Goswami and his peers were influenced by Fritjof Capra’s book The Tao of Physics, in which the author attempts to reconcile reductionist science with Eastern mystical philosophies. Much of modern quantum mysticism in popular culture can be traced back to Capra. Chopra’s philosophy is essentially a distillation of Capra’s work combined with a popular marketing strategy to sell all kinds of pseudoscientific garbage.
Considering Chopra’s reputation in the scientific community for making absurd quack claims about every subject under the sun, one must wonder about the strange pairing between the two writers. With Lanza’s experience in biomedical research, he could not possibly be in agreement with Chopra’s brand of holistic healing and quantum mysticism. Rather, it seems likely that this is an arrangement of convenience. If you look at what drives the two men, a mutually reinforced disenchantment with Darwin’s ideas emerges as a strong motive behind the pairing. Both Chopra and Lanza are disillusioned with a certain perceived implication of Darwinian evolution on human existence – that the meaning of life is inconsequential to the universe. Evolutionary biology upholds the materialist view of modern science that consciousness is a product of purely inanimate matter assembling in highly complex states. Such a view is disillusioning to anyone who craves a more central role for the human ego in determining one’s reality. The view that human life is central to existence is found in most philosophical and religious traditions. This view is so fundamental to our nature that we can say it is an intuitive reaction to the very condition of being conscious. It has traditionally been the powerful driving force behind philosophers, poets, priests, mystics and scholars of history. Darwin dismantled the idea in one clean stroke. Therefore, Darwin became the enemy. The entire theory of biocentrism is an attempt to ingrain the idea of human destiny into popular science.
The title of Chopra and Lanza’s article is “Evolution Reigns, but Darwin Outmoded”. This may mislead you to think that the article is about new discoveries in biological evolution. On reading the article, however, it becomes apparent that the authors are not talking about biological evolution at all. It is relevant to note that not once in their article do they say how Darwin has been outmoded.
Towards the end of their article, Chopra and Lanza say:
“Darwin’s theory of evolution is an enormous over-simplification. It’s helpful if you want to connect the dots and understand the interrelatedness of life on the planet — and it’s simple enough to teach to children between recess and lunch. But it fails to capture the driving force and what’s really going on.”

There is irony in dismissing the most brilliant and explanatory scientific theory in all of biology as an ‘over-simplification’, by over-simplifying it as a way to “connect the dots and understand the interrelatedness of life on the planet”. Contrast this with what Richard Dawkins said: “In 1859, Charles Darwin announced one of the greatest ideas ever to occur to a human mind: cumulative evolution by natural selection.” The irony of Chopra and Lanza’s statement is compounded by the fact that biocentrism does not address biological evolution at all! The authors are simply interested in belittling the uncomfortable implications of evolutionary theory, while not actually saying anything about the theory itself! We can safely assume that Lanza and Chopra are more concerned with the implications of Darwinian evolution for the human ego than with the theory of evolution by natural selection itself.
Interestingly, Chopra has demonstrated his dislike and ignorance of biological evolution multiple times. Here are some prize quotations from the woo-master himself (skip these if you feel an aneurysm coming):
“To say the DNA happened randomly is like saying that a hurricane could blow through a junk yard and produce a jet plane. “
“How does nature take creative leaps? In the fossil record there are repeated gaps that no “missing link” can fill. The most glaring is the leap by which inorganic molecules turned into DNA. For billions of years after the Big Bang, no other molecule replicated itself. No other molecule was remotely as complicated. No other molecule has the capacity to string billions of pieces of information that remain self-sustaining despite countless transformations into all the life forms that DNA has produced. “
“If mutations are random, why does the fossil record demonstrate so many positive mutations–those that lead to new species–and so few negative ones? Random chance should produce useless mutations thousands of times more often than positive ones. “
“Evolutionary biology is stuck with regard to simultaneous mutations. One kind of primordial skin cell, for example, mutated into scales, fur, and feathers. These are hugely different adaptations, and each is tremendously complex. How could one kind of cell take three different routs purely at random? “
“If design doesn’t imply intelligence, why are we so intelligent? The human body is composed of cells that evolved from one-celled blue-green algae, yet that algae is still around. Why did DNA pursue the path of greater and greater intelligence when it could have perfectly survived in one-celled plants and animals, as in fact it did? “
“Why do forms replicate themselves without apparent need? The helix or spiral shape found in the shell of the chambered nautilus, the centre of sunflowers, spiral galaxies, and DNA itself seems to be such a replication. It is mathematically elegant and appears to be a design that was suited for hundreds of totally unrelated functions in nature. “
“What happens when simple molecules come into contact with life? Oxygen is a simple molecule in the atmosphere, but once it enters our lungs, it becomes part of the cellular machinery, and far from wandering about randomly, it precisely joins itself with other simple molecules, and together they perform cellular tasks, such as protein-building, whose precision is millions of times greater than anything else seen in nature. If the oxygen doesn’t change physically–and it doesn’t–what invisible change causes it to acquire intelligence the instant it contacts life? “
“How can whole systems appear all at once? The leap from reptile to bird is proven by the fossil record. Yet this apparent step in evolution has many simultaneous parts. It would seem that Nature, to our embarrassment, simply struck upon a good idea, not a simple mutation. If you look at how a bird is constructed, with hollow bones, toes elongated into wing bones, feet adapted to clutching branches instead of running, etc., none of the mutations by themselves give an advantage to survival, but taken altogether, they are a brilliant creative leap. Nature takes such leaps all the time, and our attempt to reduce them to bits of a jigsaw puzzle that just happened to fall into place to form a beautifully designed picture seems faulty on the face of it. Why do we insist that we are allowed to have brilliant ideas while Nature isn’t? “
“Darwin’s iron law was that evolution is linked to survival, but it was long ago pointed out that “survival of the fittest” is a tautology. Some mutations survive, and therefore we call them fittest. Yet there is no obvious reason why the dodo, kiwi, and other flightless birds are more fit; they just survived for a while. DNA itself isn’t fit at all; unlike a molecule of iron or hydrogen, DNA will blow away into dust if left outside on a sunny day or if attacked by pathogens, x-rays, solar radiation, and mutations like cancer. The key to survival is more than fighting to see which organism is fittest. “
“Competition itself is suspect, for we see just as many examples in Nature of cooperation. Bees cooperate, obviously, to the point that when a honey bee stings an enemy, it acts to save the whole hive. At the moment of stinging, a honeybee dies. In what way is this a survival mechanism, given that the bee doesn’t survive at all? For that matter, since a mutation can only survive by breeding–”survival” is basically a simplified term for passing along gene mutations from one generation to the next-how did bees develop drones in the hive, that is, bees who cannot and never do have sex? “
“How did symbiotic cooperation develop? Certain flowers, for example, require exactly one kind of insect to pollinate them. A flower might have a very deep calyx, or throat, for example than only an insect with a tremendously long tongue can reach. Both these adaptations are very complex, and they serve no outside use. Nature was getting along very well without this symbiosis, as evident in the thousands of flowers and insects that persist without it. So how did numerous generations pass this symbiosis along if it is so specialized? “
“Finally, why are life forms beautiful? Beauty is everywhere in Nature, yet it serves no obvious purpose. Once a bird of paradise has evolved its incredibly gorgeous plumage, we can say that it is useful to attract mates. But doesn’t it also attract predators, for we simultaneously say that camouflaged creatures like the chameleon survive by not being conspicuous. In other words, exact opposites are rationalized by the same logic. This is no logic at all. Non-beautiful creatures have survived for millions of years, so have gorgeous ones. The notion that this is random seems weak on the face of it.”

Now comes the kicker. All these quotes that demonstrate a complete lack of understanding of biology, let alone the theory of evolution by natural selection, are from one single article, as compiled by P. Z. Myers in his blog post in 2005. Since then, Chopra has continued to spout his ignorance of evolution over and over.
Chopra’s brand of mysticism gets its claimed legitimacy from science and its virulence from discrediting science’s core principles. He continues this practice through his association with Robert Lanza. Both Chopra and Lanza seem to be disillusioned by the perceived emptiness of a non-directional evolutionary reality. Chopra has invested much time and effort in promoting the idea that consciousness is a property of the universe itself. He finds in Lanza a keen mind with a similar dislike for the perceived lack of anthropocentric meaning in biological life as described by Darwin’s theory of evolution by natural selection.
10. Conclusions
Let us recapitulate the main points:
(a) Space and time exist, even though they are relative and not absolute.
(b) Modern quantum theory has moved well beyond the unsatisfactory Copenhagen interpretation, and is consistent with the idea of an objective universe that exists without a conscious observer.
(c) Lanza and Chopra misunderstand and misuse the anthropic principle.
(d) The biocentrism approach does not provide any new information about the nature of consciousness, and relies on ignoring recent advances in understanding consciousness from a scientific perspective.
(e) Both authors show thinly-veiled disdain for Darwin, while not actually addressing his science in the article. Chopra has demonstrated his utter ignorance of evolution multiple times.
Modern physics is a vast and multi-layered web. All the other natural sciences, and all truths about the material world, are interrelated, held together by the mathematical framework of physics. Fundamental theories in physics are supported by multiple lines of evidence from many different scientific disciplines, developed and tested over decades. Clearly, those who propose new theories that purport to redefine fundamental assumptions or paradigms in physics have their work cut out for them. Our contention is that the theory of biocentrism, when analysed properly, does not hold up to scrutiny. It is not the paradigm change that it claims to be. It is also our view that one can find much meaning, beauty and purpose in a naturalistic view of the universe, without having to resort to mystical notions of reality.
Dr. Vinod Kumar Wadhawan is a Raja Ramanna Fellow at the Bhabha Atomic Research Centre, Mumbai, and an Associate Editor of the journal PHASE TRANSITIONS.