Delayed access to such manna as The New York Review of Books, The New Yorker and Scientific American means that by the time I get to hold their pages in my hands, the most obviously interesting topics have already been flagged, debated and annotated to death on the web. There is a silver lining, though: Sometimes I find subtler matters of interest to blog, and these can turn out to be quite rewarding.
I think this is one such post. But you’ll have to bear with me.
Remember Rule 30? I blogged it once (OK, twice), and made a little Flash application to illustrate what it can do: Create complexity by applying a simple fixed rule (algorithm) over and over again to the individual components of an ordered system. It’s a shocking result, because it’s so unexpected and powerful: There are no shortcuts to finding out what the system will look like after n applications of the rule — there is no formula or equation we can use to describe the state of the system using just n as the input. We really do have to run the program from scratch if we want to know what it will look like at time n.
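Rule 30 is small enough to spell out in a few lines of code. Here is a minimal sketch in Python (my own illustration, not the Flash applet above), using the update rule new = left XOR (center OR right) and printing the first thirty rows starting from a single black cell:

```python
# Rule 30, one row at a time: each cell's next state depends only on itself
# and its two neighbours, via the fixed rule  new = left XOR (center OR right).
# Cells beyond the edges of the row are treated as 0.

def rule30_step(row):
    """Apply Rule 30 once to a list of 0/1 cells."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run(n_steps, width=79):
    row = [0] * width
    row[width // 2] = 1                      # a single black cell in the middle
    for _ in range(n_steps):
        print("".join("#" if cell else " " for cell in row))
        row = rule30_step(row)               # no shortcut: just apply it again

if __name__ == "__main__":
    run(30)   # to see row 30 you really do have to compute rows 1 through 29
```

That loop is the whole point of the post in miniature: the only way in is to run it.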
In other words, equations are useless for predicting the state of a system if a process like Rule 30 holds sway over it. This is the big idea in Stephen Wolfram’s A New Kind of Science, and it is why Rule 30 is the poster child for that book. (I blogged NKS here when it first came out. This past week, Wolfram put the entire brick of a book online, in a free, searchable edition, so now you have absolutely no excuse not to check it out.)
Wolfram argues that the complexity we see in nature is best explained not by equations, but by looking for very simple processes that operate locally, using local inputs and simple transformational rules. This is the “new kind of science” he proposes.
Bear with me.
Remember Fotini Markopoulou-Kalamara? I blogged her here, when Scientific American did this profile on her a year ago. (Markopoulou has an audio lecture online, with hand-drawn slides, that is actually on the verge of understandable. In it she gives us a taste of how a very simple algebra of spacetime can translate into notions of cause and effect, using set theory. It takes about an hour of your time, not including pauses to figure out WTF she just said.) She had been working on an emerging theory called loop quantum gravity (LQG), which competes with string theory to recast the general theory of relativity (which does a great job describing gravity) in terms of quantum theory (which until now had nothing to say about gravity). This is the holy grail of physics.
What I found remarkable at the time is that in building this framework, she and her colleagues had been working from the perspective of the smallest possible units of space, looking for very simple processes that operate locally. In other words, they had been practicing what Wolfram was now calling a “new kind of science”, and the results looked encouraging.
Where am I going with this?
Loop quantum gravity went mainstream with the January 2004 issue of Scientific American, where it got the cover. (Unfortunately, the accompanying article is only abstracted free online, but here in Stockholm, at least, the issue is still at the newsstand. Alternatively, you could read this article on LQG by Smolin, or else be blown away by the accompanying video mini-interview.) The article is well worth the read — it is written by Lee Smolin, who together with Carlo Rovelli and others pretty much fathered LQG. The conceptual leap they made was to stop assuming, as the general theory of relativity does, that spacetime is smooth and continuous. Instead, they proposed that it is composed simply of nodes connected by lines, and they calculated that these nodes occupy a smallest possible unit of volume, a cubed Planck length, about 10^-99 cm^3, and that changes to this network of nodes happen in increments of a fixed smallest possible unit of time, the Planck time, about 10^-43 seconds. Particles, by the way, are nothing more than patterns of these nodes “travelling” in tandem, bumping into each other, much like a gigantic game of Life. And, importantly, this network of nodes is not in anything; it is the universe.
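Those two numbers are easy to check for yourself. Here is a quick back-of-the-envelope calculation in Python (my own arithmetic, not Smolin’s) of the Planck length and Planck time from hbar, G and c, and of the cubed-Planck-length volume quoted above:

```python
# Rough check of the figures quoted above: the Planck length and Planck time,
# built from hbar, G and c, and the "smallest possible volume", a cubed Planck
# length. Values are approximate; this is a sanity check, not a derivation.

import math

hbar = 1.055e-34   # J*s
G    = 6.674e-11   # m^3 kg^-1 s^-2
c    = 2.998e8     # m/s

planck_length = math.sqrt(hbar * G / c**3)      # ~1.6e-35 m
planck_time   = math.sqrt(hbar * G / c**5)      # ~5.4e-44 s  (~10^-43 s)
planck_volume = (planck_length * 100) ** 3      # in cm^3     (~10^-99 cm^3)

print(f"Planck length : {planck_length:.2e} m")
print(f"Planck time   : {planck_time:.2e} s")
print(f"Planck volume : {planck_volume:.2e} cm^3")
```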
So the universe is a giant distributed computer, running at 10^34 gigahertz, if you will. It took a while to figure out the implications of this, but now LQGers have a bona fide, soon-to-be-testable prediction: the speed of light should vary ever so slightly depending on its energy, and this satellite, scheduled for a 2006 launch, should be able to tease out the slight difference in arrival time for photons that have travelled for billions of years.
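To get a feel for why a satellite could hope to measure something so tiny, here is a rough order-of-magnitude sketch. It assumes, purely for illustration, that the speed deficit scales linearly as the photon’s energy divided by the Planck energy; the actual prediction may take a different form, so treat the numbers as a feel for the scale, nothing more:

```python
# Order-of-magnitude sketch (my own arithmetic, not from the article): if a
# photon's speed fell short of c by roughly a factor (E / E_Planck), how late
# would a high-energy gamma-ray arrive, relative to a low-energy one emitted
# at the same instant, after travelling for billions of years?

SECONDS_PER_YEAR = 3.156e7
E_PLANCK_GEV     = 1.22e19      # Planck energy in GeV

def arrival_delay(photon_energy_gev, travel_time_years):
    """Delay in seconds under the assumed linear (E / E_Planck) suppression."""
    travel_time_s = travel_time_years * SECONDS_PER_YEAR
    return (photon_energy_gev / E_PLANCK_GEV) * travel_time_s

# e.g. a 10 GeV gamma-ray that has been travelling for 10 billion years
print(f"{arrival_delay(10, 10e9):.2f} s")      # roughly a quarter of a second
```

Under that (admittedly crude) assumption the delay comes out to a fraction of a second, accumulated over the whole journey, which is the sort of difference a gamma-ray timing experiment could plausibly hope to resolve.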
So three years from now, we may have some instant Nobel winners on our hands. Meanwhile, string theory is looking tired and inelegant, requiring the existence of many extra dimensions and particles nobody manages to find.
But the fact that time may be discrete at a fundamental level — massive though this conceptual shift would be — is only half my point. The other half point is contained in the January 15, 2004 edition of The New York Review of Books (an issue which also contains a wonderful short by J.M. Coetzee), in an article written by Oliver Sacks.
Sacks exhibits his usual freak show of patients with bizarre neurological disorders (Where does he get them?). This time, his patients had a problem with their visual perception, in that it sometimes slowed down enormously, so that they no longer perceived their surroundings continuously, but instead as a series of disjointed images, much like a flickering film or even a slideshow.
Their experiences were the starting point for research that is now converging on the conclusion that for all of us, visual perception is not continuous, but occurs in discrete successive states, or “snapshots”. Usually, these are updated fast enough, and fade slowly enough, for the effect to be an illusion of continuous motion, unless the brain is damaged in specific ways. For good measure, it now also appears that consciousness occurs in discrete successive states, called “perceptual moments,” that last a tenth of a second. And, here too, the mechanism by which all this happens is via a network, this time of neurons, all acting by applying rules to local stimuli, such as surrounding neurons. It’s a “new kind of science” yet again.
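To make the “snapshot” idea concrete, here is a toy sketch (mine, not Sacks’s): sample the position of a smoothly moving dot once per tenth-of-a-second “perceptual moment”, and then again at a much slower rate, which is roughly the slideshow his patients describe:

```python
# Toy illustration of discrete perception: the dot moves continuously, but the
# "perceiver" only gets its position once per moment. Frequent snapshots read
# as smooth motion; infrequent ones read as a jerky slideshow.

def snapshots(duration_s, moment_s, speed=1.0):
    """(time, position) pairs for a dot moving at `speed` units/s,
    sampled once every `moment_s` seconds."""
    n = round(duration_s / moment_s)
    return [(round(i * moment_s, 2), round(i * moment_s * speed, 2))
            for i in range(n + 1)]

print(snapshots(1.0, 0.1))   # a snapshot every 0.1 s: feels continuous
print(snapshots(1.0, 0.5))   # one every half second: a flickering slideshow
```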
That’s my other half point, then: I quite simply find it remarkable that the mind “samples” its sensory inputs, and derives conscious states based on them, at discrete time intervals.
Taken together, it would appear that both fundamental physics and neuroscience are going to nearly simultaneously jettison the notion of a continuous flow of time in favor of discrete increments. Soon, it may be the new received wisdom that not only does the universe update itself at discrete intervals, we update our perception of the universe at discrete intervals.
sure.
This is fascinating stuff, Stefan! I just bought the SA January issue the other day; I haven’t read the article on LQG yet, but I definitely will now.
Continuity is our necessary illusion; who wants to stutter-strobe through life? But reality is grainy, and it’s good to know.
And I think LQG cuts the strings from M theory.
Interesting post, very much in line with the time (sorry) of interdisciplinarity.
Not being in the field, I cannot assess the value of an article I stumbled over in a recent issue of Nature, but it might interest you. In the November 13 issue of Nature, physicist Mitrofanov (Nature 426, p139) suggests a way of setting constraints on the LQG theory by looking at the polarisation of gamma-ray bursts. The example he brings up seems, in his words, to disprove the theory.
“I conclude that, should the polarization measurement be confirmed, quantum gravity effects act with a power that is greater than linearity, or that loop quantum gravity is not viable.” He wants to see more measurements performed to solve the dispute.
In the January 22 issue, a letter to Nature (Nature 427, p287) claims that X-ray measurements also disprove LQG. That is as far as the communication has reached, but more will surely follow.
The first article leaves a window open for a “quadratic space-time”. Whatever that means…
Yes but the Finderbinger Principle states very clearly that the covariance of blue stylophornae with discreet moments of metacaca leads only to fractual dysperplexiosis. Clearly, Stefan, you’re wrong.
This is why we humans invented music.
How the hell were we going to keep all together...?
Bach knew all about this, the crafty old kraut...
🙂
Sulfur (Planck Length), Mercury (Light Speed), Salt (Gravity vs. Radiation, [c^2/G]^1/2).
Mix together these three to make one androgyny. Marry two androgynes, make them stop, and see if they spin (hbar) or not. Put them on a lattice two by two and see if they do not make you and me. Quantum we are and quantum we be, all is made up from just these three.
How very odd. Ten or fifteen years ago I was lying in the tub reading Kierkegaard when it occurred to me that things would make a great deal more sense if time were discrete. The first thing that I thought was that it would take care of Zeno’s paradoxes if the arrow finally reached a point at which it had no choice but to hit the target or else go out of existence, either/or (Kierkegaard again). The hare finally reaches a point at which he must reach an equal place with the tortoise or instantly reduce speed. I say “instantly” because it seemed that the obvious name for the elemental unit of time would be an “instant”. There would have to be a very large number of instants in a second, and I remember thinking that it would be convenient if it turned out to be Avogadro’s number, which was the only very (very) big number I knew. But then I thought it would be possible to calculate that number (or at least a couple of possible multiples) from the supposition that a photon, or light quantum, or structural arrangement of light information events (or whatever) would have to be going out of existence in one instant with “potential” and appearing again in the proximate instant or subsequent universe some distance away, and that this could be a “mechanical” model for the propagation of light through transparent materials without necessarily resorting to a “wave” model. Which is to say that after an initial shearing off of misaligned photon events at the surface (reflection), the subsequent photon events would continue their intermittent adventure in the interstices between the relative atomic structural events of glass, water, oxygen or whatever other “transparent” medium, unimpeded, since the light event incidence was a number evenly divisible in each of these numerically different structures, and that by figuring the possible intervals that might evenly divide into all the structures of all the transparent materials we might discover a number (or multiple thereof) for light (existence) events/sec.
But then my brain got tired.
Later, while lying in another tub, I began thinking about the nature of an individual “instant”. Since we assume that the instant is the essential irreducible unit of time (even within the time/space flexure), it seems that within the universe-instant there cannot be any further reduction of time. Time, then, as “duration”, cannot exist within the universe-instant. We are then generating complete (down to the tiniest possible division) “frozen” universes at the rate of 6×10^23 per second (Avogadro) or one every 10^-43 seconds (Planck). That’s a lot of complete universes to be completely generated and discarded every second. Where do they all go? Since they are frozen, they are obviously not going anywhere. Forget for now the question of where they might have come from in the first place.
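For what it’s worth, those two rates are nowhere near each other; a quick check of the arithmetic (easily reproduced, for instance in Python) shows the Planck rate outrunning the Avogadro guess by roughly twenty orders of magnitude:

```python
# Comparing the two "instants per second" guesses from the tub: Avogadro's
# number versus one instant per Planck time.

AVOGADRO    = 6.022e23          # the tub-side guess, instants per second
PLANCK_TIME = 5.39e-44          # seconds
planck_rate = 1 / PLANCK_TIME   # ~1.9e43 instants per second

print(f"Avogadro guess : {AVOGADRO:.2e} per second")
print(f"Planck rate    : {planck_rate:.2e} per second")
print(f"Ratio          : {planck_rate / AVOGADRO:.1e}")   # ~3e19
```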
I was at a poker party a few years ago with a number of acquaintances from the space sciences and art departments, and after a few too many drinks I announced my lack of enthusiasm for string theory. The last science poker party I was invited to, I’m afraid. Scientists were as eager to hear artists discoursing on scientific matters as artists are to hear scientists weighing in on art. So I’m glad to hear that (at least as of the date of this initial post) that theory is considered to be looking “tired and inelegant”.
One evening relatively recently I was considering this matter of the generation of frozen universes and realized that the whole problem could be resolved if we concede that all possible universes are already in existence. Then we have only to multiply the number of subatomic events in the universe (and any possible combination thereof) by 10^43 (exponentially) to know how many possible (already existing) universes we could possibly be occupying in the next second (beginning with this one as the initial or starting unit, of course, and neglecting for the moment that this one is only one of that same very large number of possible existences belonging to the last second before this, and so on to the beginning of time, whenever that was).
An interesting possibility attendant on this theory is that the succession of universes that we occupy in our individual existence may not be determined entirely by the colossal force of particles barging from world to world by the force of their potential, but also by our individual will. That the succession of universes in which I decided to scratch my nose diverted from the succession of universes in which I decided not to by virtue of my decision, but that all those universes do exist, including the ones in which I won the lottery last week and the others in which I died twenty years ago. What happens to “determinism” if all outcomes already exist, but my complete universe has to spin on its axis to follow me when I decide whether or not I will cross the street?
My brain is hurting again.