Delayed access to such manna as The New York Review of Books, The New Yorker and Scientific American means that by the time I get to hold their pages in my hands, the most obviously interesting topics have already been flagged, debated and annotated to death on the web. There is a silver lining, though: Sometimes I find subtler matters of interest to blog, and these can turn out to be quite rewarding.
I think this is one such post. But you’ll have to bear with me.
Remember Rule 30? I blogged it once (OK, twice), and made a little Flash application to illustrate what it can do: Create complexity by applying a simple fixed rule (algorithm) over and over again to the individual components of an ordered system. It’s a shocking result, because it’s so unexpected and powerful: There are no shortcuts to finding out what the system will look like after n applications of the rule — there is no formula or equation we can use to describe the state of the system using just n as the input. We really do have to run the program from scratch if we want to know what it will look like at time n.
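If you'd rather run it than take my word for it, Rule 30 fits in a few lines of Python. This is a minimal sketch (the function names are mine, not Wolfram's): each cell looks only at itself and its two neighbors, and the rule number 30 encodes, bit by bit, the next state for each of the eight possible three-cell neighborhoods.

```python
def rule30_step(cells):
    """Apply Rule 30 once to a row of 0/1 cells (edges padded with 0)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        left, center, right = padded[i], padded[i + 1], padded[i + 2]
        neighborhood = (left << 2) | (center << 1) | right  # a number 0..7
        # Bit `neighborhood` of the rule number 30 is the cell's next state.
        out.append((30 >> neighborhood) & 1)
    return out

def rule30(n, width=17):
    """Run n steps from a single live cell in the middle; return all rows."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(n):
        row = rule30_step(row)
        rows.append(row)
    return rows

for r in rule30(8):
    print("".join("#" if c else "." for c in r))
```

Starting from a single live cell, the rows quickly develop the jagged, unpredictable triangle the rule is famous for — and no amount of staring at the rule itself would let you write down row n directly.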
In other words, equations are useless for predicting the state of a system if a process like Rule 30 holds sway over it. This is the big idea in Stephen Wolfram’s A New Kind of Science, and it is why Rule 30 is the poster child for that book. (I blogged NKS here when it first came out.)
This past week, Wolfram put the entire brick of a book online in a free, searchable edition, so you now have absolutely no excuse not to check it out. Wolfram argues that the complexity we see in nature is best explained not by equations, but by looking for very simple processes that operate locally, using local inputs and simple transformational rules. This is the “new kind of science” he proposes.
Bear with me.
Remember Fotini Markopoulou-Kalamara? I blogged her here, when Scientific American did this profile on her a year ago. (Markopoulou has an audio lecture of hers online that is actually on the verge of understandable, with hand-drawn slides. In it she gives us a taste of how a very simple algebra of spacetime can translate into notions of cause and effect, using set theory. It takes about an hour of your time, not including pauses to figure out WTF she just said.) She had been working on an emerging theory called Quantum Loop Gravity (QLG), which competes with string theory to recast the general theory of relativity (which does a great job describing gravity) in terms of quantum theory (which until now has had nothing to say about gravity). This is the holy grail of physics.
What I found remarkable at the time is that in building this framework, she and her colleagues had been working from the perspective of the smallest possible units of space, looking for very simple processes that operate locally. In other words, they had been practicing what Wolfram was now calling a “new kind of science”, and the results looked encouraging.
Where am I going with this?
Quantum Loop Gravity went mainstream with the January 2004 issue of Scientific American, where it got the cover. (Unfortunately, the accompanying article is only abstracted free online, but here in Stockholm, at least, the issue is still at the newsstand. Alternatively, you could read this article on QLG by Smolin, or else be blown away by the accompanying video mini-interview.) The article is well worth the read: it is written by Lee Smolin, who together with Carlo Rovelli and others pretty much fathered QLG. The conceptual leap they made was to stop assuming, as the general theory of relativity does, that spacetime is smooth and continuous. Instead, they proposed that it is composed simply of nodes connected by lines, and they calculated that these nodes occupy a smallest possible unit of volume, a Planck length cubed, about 10^-99 cm^3, and that changes to this network of nodes happen in increments of a fixed smallest possible unit of time, Planck time, about 10^-43 seconds. Particles, by the way, are nothing more than patterns of these nodes “travelling” in tandem, bumping into each other, much like a gigantic game of Life. And, importantly, this network of nodes is not in anything; it is the universe.
So the universe is a giant distributed computer, running at 10^34 Gigahertz, if you will. It took a while to figure out the implications of this, but now QLGers have a bona fide soon-to-be-testable prediction: the speed of light should vary ever so slightly depending on its energy, and this satellite, scheduled for a 2006 launch, should be able to tease out the slight difference in arrival time for photons that have travelled for billions of years.
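Those numbers are easy to sanity-check yourself. Here is the back-of-the-envelope arithmetic, starting from nothing but the textbook values of ħ, G and c (the variable names are my own; the constants are rounded, so treat the results as approximate):

```python
import math

# Fundamental constants (SI, rounded)
hbar = 1.0546e-34  # reduced Planck constant, J*s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

planck_length_m = math.sqrt(hbar * G / c**3)    # ~1.6e-35 m
planck_time_s = planck_length_m / c             # ~5.4e-44 s
planck_volume_cm3 = (planck_length_m * 100)**3  # ~4e-99 cm^3
update_rate_ghz = 1 / planck_time_s / 1e9       # ~1.9e34 GHz

print(f"Planck length: {planck_length_m:.2e} m")
print(f"Planck volume: {planck_volume_cm3:.2e} cm^3")
print(f"Planck time:   {planck_time_s:.2e} s")
print(f"Update rate:   {update_rate_ghz:.2e} GHz")
```

The volume comes out around 10^-99 cm^3, the tick around 10^-43 seconds, and the inverse of that tick around 10^34 gigahertz — matching the figures above.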
So three years from now, we may have some instant Nobel winners on our hands. Meanwhile, string theory is looking tired and inelegant, requiring the existence of many extra dimensions and particles nobody manages to find.
But the fact that time may be discrete at a fundamental level — massive though this conceptual shift would be — is only half my point. The other half point is contained in the Jan 15, 2004 edition of The New York Review of Books, in an article written by Oliver Sacks. (That issue also contains a wonderful short by J.M. Coetzee.)
Sacks exhibits his usual freak show of patients with bizarre neurological disorders (Where does he get them?). This time, his patients had a problem with their visual perception, in that it sometimes slowed down enormously, so that they no longer perceived their surroundings continuously, but instead as a series of disjointed images, much like a flickering film or even a slideshow.
Their experiences were the starting point for research that is now converging on the conclusion that for all of us, visual perception is not continuous, but occurs in discrete successive states, or “snapshots”. Usually these are updated fast enough, and fade slowly enough, that the effect is an illusion of continuous motion, unless the brain is damaged in specific ways. For good measure, it now also appears that consciousness occurs in discrete successive states, called “perceptual moments,” that last about a tenth of a second. And here too, the mechanism is a network, this time of neurons, each acting by applying rules to local inputs, such as signals from surrounding neurons. It’s a “new kind of science” yet again.
That’s my other half point, then: I quite simply find it remarkable that the mind “samples” its sensory inputs, and derives conscious states based on them, at discrete time intervals.
Taken together, it would appear that both fundamental physics and neuroscience are going to nearly simultaneously jettison the notion of a continuous flow of time in favor of discrete increments. Soon, it may be the new received wisdom that not only does the universe update itself at discrete intervals, we update our perception of the universe at discrete intervals.