
Why Only Now? (by Robert Perry)

[Thanks to Robert Perry for posing this question - RM]

Recently, I've been watching some YouTube videos on children's apparent memories of past lives. I like this one, on the work of Jim Tucker, author of Life Before Life (an excellent book), who has carried on Ian Stevenson's groundbreaking research into children's past-life memories at the University of Virginia. Also this one, a 1992 documentary on Stevenson's work. And finally, this one, about an American boy who has memories of being a World War II fighter pilot.

As I watch these, the question that comes up for me is: Why hasn't this phenomenon been known in the West for centuries? It's clear that children in the West have these memories. They aren't that uncommon. One of the videos offers an estimate of one in every 500 children. Indeed, the daughter of a friend of mine had apparent past-life memories. Presumably, these things have been happening forever. So why did it take one man, Ian Stevenson, to bring this phenomenon to light in 1960?

The same question arises about near-death experiences. Why didn't they come into public and professional awareness before the 1970s? Presumably, they have also been happening forever. I asked Dr. Jeffrey Long, author of Evidence of the Afterlife, what percentage of NDEs would have happened in the past, without benefit of medical intervention. His "wild guess" is that "around half of all NDEs happened as a result of modern medical intervention." Given that a 1992 Gallup poll estimated that 5% of Americans have had an NDE, this would give us a rough figure of one in 40 people in pre-modern times, which is still quite an impressive frequency. So why wasn't anyone talking about them?
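To make the arithmetic behind that figure explicit (a back-of-envelope sketch, taking Long's fifty-fifty guess at face value and assuming it applies across history):

\[
5\% \times \tfrac{1}{2} = 2.5\% = \tfrac{1}{40}
\]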

And if you think that we've run out of such discoveries, there is the case of Raymond Moody's shared death experiences, which he has just written about in his 2010 book Glimpses of Eternity. These are where people in the room with a dying person seem to experientially share in that person's transition. This can include apparently passing through the stages of that person's death process (leaving the body, having a life review, passing through a tunnel, entering a celestial landscape) with the dying person. While it's easy to see these as a sub-category of near-death experiences (shared near-death experiences), they don't actually fit, since no one involved is near death: one person is healthy, and the other actually dies. It's a new phenomenon, one that is quite impressive and, according to Moody, also quite prevalent. Yet it too has managed to fly under the radar until very recently.

I would also include here the phenomenon I documented in my 2009 book Signs: A New Approach to Coincidence, Synchronicity, Guidance, Life Purpose, and God's Plan. This is what I call CMPEs (Conjunctions of Meaningfully Parallel Events): extreme synchronicities in which two events happen to occur close together in time and share a long list of parallels, with the story told by those parallels providing commentary on a relevant situation in the person's life. We've just finished a pilot study, soon to be published in Psychiatric Annals, documenting the occurrence of CMPEs in the lives of the study's participants. This bolsters what I have long seen evidence of: that CMPEs happen to people all over, even if not to everyone. Yet you will be hard-pressed to find examples of this phenomenon in the literature on coincidence and synchronicity.

Why, you have to wonder, did these phenomena (and I'm sure we could cite many others) go undocumented for so long? In the case of NDEs at least, I don't think the answer is terribly mysterious. It is common to hear NDErs say that they were afraid to tell anyone what they experienced, or that they tried and soon clammed up in the face of harsh or dismissive reactions. Steve Volk's Fringe-ology tells how Elisabeth Kübler-Ross came very close to breaking the story of near-death experiences several years before Raymond Moody did, in a planned (and already written) final chapter to her now-classic book On Death and Dying. Yet she chose not to, afraid that it would kill the chances of her book getting published. Volk puts it more strongly: "Her entire life's work would have been dismissed" (p. 32).

The fact is that our culture is uncomfortable with the paranormal. In centuries past, that discomfort, I am sure, came mostly from the religious establishment. Now, I think it comes largely from the scientific establishment.

It makes you wonder what would happen without that stigma. How many more such phenomena would come to light? And what might be the benefits of widespread and well-funded research on all of them? As with the scandal of sexual abuse by priests, we might find that we have been sitting on something of far larger proportions than anyone has suspected, driven underground by our collective unwillingness to face it. We might have to revise our terminology to reflect the fact that the paranormal is, in fact, normal. And more than that, we might have to revise our whole picture of reality.


The Man With the Hole in His Head

One of psi-sceptics' most popular arguments is that anecdotal evidence can't be relied on. If you agree with that, you can ignore much of the case for psi (the whole human-experience side of it). With that out of the way, the experimental data can be waved away on the grounds of methodological flaws and wishful thinking.

I've been reading up on neuroscience recently, and have started to notice how often the Phineas Gage story crops up. Gage was the nineteenth-century railway worker who miraculously survived an explosion in 1848 that sent an iron bar 43 inches long and more than an inch in diameter right through his skull. Although he suffered massive damage to his frontal lobes, he remained conscious and eventually recovered, still able to function normally in most respects (although the injury did for him in the end: he died 11 years later). However, he underwent a major personality change. Having been a solid, dependable sort, he became roguish and disreputable, given to drinking and swearing, to the extent that his friends no longer knew him as the man he had been.

The story is told to demonstrate the dependence of the personality on the brain, and, more specifically, the frontal lobe as the seat of emotion. It's a colourful piece of evidence given in support of the orthodox view that the mind is what the brain does. If the structure of the brain is compromised, then so too will the personality be.

The case is big in popular culture: apparently there are rock bands named after him. It's also much referred to in academic books about cognitive psychology and neuroscience. I did a quick search on Questia and came up with 122 mentions. I can't tell in detail what each mention consists of, but from the excerpts the majority seem to cite it as demonstrating the dependence of personality on the brain. And it continues to be influential; for instance, it's a key piece of evidence in Damasio's controversial book Descartes' Error, which proposes that rationality is largely guided by emotions.

But how true is the story? According to author and psychologist Malcolm Macmillan, who did some sleuthing, the before-and-after contrast has been greatly exaggerated.

The main testimony comes from Dr John Harlow, the physician who attended Gage an hour after the accident and more or less put him back together. In 1868, eight years after his patient's death, he wrote:

The equilibrium or balance, so to speak, between his intellectual faculties and animal propensities, seems to have been destroyed. He is fitful, irreverent, indulging at times in the grossest profanity (which was not previously his custom), manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operation, which are no sooner arranged than they are abandoned in turn for others. His mind was radically changed, so decidedly that his friends and acquaintances said that he was 'no longer Gage'.

Yet Harlow said little about any of this when he first publicly discussed the case, while Gage was still alive. And in 1850, two years after the accident, a Harvard professor of surgery stated that Gage was "completely recovered in body and mind", making no mention of any personality change.

In subsequent accounts by other writers, Harlow's later testimony was embellished: Gage was now said to have become a drunkard and a boastful exhibitionist, and to have suffered an absolute lack of foresight, none of which Harlow mentioned. In fact, most of what has been said about Gage subsequent to his physical recovery, Macmillan says bluntly, is "fable".

Coincidentally, I've been reading Marilynne Robinson's excellent Absence of Mind, an attack on the view of humanity represented in what she calls the "parascientific literature" of Dawkins, Wilson, Pinker, Dennett, etc. (She's actually a novelist, not one I've read, but after this I'll certainly be checking out her fiction as well.) On the subject of Gage, she asks whether it is really so remarkable that a man who has had a crowbar pass through his brain should start to act in ways that other people find less than reasonable.

Are we really to believe that Gage was not in pain during the years until his death? How did that terrible exit wound in his skull resolve? No conclusion can be drawn, except that in 1848 a man reacted to severe physical trauma more or less as a man living in 2009 might be expected to do...

(Actually, the comic writer Rich Hall makes pretty much the same point, in a laugh-out-loud way, in a sketch in Things Snowball. I'd quote from it, but I think I threw the book out because I kept reading it when I was supposed to be working.)

As for the attention the story gets from neuroscience, Robinson says, "It's as if there were a Mr. Hyde in us all that would emerge spluttering expletives if our frontal lobes weren't there to restrain him."

Nicely put.

Anecdotal evidence (that's to say, reported human experiences) is absolutely valid in scientific discourse. Surely most scientists accept this; psychology and medical science in particular wouldn't get far without it. It's just in anti-psi writing that it's so suspect. What matters is that a story be properly validated. That's not the case here, and it's interesting to see such a key element of the materialist worldview being illustrated by a story with such slender foundations.


Book Review: Free Radicals, by Michael Brooks

I've been camping in the Welsh hills, to the gentle sound of sheep baa-ing and rain pattering on the flysheet. Plenty of opportunity to catch up on some reading. One book I particularly enjoyed was Free Radicals: The Secret Anarchy of Science, on a topic that anyone who knows about psi-research should find interesting.

Brooks is a science writer whose previous book, 13 Things That Don't Make Sense, argued that a lot of what is taken for granted in cosmology and biology is open to doubt. (He raised eyebrows by including homeopathy, but that's another story.) This new book aims to dispel the public myth that science is an orderly and polite business. It may be so in China, which runs on Confucian principles of harmony, but that could be one reason why groundbreaking discoveries tend not to come from China. It's the adversarial system, which first emerged in ancient Greece, that produces the best ideas.

The book is entertaining, covers a lot of ground, and, speaking for myself, added to my knowledge of contemporary science. It's also a rather extraordinary story. Looked at closely, scientific skullduggery is not a pretty sight. Brooks quotes Carl Sagan:

Anyone who witnesses the advance of science first-hand sees an intensely personal undertaking. A few saintly personalities stand out amidst a roiling sea of jealousies, ambition, backbiting, suppression of dissent, and absurd conceits. In some fields, highly productive fields, such behavior is almost the norm.

Take the case of Arthur Eddington and Subrahmanyan Chandrasekhar. As a young physics graduate, Chandra was the first to realise that the heaviest stars would eventually 'disappear', collapsing into black holes under the immense pressure of gravity. This flash of insight came while he was making the sea voyage to England to study at Cambridge, and five years later he revealed his theory at the Royal Astronomical Society. In this he was helped by Eddington, then the grand old man of British astronomy. But Eddington was setting him up. Immediately after Chandra's talk he stood up and ridiculed the idea that a star could disappear, calling it "stellar buffoonery" and adding, 'I think there should be a law of Nature to prevent a star behaving in this absurd way'.

Eddington was so respected that, since he thought Chandra's idea was rubbish, so did everyone else. At least they did in public: some RAS members privately told Chandra they thought he had a case, but lacked the gumption to dissent openly.

Why did Eddington behave like this? One possible reason is that Chandra's maths interfered with his own attempts to discover a Grand Unified Theory. Chandra himself thought it was racism, pure and simple, and in the context of the times that does seem likely. For Eddington, Chandra was a jumped-up darkie from the colonies, not one of us. Years later Chandra got a Nobel prize for the discovery, but by that time he had gone to work in the US, and he avoided sticking his neck out ever again.

Free Radicals describes a lot of this sort of thing. In 1956 three men were awarded the Nobel prize in physics for the invention of the transistor. In fact the achievement belonged to two of them, Walter Brattain and John Bardeen; the third, William Shockley, was their boss. When Shockley realised that his underlings had pipped him to the post, he used his authority to redirect all the lab's resources to developing his own rival device. But Brattain and Bardeen still managed to write the substance of the papers, later described as 'ageless classics', while Shockley merely tacked on a 'forgettable' supplement. He then furiously lobbied his superiors to ensure that he, Shockley, would get most of the glory, fielding the press's questions and ensuring that no pictures of the pair were taken without him in the frame.

A forceful personality can sometimes push through an idea even when proof is lacking. That's the case with the 'prion', proposed by Stanley Prusiner in the course of research into Creutzfeldt-Jakob disease (CJD) and the 'scrapie' that affects sheep and goats. The infectious material didn't seem to be a virus or a bacterium, the two known agents of infection. So what could it be? A mathematician, unconstrained by biological limitations, suggested it might be a protein, disregarding the fact that a protein can't reproduce. Prusiner jumped on this and proposed a third agent of infection, a self-replicating protein, which he dubbed the prion. Its existence has never been experimentally proven, and to this day no one apart from Prusiner can be sure it exists. But Prusiner's loud insistence, backed up by clever rhetorical tricks, has simply worn down the opposition.

I was also struck by the case of Lynn Margulis, Carl Sagan's first wife. She proposed the idea now known as endosymbiosis: that evolutionary novelty arises not from random genetic mutations acted on by environmental factors, the orthodox view, but from two or more organisms co-operating for mutual advantage. Endosymbiosis was eventually found to be richly supported by the fossil record and has become the new orthodoxy, taught in universities. But Margulis had to fight dirty to get it accepted, playing fast and loose with the peer-review system and creating a lot of hostility in the process.

Margulis's willingness to use her position as a member of the National Academy of Sciences to get round the peer-review system helped other scientists to publish seemingly crazy theories. One such theory tackles the odd fact of caterpillars turning into butterflies, and other similar transformations of creatures from a larval stage into a completely different form. On the face of it, caterpillars and butterflies do look like different species, and according to this idea, that's exactly what they are: the hybridization would have come about at some point in the far distant past, when the sperm of one species accidentally fertilized the eggs of another. Biologists aren't keen on the idea, and the theory may never gain acceptance. But the point is that if the rules weren't sometimes broken, radical new ideas like this might never see the light of day.

Against this, Margulis also champions the notion that the HIV virus does not cause AIDS, for which there is little or no evidence. One should be careful, Brooks warns, before assuming that everything a brilliant scientist thinks of is likely to be true. But then again, perhaps we should also be cautious about such cautions. In this context Brooks briefly mentions Brian Josephson, who won a Nobel prize for his insights into the properties of superconductors, but whose 'current ravings about the plausibility of extra-sensory perception seem less well thought through'.

Brooks talks about non-rational insight as a source of scientific discoveries. Einstein, according to his biographer, made his profound discoveries 'in the manner of a mystic'. For him, working everything out logically, by deduction, was 'far beyond the capacity of human thinking'. Brooks also pursues the possible benefit to science of psychedelic visions, much prized by Apple's Steve Jobs and other ground-breaking computer geeks, apparently. Francis Crick was said to have been fascinated by the effects of LSD, although, Brooks rather sadly concedes, there is no evidence it helped him towards the ground-breaking discovery of the structure of DNA. He goes on to relate famous examples of discoveries based on dreams and visions, such as the one that provided the blueprint for Nikola Tesla to construct the self-starting alternating-current motor.

As for the rough-and-tumble and rule-breaking, Brooks thinks that's essential to a healthy pursuit of scientific discovery. He approves of Crick's response to complaints about his appalling treatment of fellow researcher Rosalind Franklin, whose data he and Watson liberally helped themselves to without acknowledging her contribution. Franklin didn't have what it takes, Crick sniffed: too cautious, too determined to be scientifically sound and avoid short cuts.

Science changed, Brooks argues, after World War II, when large numbers of unimaginative drone researchers entered the field. Specialisation has become a curse, with people pursuing smaller and smaller concerns of little interest or relevance. The peer-review system, widely considered a bedrock of science and a reason for its effectiveness, is really a drag, he argues. It was brought in to help manage the sheer quantity of articles submitted for publication, not all of which can be accommodated. But reviewers can be tempted to delay acceptance of an article if it makes their own work redundant, or if they don't like the theory. Considering how easily affronted scientists can be when an orthodoxy is challenged, the system seems positively designed to stifle innovation.

I have heard researchers moan, for instance, about a reviewer who couldn't find flaws in their work, but told the journal editor that the work should be published only if accompanied by this disclaimer: 'The most plausible explanation of these results is that they are somehow wrong'.

As we know, this is a common fate of psi-research papers that are proposed for publication in mainstream publications.

So what's the lesson here? In a general sense, Free Radicals hammers home the point that hostility to radical new ideas is absolutely normal. The road to Stockholm is lined with jeering scientists, as Brooks puts it. So the 'intellectual dishonesty' psi researchers complain of, often with justification, is only to be expected. However outrageous it seems, that's science.

But if radical new ideas need forceful personalities to push them through, how does parapsychology score in that regard? Who, in this field, could be characterised as possessing the lust for glory and sheer bloody-minded egotism to make their ideas accepted?

I don't think that describes any of the first psychic researchers, like Myers, Sidgwick and Gurney, however dedicated and effective they undoubtedly were. It might apply to Joseph Rhine. In the modern day, scientists like Dean Radin and Rupert Sheldrake are certainly persistent, and face down their critics robustly, but I don't see them engaging in skullduggery to advance their agendas (of course, they would say they don't have to). Some psi-researchers, in the process of trying to win mainstream acceptance, seem almost self-effacing, bending over backwards to be conciliatory to their critics. (I'm thinking, for instance, of the late Bob Morris and John Beloff at Edinburgh's Koestler Parapsychology Unit.)

Charles Honorton perhaps comes close, in terms of force of personality: he put the ganzfeld procedure on the map. But the only person I can think of who really matches the profile is Harry Price, the wealthy British businessman-turned-researcher of the inter-war period. Price was hugely ambitious and egotistical, desperate for big cases that would bring him public glory and willing to indulge in skullduggery to advance his career. These days he's viewed in parapsychological circles as a slightly risible figure, but perhaps a nascent discipline struggling to win acceptance needs people with his sort of ambition and chutzpah.

The professional sceptics, now: another matter altogether. Plenty of skullduggery there.