Saturday, May 30, 2009
Houlie’s Choice
We were at the county fair, walking along the stables and schmoozing with the horses and their owners. Heather’s always had that girl/horsie thing going; I don’t spend too much time thinking about that.
In this particular fairground, the equestrian stadium is a modest oval track with a surrounding fence and a set of aluminum bleachers at one end — nothing more than you’d see at your average youth baseball field. The stables, a series of outward-facing paddocks with solid wooden gates about chest high, stand a couple hundred yards away, at a right angle to the bleachers.
So as we stand there, checking out the Persons of Hoof, a double-plus-ungood commotion arises from the overflow crowd standing around the bleachers. I take a step in that direction, and a hand grabs my arm — Heather, reminding me of Rule Number One in SAR, EMS, what have you: “scene safety,” making sure the problem that created your N patients doesn’t make you into patient number N+1, and thus worse than useless.
The crowd parts, and we can see a mare, trailing a sulky in the worst possible way — namely, not on its wheels but bouncing all the hell around — run around the far side of the bleachers. She caroms off a tree — I was sure the poor thing would go down then and there, all her limbs mangled, but fortunately no — then, pinball-like, bounces off another tree — this one finally stripping the ruined sulky from the harness — and takes off into the open.
Straight toward us.
We stand there for a moment, our back to the stable gate and with no place to go that seems any safer than where we are.
Now’s the point where I tell you What We Didn’t Know: the panicked horse was, in fact, running from her own sulky. She had never been cart-broken, but she’d been harnessed anyway by a young equestrian who was scheduled to compete in the sulky event and whose regular horse, for some reason, couldn’t. Attached to — chased by — a Horse Eating Thing that she couldn’t outrun, she went nuts. And now she was running for the safest place she knew in that fairground: her paddock.
Directly behind us.
The horse comes right at us; I dive right, Heather dives left. My direction was better. The mare pulls up just short of hitting the stable gate — if we’d just opened the damned thing we would have been fine, 20/20 hindsight — but not short of Heather, who she body blocks into the gate, and who dribbles a few times between solid wood and a half a ton of Frenchman’s sandwich before losing enough momentum to actually fall.
I’m looking at my wife through the horse’s stamping hooves, aching to get to her but physically unable, for about a second or so — it seemed a lot longer — before the horse runs off to my left. I close the 10 feet to Heather, drop to one knee, and as I begin the trauma assessment she says:
“What took you so long?”
I told her I’d come as fast as I could; but a better question would have been why I ducked right when she ducked left. There are thousands of little moments like that in a complex organism’s life, when it has no time to think a situation through and just acts first, thinks second. When it works out, we call it instinct; when it doesn’t, we call it Hate to Be You.
Of course, there’s a determinist school that says there is no choice: that every critter’s behavior is essentially the product of a vast equation that factors in all genetic predisposition and all life experience to create merely the illusion of free will. Heather tended left when I tended right because of a gigantic number of factors that forced us, in extremis, to do just that. A lot of neurobiologists, who are successfully dissecting surprisingly complex decisions into their neural components, are thinking that way these days.
Well, Martin Heisenberg [1] says their mamas wear Army boots — or the collegial, respectful, academic version thereof — in a thought-provoking essay in Nature that rescues free will, though maybe not exactly in a way completely agreeable to Judeo-Christian-Islamic belief.
Heisenberg’s argument is that chemical chaos and his dad’s indeterminacy principle between them more than rescue the concept. Randomly generated action has been observed in organisms as simple as fruit flies and bacteria; in Heisenberg’s own words:
“… my lab has demonstrated that fruit flies, in situations they have never encountered, can … solve problems that no individual fly in the evolutionary history of the species has solved before.”
His crew can even observe flies improvising, much as corvid birds and many other species do.
Ironically enough, while the idea poses problems for neurobiological determinism, it pretty much underlies behaviorist theory (my own beef with which lies elsewhere, in the absolute behavioral flexibility implied by the purest forms of that school of thought, which breaks down big time in the real world). The whole idea behind operant conditioning is that organisms solve problems by generating random behavior, and stick with those behaviors that elicit a reward. ’Course, their formulation isn’t exactly bullish on free will, either ...
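That generate-at-random, keep-what-pays loop is simple enough to caricature in a few lines. A toy sketch, assuming nothing about any real organism — the behavior names and payoffs here are invented:

```python
import random

# Toy operant conditioning: behaviors start equally likely, and a
# behavior's weight grows each time it is emitted and rewarded.
behaviors = ["peck_left", "peck_right", "wait"]
payoff = {"peck_left": 0.0, "peck_right": 1.0, "wait": 0.0}  # invented
weights = {b: 1.0 for b in behaviors}

rng = random.Random(0)
for _ in range(500):
    # emit a behavior at random, in proportion to its current weight
    b = rng.choices(behaviors, [weights[x] for x in behaviors])[0]
    weights[b] += payoff[b]  # reinforcement

best = max(weights, key=weights.get)
print(best)  # the rewarded behavior comes to dominate: "peck_right"
```

Nothing in that loop “decides” anything; random variation plus differential reinforcement is the whole mechanism — which is exactly why the behaviorists’ formulation leaves so little room for free will.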
Still, while Heisenberg argues persuasively that what we know about chemical randomness rescues the idea of free will, he makes the point that nothing in this formulation requires it to be conscious, which is not the usual way we think about it. But he also says an interesting thing that many religious thinkers may have missed [2]: consciousness can act upon free will, make the choice wiser, but there’s no requirement that the initial impetus behind the choice be conscious. To put it another way, which perhaps takes it farther than the author intended, maybe free will isn’t about the conscious choice: it’s about an urge to act that may not be in consonance with our standards of right and wrong. Maybe consciousness’ and morality’s roles have more to do with editing free will than creating it [3].
Of course, it’s worth wondering whether some of the theological arguments over free will might not only be recast in our current understanding of the biology, but also, paradoxically, grow somewhat in relevance. Whatever you think of the ultimate question of religious belief, the folks who did some of this thinking were nobody’s dummies, and understood the problems posed by the concept of free will in a predetermined universe.
Also interesting is the fact that the classic Roman Catholic formulation of free will reconciles it with the state of grace — God’s omniscience and sole ability to save souls pretty much meaning that the choices we make are pre-writ, and so how free can they be? — by positing a creator who exists outside time. So Catholics opted for an essentially relativistic basis for free will, while biologists may eventually push us toward quantum mechanics.
It turns out that in another article, published the previous week in Science — an otherwise good Perspectives piece on human volition based on brain scans — Patrick Haggard says something nonsensical:
“Every day we make actions that seem to depend on our ‘free will’ rather than on any obvious external stimulus. This capacity not only differentiates humans from other animals, but also gives us the clear sense of controlling our bodies and lives.”
Not to pick on the guy, but how the hell does he know that humans have this faculty while animals don’t? The brain structures that the scans show produce the actions and conscious urges associated with free will — the motor cortex and parietal cortex, respectively — exist in animals as well as us. Who’s done that experiment?
It speaks to something that’s bugged me about scientific discussions of animal intelligence for a while: we seem to have exchanged a chauvinistic, unreflective anthropomorphic view of animals with a chauvinistic, unreflective Cartesian view [4].
Nor is what’s coming out of mathematical, physical, and biological research necessarily kind to the reductionist ideal that every organism — everything — is a linear product of its parts that can be understood by disassembling them and learning how they individually work. Too many systems, ranging from planetary dynamics to brain function to weather patterns, seem to proceed in a complicated, nonlinear way that makes them essentially unpredictable.
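The textbook demonstration of that kind of unpredictability — not from any of the research discussed here, just the standard classroom example — is the logistic map in its chaotic regime: a one-line deterministic rule whose trajectories from two almost-identical starting points soon disagree completely.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) at r = 4 (chaotic regime).
# Start two trajectories differing by one part in a billion and watch
# the gap between them explode: deterministic, yet unpredictable.
r = 4.0
x, y = 0.2, 0.2 + 1e-9
max_gap = 0.0
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # well above 0.1: the billionth has become macroscopic
```

Knowing every term of the equation, in other words, doesn’t buy you long-range prediction — which is the limit on reductionism the paragraph above is gesturing at.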
I don’t mean to pillory reductionism — it’s had a great run that will likely continue, and has given us some great stuff. But I do think it’s got its limitations — and understanding limitations in as dispassionate a way as possible is kind of the whole point of scientific investigation, no?
Heather, as it turned out, had chosen badly but not too badly — the horse, credit where it’s due, hadn’t stepped on her, and she was pretty scraped up but not really hurt. The horse people, initially wary that we might start screaming for a lawyer, were relieved and then intrigued by the way we shrugged the experience off. You get to a point where you dust yourself off, check for any serious injury, and if you don’t find one, say “I guess I won’t try that again,” and move on. Free will is like that.
I’ll close with a quote from Tolkien, one that I used to open my doctoral dissertation:
“… he that breaks a thing to find out what it is has left the path of wisdom.”
For my dissertation, it worked on so many levels.
Like anything else, the sentiment can be overdone. But it ain’t a bad thought for a sunny spring day, with the farm chores done, some time on your hands that you need to decide how to use, and not a runaway horse to be seen.
[1] His son.
[2] I could be wrong, I’m no theological scholar — anybody knows better, please chip in.
[3] Though, dang it, what of the choice of whether or not to give in to the urge?
[4] Anthropomorphism is a modern urban/suburban thing, I think, and thus may be much newer than the 19th-century thinkers who scorned it realized. I think our ancestors were much smarter than we are on this account. If you look at hunter-gatherers, pastoralists, or traditional farmers, they seem to have a handle on animal minds that recognizes them for what they are — animal minds, neither human minds in fur coats nor cog-and-wheel machines.
Monday, May 25, 2009
Happy Memorial Day
Sunday, May 17, 2009
Who Goes There?
It isn’t a pleasant memory.
We’d spent the better part of the morning looking for a lost child, hope waning and fears growing as we went. Lilly, as always, was doing her part; but the general gloom of a clouded-over, gray sky melded with a kind of cohesive murk within my little dog team.
You see, I was in Another State, working with people who didn’t know me, and my subordinates — local, trainee dog handlers assigned to me at the command post — didn’t like the way I was doing things and weren’t trying very hard to hide it.
Today I’ll admit, rookie handler that I was, I may have been a little bloody-minded about doing things by the book. I’d been taught that search teams often missed search subjects who were on the boundaries between search tasks, and so I was intent on avoiding that by covering just a tiny bit beyond my assigned area. My two teammates might well not have batted an eyelash at that, but in this particular case it required us to cross a boggy little creek and tramp along the swampy opposite side.
Frankly, I thought at the time, and may well have been right, that they were just being lazy. Maybe I didn’t try very hard to hide that [1].
Soon after, gridding inward from the creek, I looked up to see a disturbing thing: a crumpled little body dressed in white, lying on the ground. One of my walkers must have seen it the same moment I did, because we both paced toward it — quickly but not at a run, in what for me, at least, was a moment of profound ambivalence.
And we walked up on a crumpled, white plastic bag, lying on the ground.
“I thought …” my walker began.
“Me too,” I replied to her, too relieved to say more.
But I had seen that body …
The morning of July 3, 1863, and the legendary Robert E. Lee was looking up the long, naked slope of Cemetery Ridge toward the Federal Army of the Potomac, dug in on its summit. What he saw was a thinly protected line, denuded by virtue of the fact that his opposite number, George Meade, had been pulling men out of there to protect his flanks from the brutal strikes Lee’s Army of Northern Virginia made on them the previous day.
The Confederates had swept the Federals off Seminary Ridge, to the west, on July 1; on the 2nd they’d failed to dislodge the northerners from Cemetery Ridge, but it had been very close. Now Lee could see their weakness in the center; he could feel that they were a push away from crumbling, as they had done so many times before.
Trouble was, Lee was in the decided minority among his own army. James Longstreet, one of his best generals and the man to whom he would entrust the upcoming attack, had been arguing since the previous day that it was doomed to fail. Neither of the two could have known that, technically, Lee was right, in that Meade only had about 5,000 men defending a ridgetop about to be hit by 11,000 Confederates. But Longstreet, even without the virtue of hindsight, could see that the position was so strong — Longstreet’s men would have to walk, under artillery and rifle fire, for nearly a mile in the open, before reaching the Federal line — that it wouldn’t take very many men to stop them.
“General,” historian Shelby Foote, in his magnificent tome about the Civil War, quotes Longstreet, “I have been a soldier all my life … and should know as well as anyone what soldiers can do. It is my opinion that no 15,000 men ever arrayed for battle can take that position.”
It’s emblematic that Longstreet got the number of men in his own command wrong — for one thing, a couple of the divisions he would send into combat were lent from another general’s corps; for another, he understandably hadn’t kept up with the massive casualties his own corps had been taking over the previous two days. But Lee saw it differently, and gave him a direct order. Longstreet, in turn, ordered the charge — with a voiceless nod to division commander Major General George Pickett, captured in agonizingly accurate visual detail in the movie Gettysburg. And an attack began that ended with Pickett, upon Lee telling him that he needed to gather his retreating men to defend against a possible Union counter-strike, replying in anguish: “General Lee, I have no division now.”
He was exaggerating; his division had suffered only 60 percent casualties.
Lee had seen that the Federal center was weak …
It’s happened with nearly every search dog we trained, and, almost by the calendar, at exactly one year of age. In a dusk training problem, the dog encounters the practice subject unexpectedly, because the wind is blowing the wrong way.
The dog, seeing a human — their eyesight is quite good, particularly at twilight — but not smelling him, sees … an ogre. A dog who has learned the find-refind-lead-the-handler-back sequence, and has performed the routine with unerring fidelity until this moment, not only refuses to approach; she barks that shrill but powerful panic bark you usually hear only from adolescent dogs.
You have to jolly her up, reassure her (without coddling) that she’s wrong, that there is nothing to fear there, just a person, maybe someone she already knows, and that it’s perfectly safe to approach. She does, and greets the subject with over-the-top affection and what looks for all the world like embarrassment. Seldom does the dog need this treatment more than once; from then on she’ll be confidently making dusk and dark finds without a hitch.
But that first time, she sees the monster …
It’s a borderline cliché that we see what we expect to see. But Hendrikje Nienborg and Bruce Cumming from the National Eye Institute have produced new findings suggesting that the very wiring of our nervous system conspires to delude us.
The Eye Guys took monkeys and trained them to perform a task that depended on whether the center of a circular pattern displayed before them was protruding or receding; they then recorded the activity of sensory neurons from the eye that were sending visual signals up to the brain.
The general idea had been that the eye detects the light pattern, it sends signals through the sensory neurons to the higher brain, and then the brain decides whether the dot is approaching or receding. But it didn’t work out that way.
Bear with me on this one, it’s a real head-banger of a paper, very dense stuff to parse out.
* As the choices got subtler over time, the coupling between the visual neuron’s activity and the monkey’s choice (in other words, the nerve saying, for example, “innie,” and the monkey making the “innie” choice) didn’t decrease, as you’d expect it to if the signal only traveled from eye to nerve to brain. Right or wrong, the nerve cell’s activity — remember, it’s supposed to be sending signals up to the brain — was more and more reflecting what the monkey was going to choose rather than what it had to be seeing.
* Increasing the reward increased the accuracy of the monkey’s choice, but decreased the coupling between nerve cell and choice. This rather upside-down result made more sense when they dissected what was happening over time: for a window of about one second, the larger reward got the monkey to focus on the image rather than its expectations. After that second, it started to see what it expected to see — again, the sensory neuron was tracking more closely to what the monkey was going to choose than what it actually saw.
* If I understand the paper correctly, the relationship between the nerve cell activity and the choice was strongly affected by what the monkey had seen previously. Again, the monkeys’ visual experience was tracking with their biases, not the image being shown to them.
Like I said, it’s not an easy one to think through: but taken together, it looks like the signal didn’t move from the eye to the nerve to the brain as much as the eye and the brain argued over what the nerve cell was going to do. And the brain often won; the monkeys were seeing what they expected to see.
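The “coupling” measured in experiments like this is usually reported as a choice probability: given only the neuron’s firing rate on each trial, how often could you predict the animal’s choice? A value of 0.5 means no coupling, 1.0 means perfect prediction. A minimal sketch — the function and the firing rates are invented for illustration, not taken from the paper:

```python
# Choice probability as an ROC-style statistic: the chance that a
# randomly drawn trial from the "chose A" pile out-fires a randomly
# drawn trial from the "chose B" pile (ties count half).
def choice_probability(rates_a, rates_b):
    wins = sum((a > b) + 0.5 * (a == b)
               for a in rates_a for b in rates_b)
    return wins / (len(rates_a) * len(rates_b))

# invented spike rates: the neuron fires a bit harder before "innie" calls
innie_trials = [12, 15, 14, 18, 16]  # spikes/s, trials ending in "innie"
outie_trials = [10, 13, 11, 14, 12]  # spikes/s, trials ending in "outie"
print(choice_probability(innie_trials, outie_trials))  # 0.88
```

A number well above 0.5 when the stimulus itself is ambiguous is exactly the puzzle: the “sensory” neuron is tracking the upcoming choice, not just the image.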
As an accompanying article in that issue of Nature quotes Cumming, “In a way, the brain is tampering with the data.” The reason? As he told Nature, it may be that it’s better to have a preconception ready to act upon than to wait on the facts — and get squished in the meantime.
Still, that imperative only goes so far — maybe a second or two. The next time you have more than that to decide something important — God forbid, something life-and-death — and are sure you see something, take another look.
[1] I don’t remember if, at the time, it had occurred to me that there would be a political price to pay for my insistence. Political trouble did, in fact, come from that direction, well over a year later. Today I know that a number of factors made this come about; what I don’t know is whether their report back to their teammates that day was one of them.
Saturday, May 9, 2009
Smoke and Mirrors II
I was the hypocrite.
In no particular order, I chatted amiably with a person I detest; bit my tongue when an opportunity presented itself to dish on incompetence with a person who showed every sign of being sympathetic; and pretended I wasn’t angry over a situation that was frustrating the hell out of me.
The reason I did all these things was neither an over-active sense of propriety, nor an unwillingness to be confrontational.
You see, I figured I had something to gain.
In each of these situations I walked away with a bit of information I didn’t have earlier; made, hopefully, just the right kind of impression for my purposes; put off a battle for another day, when I could wage it from a position of increased strength.
I have become a game person. I take very little joy in it, but it’s a fact.
That statement probably requires some explanation. A while back I read an essay [1] arguing that there are, essentially, two types of intelligence: puzzle intelligence, and game intelligence. The former is what all us nerds are born with; it is the ability to figure out puzzles, to tease apart the workings of an essentially static system no matter how complex. It’s how you figure out the structure of the DNA molecule, the equations that describe the motion of an object near the speed of light, or how an animal’s sniffer works.
Game intelligence is another thing entirely; it’s the ability to outwit an intelligent opponent in a contest that has no one solution — you have to be adjusting your strategy continually to counter the other player’s. No two contests will be the same, and so the rational faculties that break a puzzle apart, while still useful, aren’t alone sufficient to win.
That, by the way, is the ultimate answer to the wiseass crack, “If you’re so smart, why aren’t you rich?” Fact is, money is a game — and therefore many of the people we see as smart are nevertheless ill-equipped to compete for it [2].
I’m not rich; Lord knows. But I have been able, over the years, to pick up a few pointers on how to face off against game people. I don’t like it; but I need to be able to do it, and so, after a fashion, I’ve learned its ways.
In reflecting on the images I cast — the image in my own mind, the image I attempt to project, and the image received, all of which I know all too well may be very different from each other — I see another connection to enantiomers, those (usually organic) molecules that are chemically identical, but spatially different: non-identical mirror images.
Last time I used a really pretty image that, because I didn’t look at it closely enough, showed two enantiomers but didn’t make it clear that they were mirror images of each other. So let’s try an uglier version, of my own construct, to show the point (again, the dark triangle shows a molecule or chemical group coming out of your screen; the dashed-line triangle shows one going away from you, into the screen):
Although free-solution chemistry can’t really distinguish between the two, an enzyme or other biomolecule, which by definition reaches out to touch each of these in space, quickly realizes that they’re different:
No matter how you rotate these two, they can’t match up.
Nature has made stunningly complex use of this phenomenon; we’ve already discussed how and why sometimes the olfactory system can tell the difference between enantiomers and sometimes it can’t. Today we’ll discuss a paper by Yuko Ishida and Walter Leal from UC Davis investigating how two closely related species of beetle use enantiomeric pheromones to find the right mate — and avoid the wrong one.
Closely related species, particularly when they’re not separated by a physical barrier like a mountain range or the like, present a major challenge to the evolutionary process. It’s easy keeping a moth from mating with a whale. But two closely related animals — their species separated, perhaps, by specialization to take better advantage of two different food sources — will have a much harder time not inter-breeding, and thus losing the advantage of that specialization. Pheromones — airborne chemical social signals — help keep the two apart.
In the case of the Japanese beetle Popillia japonica and the Osaka beetle Anomala osakana, the two species coexist in nature but don’t inter-breed. Part of the way they’ve accomplished this is that each species has chosen a different enantiomer of the same molecule — either (S)- or (R)-japonilure — as a mating pheromone. (S)-japonilure is a sexual attractant for the Osaka beetle; the (R) enantiomer is an attractant for the Japanese beetle.
It gets even more interesting. The Japanese beetle doesn’t merely “get no kick” from (S)-japonilure; the molecule repels the little buggers. The evolutionary process, then, has double assured that Japanese beetle bachelors don’t hook up with osakana chicks.
Which is where Ishida and Leal’s work comes in. From the antennas of japonica males they’ve isolated an enzyme that chews up both molecules — but it’s measurably better at chewing up the attractant than it is the repellant.
The difference isn’t huge — the half-lives of the attractant and repellant, respectively, in the presence of the enzyme are 30 thousandths of a second versus 90 thousandths of a second. But in the world of olfactory response, that’s a fairly big difference; as our authors note, following an intermittent, turbulent scent plume to its source requires a complex series of decisions based on intensity of smell, wind direction, and attack angle. As dog handlers have noticed and moth researchers have documented, it often takes a lot of dashing, casting, and weaving to home in on the source of a smell. You need to be able to detect changes in your attractant quickly to find its source; and therefore, it’s useful to destroy an attractant as soon as you’ve detected it. Clearing the olfactory palate as quickly as possible leaves you better prepared to detect the next change [3].
The situation is very different for a repellant. You don’t need to find its source; you don’t want to get anywhere near its source. Letting it stick around a little longer is therefore a good thing: it helps stop you, literally, from even going there.
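Taking the reported half-lives at face value, a back-of-envelope exponential-decay calculation (mine, not the authors’) shows how different the two signals look a tenth of a second after detection:

```python
# Fraction of a pheromone surviving t milliseconds of enzymatic
# degradation, given its half-life: f(t) = 0.5 ** (t / half_life).
def remaining(t_ms, half_life_ms):
    return 0.5 ** (t_ms / half_life_ms)

t = 100.0  # ms after detection
print(remaining(t, 30.0))  # attractant (30 ms half-life): ~10% left
print(remaining(t, 90.0))  # repellant  (90 ms half-life): ~46% left
```

By the time the beetle has flown another plume-width, the attractant is nearly gone — palate cleared, ready for the next whiff — while close to half the repellant is still shouting “stay away.”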
So there we have it: Three species, two games, two sets of smoke and mirrors. One clouds the issues; the other makes them very clear.
I won’t engage in the pseudo-philosophical (let alone exaggerated) species-bashing of comparing nature’s beauty to mankind’s brutality. Evolution itself, I realize, is the biggest game of all. I will play the game as long as I need to. But on the whole, I prefer the puzzles.
[1] Sorry, it’s been way too long and I don’t know where I read it.
[2] Yeah, yeah, they all say they’re not interested in money — but might that not be because they’re not interested in the games that go with it?
[3] There’s an old dog-handlers’ tale out there that dogs’ noses are “so sensitive” that they don’t desensitize like ours do — think of how, after a while in a room with a strong floral scent, you don’t notice it. Well, don’t you believe it: desensitization is a powerful tool for keeping the nose maximally sensitive to changes in an odor. Our dogs couldn’t do without it.