On a summer day quite a few years ago, Lilly and I were waiting for Mel and Heather to find us on a beautiful fallow farm that today is rapidly becoming a bulldozed memory. Sitting on the lee side of a collapsing barn, I knew they’d have a bit of trouble getting us — scent tends to get trapped on the downwind side of structures — and that therefore Lilly and I had some time to kick back and just enjoy a golden southwest Pennsylvania afternoon.
Lilly, as she often did, was chowing down on the tall, broad grass leaves (they were, I found out later, timothy) growing around the abandoned farm buildings. Now, usually we humans have a (justified) skepticism about the culinary preferences of someone who finds cat shit an irresistibly crunchy snack; but just that once, I decided to give it a try.
I chewed the tough, raspy leaves, and soon got a wonderful note of garlic and green. Very tasty, actually.
That day, I got one of many ongoing lessons about giving my dog the benefit of the doubt.
Heather gave me minor hell for the framing story in What Died? — my skeptical account of Lilly apparently detecting 20-year-old graves. Her basic point was, “Why didn’t you just trust your dog, idiot?”
I think she missed an important point of context, a point that a lot of dog handlers get chronically wrong: when and how you express your skepticism matters to the level of trust you’re showing your dog. To make it more concrete, handlers tend to reconstruct their dogs’ performance in retrospect: pure confusion the day of a search becomes, with the hindsight of knowing the subject’s eventual location, a clear indication that the dog was on the right track. Some dog handlers compound it by historically revising their level of certainty: “I knew Sparkie was onto it.”
No, our dogs don’t lie to us: but neither do we always understand what they’re trying to tell us.
Though I can’t hold myself up as any sort of example, I think we need just the opposite approach. In the context of a search task, we need to be respectful of the dog’s abilities but coldly honest: if it had been a real search, the proper report for the potter’s field alert would have been, “I could be wrong, but Lilly sure looked like she was detecting cadaver. I think we need to check it out.” The proper line to take now, nearly 20 years later, is that I don’t really know for sure what she was doing, though the corroborative evidence I’ve seen encourages me to take that alert at face value.
It’s a matter of context.
Context of a different sort is the gist of today’s entry, care of Jostein Gohli and Göran Högstedt at the University of Bergen, Norway: namely, when does garish coloration make a prey animal safe rather than lunch?
The classic explanation is that prey critters like the monarch butterfly use bright colors to warn predators (a strategy called aposematism) that they taste very bad — short-circuiting the potential problem that tasting bad is a little late to really help you not get killed or seriously hurt by a predator. But that explanation poses its own problem: a bird that tries to eat one monarch won’t try for a second, which still leaves the first pretty much screwed.
The predators could be genetically disposed to avoiding the bright colors — but that argument just moves the issue further back in time, since at some point the first, behaviorally and genetically naïve, predator had to give it a try.
Gohli and Högstedt put it extremely well: “When aposematism first evolved, all predators were inexperienced and the population of aposematic prey would have been very small. Sampling (killing) would likely have led to an early extinction of this fragile population.”
The explanation, as you may have guessed from the context that I’m talking about it, hinges on smell. If a bad-tasting prey animal also smelled bad, that smell would reinforce the bright colors, warning off even a truly naive predator. Both individual associative learning and evolution of the population would strengthen the predators’ reluctance to take the first bite.
But why bother with the color when you already smell bad? While you could argue a number of ways that you get from camouflaged and stinky to neon-bright and stinky, the Norsemen have provided a compelling explanation via mathematical modeling: the stink may actually have driven the evolutionary change in color.
In their computer model, bright colors only tended to get you killed more often, up until a certain level of stink — at that point, the e-animals with subdued coloring tended to get munched more often. And once that potential is there, the small variations generated by mutation and genetic drift give selection something to work with, inevitably pushing the population toward brighter and brighter colors.
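To get a feel for how that crossover works, here’s a back-of-the-envelope sketch in Python. This is emphatically not Gohli and Högstedt’s actual model — every number and function here is an assumption I made up for illustration. The key hypothetical premises: bright prey are easier to spot, but a garish appearance makes the predator pause and assess, giving the stink a chance to work; dull prey get bitten on sight, so their stink never comes into play.

```python
def mortality(bright: bool, stink: float) -> float:
    """Expected chance a prey animal gets eaten per encounter.

    All parameters are invented for illustration:
    - bright prey are detected 90% of the time, dull prey 40%
    - only bright prey trigger the predator's pause-and-assess
      behavior, during which the stink (0.0-1.0) can deter the attack
    """
    detect = 0.9 if bright else 0.4
    pause = 1.0 if bright else 0.0   # dull prey get bitten without assessment
    deterred = pause * stink          # stink only helps if the predator pauses
    return detect * (1.0 - deterred)

# Sweep stink levels from 0.0 to 1.0 and find the first level at which
# being bright is actually *safer* than being camouflaged.
stinks = [i / 20 for i in range(21)]
crossover = next(s for s in stinks
                 if mortality(True, s) < mortality(False, s))
print(crossover)  # prints 0.6
```

Under these made-up numbers, a bright animal with little stink dies far more often than a dull one (0.9 vs. 0.4 per encounter), but past the crossover stink level the relationship flips — which is the shape of the result the authors report, even if their model is far richer than this toy.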
It all hinges on the idea that a garish prey animal makes a predator stop and assess rather than jump in to feed, giving it a chance to notice the bad smell. This makes sense; I’ve seen videos of divers chasing after fleeing great white sharks in clear water — while I wouldn’t think this is a bright thing to try in any case, nobody would venture it in murky water, where the fish are known to bite first and ask questions later.
Predation is, after all, an extremely dangerous lifestyle; it pays for a predator to be a bit conservative. Prey fights back, so if you see something you don’t already know is tasty and relatively easy to catch, it makes sense to stop and think rather than risk tangling with something that may seriously injure you (or, in the case of aposematism, poison you).
The theoretical argument even pays itself back. The Viking Veracitators note that, to work on completely naive predators, you need to excrete your stink continuously, even though almost no insects with chemical defenses do this today: they use it only when they need it. The authors’ suggestion: as the heavy lifting shifted from smell to color, continual stink became less necessary to ward the predators off. While they haven’t reported that calculation, it would be nice to see, in future work, whether the bright coloration, once it comes, eases the pressure on the animal to expend the metabolic cost of permanent funk.
Of course, computer modeling ain’t the real world. The experimentalists will need to take this model and run with it, see if it plays out in the field. But it’s a nicely self-consistent argument that seems more than worthy of the experimental verification.
In the context of plausible ideas, it’s a winner.