Robert Wright, "Ethics for Extraterrestrials"
By Robert Wright
At the outset I should concede that there are differences between us and any given race of space aliens. We alone (to take just one example) have used advanced technology to make a TV show called “Jersey Shore.” Still, we — like, presumably, any intelligent species anywhere — were created by natural selection, for better and worse. And, like any scientifically advanced species, we’re finding that the laws of the universe grant the technological potential for both mass affiliation and mass murder. So the question is which aspect of this technology our naturally selected nature would incline us to emphasize a century or two from now, should we stumble upon an inhabited planet.
On Hawking’s side of the argument is the fact that natural selection does create organisms prone to belligerent self-interest. And when individuals manage to submerge their self-interest in the interest of a group — clan, tribe, nation — the belligerence tends to just move to the group level, as it did when European explorers, while behaving very politely toward one another, slaughtered natives. As the biologist Richard Alexander has put it, the flip side of “within-group amity” tends to be “between-group enmity.” So why wouldn’t an alien species evince this principle, and unite only to conquer?
A slightly less hopeful argument than Singer's has been made by — well, by me. In my book “Nonzero” I argue that the moral progress Singer rightly celebrates has been driven less by pure reason than by pragmatic self-interest. Technology has drawn groups of people into more and more far-flung “non-zero-sum” relations — relations of interdependence; increasingly it has been in the interest of one group to acknowledge the humanity of another group, if only so the groups can play win-win games. In this view, the decline of American prejudice toward Japanese after World War II was driven less by purely rational enlightenment than by the Japanese transition from mortal enemies to trade partners and Cold War allies. (In a TED conference talk, Steven Pinker, who is writing a book on the decline of violence, contrasts my view with Singer's.)
If I’m right, and we generally grant the moral significance of other beings to the extent that it’s in our interest to do so, then why wouldn’t we, in 100 or 200 years, do what Hawking imagines aliens doing — happen upon a planet, extract its resources through whatever brutality is most efficient and then move on to the next target? Absent cause to be nice, why would we be nice? Well, you could make a case that, though our moral “progress” to date has been driven largely by self-interest, with only a smidgen of true enlightenment, the role of enlightenment will have to grow if we are to venture beyond our solar system a century from now.
After all, to do that venturing, we first have to survive the intervening 100 years in good shape. And that job is complicated by various technologies, notably weapons that could blow up the world. More to the point: these weapons are now embedded in a particularly dicey context: a world where shadowy “nonstate actors” are the looming threat, a world featuring a “war on terror” that, if mishandled, could pull us into a simmering chaos that ultimately engulfs the whole planet. And maybe “winning” that war — averting global chaos — would entail authentic and considerable moral progress.
That, at least, is a claim I make in my most recent book, “The Evolution of God.” I argue in the penultimate chapter that if we don’t radically develop our “moral imagination” — get much better at putting ourselves in the shoes of people very different from ourselves, even the shoes of our enemies — then the planet could be in big trouble. It’s not crazy to think that, broadly speaking, this sort of challenge would eventually face an intelligent species on any planet. Certainly the challenge’s technological underpinning — that the capacity to escape your solar system arrives well after the capacity to destroy your planet — could reflect the order in which the laws of physics reveal themselves to any inquisitive species, not a peculiar intellectual path taken by our species.
So maybe any visiting aliens would themselves have passed this test; they’d have mustered the moral progress necessary to avoid ruining their planet, and this progress would involve enough genuine enlightenment — enough respect for sentient life — that we’d be safe in their hands. This is less than wholly reassuring. After all, the suggestion here is that before any species can shift into high technological gear, it has to undergo a moral test so stringent that most species would fail it. In which case the chances are we’ll fail it — and our best hope may be to hold on long enough for kindly space aliens to ride in and save the day. Things would be a lot simpler if it turned out that Peter Singer is right. And for all I know he is.