In a world of proliferating professions, S. Matthew Liao has a singular title: neuroethicist. Dr. Liao, 40, the director of the bioethics program at New York University, deploys the tools of philosophy, history, psychology, religion and ethics to understand the impact of neuroscientific breakthroughs.
We spoke over four hours in two sessions. A condensed and edited version of the conversations follows.
You’re a philosopher by training. How did philosophy lead to neuroethics?
Mine’s the typical immigrant’s story. My family moved to Cincinnati from Taiwan in the early 1980s. Once here, my siblings gravitated towards the sciences. I was the black sheep. I was in love with the humanities. So I didn’t go to M.I.T. — I went to Princeton, where I got a degree in philosophy. This, of course, worried my parents. They’d never met a philosopher with a job.
Do you have any insight on why scientific careers are so attractive to new Americans?
You don’t need to speak perfect English to do science. And there are job opportunities.
Define neuroethics.
It’s a kind of subspecialty of bioethics. Until very recently, the human mind was a black box. But here we are in the 21st century, and now we have all these new technologies with opportunities to look inside that black box — a little.
With functional magnetic resonance imaging, f.M.R.I., you can get pictures of what the brain is doing during cognition. You see which parts light up during brain activity. Scientists are trying to match those lit-up regions with specific behaviors.
At the same time this is moving forward, there are all kinds of drugs being developed and tested to modify behavior and the mind. So the question is: Are these new technologies ethical?
A neuroethicist can look at the downstream implications of these new possibilities. We help map the conflicting arguments, which will, hopefully, lead to more informed decisions. What we want is for citizens and policy makers to be thinking in advance about how new technologies will affect them. As a society, we don’t do enough of that.
Give us an example of a technology that entered our lives without forethought.
The Internet. It has made us more connected to the world’s knowledge. But it’s also reduced our actual human contacts with one another.
So what would be an issue you might look at through a neuroethics lens?
New drugs to alter memory. Right now, the government is quite interested in propranolol. It is being tested on soldiers with post-traumatic stress disorder. The good part is that the drug helps traumatized veterans by removing the bad memories causing them such distress. A neuroethicist must ask, “Is it good for society to have warriors whose memories are chemically wiped out? Will we start getting conscienceless soldiers?”
What do you think?
It is a serious business removing memories, because memories can affect your personal identity. They can impact who you think you are. I’d differentiate between offering such a drug to every distressed soldier and giving it only to certain individuals with a specific need.
Let’s say you have a situation like that in “Sophie’s Choice,” where the memories are so bad that the person is suicidal. Even if the drug causes them to live in falsehood, that would be preferable to suicide.
But should we give it to every soldier who goes into battle? No! You need memory for a conscience. Doing this routinely might create super-immoral soldiers. As humans, we have natural moral reactions to the beings around us: sympathy for other people and animals. When we start to tinker with those neurosystems, we’re not going to react to our fellow humans in the right way anymore. One also worries about the wrong people doing this, giving propranolol routinely to genocidal gangs in places like Rwanda or Syria.
Some researchers claim to be close to using f.M.R.I.’s to read thoughts. Is this really happening?
The technology, though still crude, appears to be getting closer. For instance, one research group asks subjects to watch movies. By looking at activity in the subjects’ visual cortex while they watch, the researchers can roughly reconstruct what the subjects are seeing, or at least a semblance of it.
Similarly, there’s another experiment where they can tell in advance whether you’re going to push the right or the left button. On the basis of these experiments some people claim they’ll soon be able to read minds. Before we go further with this, I’d like to think more about what it could mean. The technology has the potential to destroy any concept of inner privacy.
What about using f.M.R.I. to replace lie detectors?
The fact is, we don’t really know whether f.M.R.I.’s will be any more reliable or predictive than polygraphs. Nonetheless, in India, a woman was convicted of poisoning her boyfriend on the basis of f.M.R.I. evidence. The authorities said that, based on the pictures of blood flow in her brain, she was lying to them.