Neuroethics beyond genethics

Despite the overlap between the ethics of neuroscience and genetics, there are important areas where the two diverge
Adina L. Roskies

Author Affiliations

  • Adina L. Roskies, Department of Philosophy, Dartmouth College, Hanover, NH, USA, and University of Sydney, NSW, Australia

There has been considerable scepticism in many quarters regarding whether neuroethics should be recognized as a subfield of bioethics. Although there are pragmatic reasons for believing that we ought to think twice before officially identifying neuroethics as a distinct field, the overriding reason for positing a new field is that it confronts new questions. Thus, the central issue is whether neuroscience raises ethical questions that differ substantially from those raised by other fields in bioethics. As genethics—the ethics of genetics—precedes neuroethics by several decades and seems to raise very similar ethical questions, I address here whether these two subfields differ substantially.

It would be a mistake for neuroethicists to ignore or discount a rich body of relevant work merely because it treats genes rather than brains

As these territories largely overlap, neuroethics can and should look to previous work in genethics for guidance and insight (Illes & Racine, 2005). It would be a mistake for neuroethicists to ignore or discount a rich body of relevant work merely because it treats genes rather than brains. Examples of issues that are common to genethics and neuroethics include: the ethics of access and consent, such as who can obtain information about a person's genome or brain, and to what information they can have access; the social implications of the misuses of that information; questions of distributive justice; how to handle probabilistic or statistical information about future health; and the vexing question of how to conceptualize and identify pathology and normality. Although the ethical questions relevant to both genethics and neuroethics are considerable, I highlight questions that I consider unique to neuroethics. And lest the reader think that genethics is therefore best viewed as a subset of neuroethics, I point out that genethics has its own proprietary issues, among which are questions raised by the potential of making genetic changes to the germ line, which would affect not only the person whose genome is manipulated, but potentially also future generations and, in unlikely scenarios, the entire human race.

Here, I cover three areas in which questions arise in neuroethics but are not mirrored by questions in genetics: consciousness; decision‐making, control and free will; and understanding moral cognition. There are others. My goal is not to exhaust the territory or defend any particular views, but merely to illustrate areas in which I expect future work to prompt novel neuroethical thought.

Consciousness is perhaps the greatest mystery in science: How is it that a three‐pound mass of tissue can give rise to the thing we call consciousness, awareness, or a subjective experience? How can a collection of relatively simple—albeit highly organized—physical components make it possible to experience pain, a particular shade of red, or the ephemeral smell of a fine burgundy? So far, both science and philosophy have been stymied by this problem, and it would be folly for me to suggest that neuroscience is on the verge of answering it. But to understand this mystery remains an aspiration of many neuroscientists, and it seems that if any scientific enterprise is to shed light on the question of consciousness, it will be the brain sciences.

The question of consciousness has primarily been, and remains, a philosophical question, but it is now inescapably also a scientific question

Our inability to conceptualize how such a thing as consciousness can arise from brain activity has been offered as an argument against materialism (Chalmers, 1996). Other philosophers instead suggest that such reasoning is mistaken, and that the inability to conceive of how consciousness could arise from mere matter simply reflects the poverty of our current understanding of brain function (Stoljar, 2006). The question of consciousness has primarily been, and remains, a philosophical question, but it is now inescapably also a scientific question. Neuroscience is increasingly in a position to address the questions ‘What is consciousness?’ and ‘How is consciousness possible?’ Specifying the hallmarks of consciousness, when we are entitled to attribute it to other beings, and what sorts of rights or considerations it entails remains a philosophical endeavour. However, when we find organisms that we think possess consciousness, we can now investigate its physical basis; indeed, a growing number of scientists are devoted to discovering the neural correlates of consciousness. Determining what consciousness is and what is conscious should be thought of as a joint project between philosophy and neuroscience.

A number of ethical questions accompany the scientific questions, because the demystification of consciousness, if it occurs, will undoubtedly affect how we think of ourselves, will almost certainly impact religious beliefs, and will probably have ramifications for how we understand our place in the natural world, as well as the place of other organisms. Realistically speaking, such a scenario is far in the future, if it is possible at all. But we must not thereby conclude that the issue of consciousness has no bearing on ethics in the short‐to‐medium term. Long before the problem of consciousness is ever solved, there will be related questions that arise in ethics.

Severe brain trauma can leave a person in impaired states of consciousness, such as a minimally conscious state (MCS) or a persistent vegetative state (PVS). In the USA alone, there are as many as 112,000–280,000 MCS patients and 14,000–35,000 PVS patients (Steinberg, 2005). Although these patients differ subtly in degree of damage, both groups are characterized by a lack of awareness of self and environment, as assessed by their inability to respond to a variety of stimuli. Although basic functions like sleep/wake cycles and respiration remain intact, higher cognitive functions are not evident. The financial and emotional costs of preserving the life of someone in a vegetative state are considerable, and there are excellent arguments for terminating life support if these people are not conscious and will never again regain consciousness. If such an individual is aware, commonsense morality speaks against removing life support, but if he or she is unaware, what to do remains contentious. Views on both sides of these matters are deeply held, and have deep religious resonances. In past cases, such as the highly publicized decision to end life support for PVS patient Terri Schiavo in Florida, USA, in 2005, decisions were based upon opinion and wishful thinking rather than scientific or medical facts. With a growing neuroscientific understanding of consciousness, future decisions might be far more informed.

What can neuroscience tell us about the conscious state of brain‐damaged individuals who are verbally and physically unresponsive? Some inroads have recently been made in determining these patients' states of consciousness, but not without generating controversy and public misconception. In 2005, Nicholas Schiff from Cornell University (Ithaca, NY, USA) and colleagues used functional magnetic resonance imaging (fMRI) to scan two MCS patients while personalized narratives were read to them (Schiff et al, 2005). In response to these emotionally neutral narratives, the patients showed increased brain activity in areas comparable to those of a normal control group, despite a much decreased resting brain metabolic level. However, their responses to time‐reversed, meaningless stimuli were reduced compared with those of controls. The researchers concluded that these patients had preserved functional networks for processing speech and meaning, and speculated that these networks could be involved in awareness.

This research received a lot of media attention, due to its publication at the height of the Schiavo controversy. Some heralded it as evidence that Schiavo was in fact conscious and should have been saved—despite the fact that she was in a PVS, not an MCS, and that making generalizations to entire populations from single case studies is unwarranted. Those who lobbied for Schiavo's continued maintenance regarded this study as vindication for their views. However, the implications of the study are unclear, and were misunderstood and misused, partly for political gain and partly because of a lack of public understanding of the relevant factors in such a study. The authors demonstrated that in two MCS patients, widely distributed areas of neural tissue remained viable and connected to their normal input faculties, and were capable of gross patterns of processing similar to those found in normal individuals. Indeed, their findings were not surprising, considering that these MCS patients were occasionally aware of their environment and were able to respond to verbal stimulation.

Imaging has provided us with a window into the state of consciousness of a patient who is unable to outwardly respond

However, demonstrating the integrity of some regions of neural tissue is a far cry from demonstrating the integrity of a neural system capable of sustaining complex cognitive functions. Furthermore, a number of factors make these experimental results difficult to interpret beyond recognizing that networks of brain areas were physiologically intact, not least of which is the fact that MCS patients displayed differentially reduced responses to reversed speech compared with normal individuals. Moreover, the implication that responses of brain areas to verbal stimuli indicate comprehension or consciousness is misleading. We know that verbal stimuli trigger quite a bit of neural processing in the absence of awareness, in many of the same brain areas that are employed during conscious processing of verbal stimuli. Many priming studies present stimuli of which the subject is unaware, but higher‐order processes can be affected by the prime nonetheless (Dehaene et al, 1998, 2001; Naccache & Dehaene, 2001).

As priming is paradigmatically unconscious, yet priming stimuli activate many of the same areas as stimuli of which a subject is aware, it is evident that merely documenting neural activity in the same brain areas as a control group provided with similar stimuli fails to indicate anything in particular about the cognitive status of the subject. All we can conclude from studies like this one is that some MCS patients retain enough viable neural tissue in distributed networks to show activation on an fMRI scan, with sufficient preserved connectivity to result in apparently normal activation patterns at the macroscopic level. Importantly, these methods provide no particular information about the health or normalcy of the local networks in those regions of the brain.

One might be tempted to conclude that the contributions neuroscience can make to this issue are so limited as to be uninteresting, but this would be too hasty an inference. With appropriate creativity and care, headway can be made on this difficult problem. Several studies indicate that frontal and parietal activations are correlated with conscious perception (see, for example, Dehaene et al, 2006). Additionally, in a recent short paper, Adrian Owen from Cambridge University, UK, and colleagues provided much clearer evidence for awareness in a severely brain‐damaged patient (Owen et al, 2006). Like Schiff and colleagues, they used functional neuroimaging on a patient who had been in a PVS for five months. They also reported that the patient showed normal activity in a network of brain areas in response to verbal stimulation, despite the fact that she was entirely unresponsive to verbal commands. However, the authors recognized the illegitimacy of drawing conclusions about the patient's conscious state from these data, noting the extensive neural processing that can take place in response to verbal stimuli in the absence of awareness (Dehaene et al, 2001; Kotz et al, 2002; Portas, 2000; Rissman et al, 2003). They thus conducted a second experiment, in which the same patient was given verbal instructions to imagine herself in certain scenarios. Rather unexpectedly, she showed sustained activation of brain areas involved in motor imagery when asked to imagine playing tennis, and in other regions involved in navigation when asked to imagine walking through her house. The activated regions overlapped with those activated during the same two tasks in a group of control subjects.

two factors make the challenge to free will from genetics a lame one: […] genetic determinism is a deeply mistaken view; and […] our genes are causally far removed from our behaviours

The beauty of this experiment is that it helps to distinguish between the factors of neural activity and awareness. By asking the patient to do two different cognitively demanding tasks, neither of which requires a motor response, and each of which has a neural signature distinct from the other, Owen and colleagues provide good evidence that a physically unresponsive patient can still comprehend verbal instructions and respond differentially. Their study suggests that the patient was indeed conscious and retained a power of volition or intention. In this case, it seems warranted to infer that the patient was aware of the meaning of the commands and could respond appropriately.

The implications of this study are staggering, not only for understanding what is relevant to consciousness, but also for future decisions regarding the treatment of brain‐damaged patients. Imaging has provided us with a window into the state of consciousness of a patient who is unable to outwardly respond. It is easy to imagine how one might extend these studies not only to determine which patients have some awareness of their environment, but also to communicate with people who retain some awareness and volition but cannot express themselves verbally or with bodily movements. For example, if patients in a vegetative state are mentally aware and able to comprehend and cognize, we could prompt them to associate different types of imagery with ‘yes’ and ‘no’ responses, and then use neuroimaging to monitor brain activity in response to questions in order to understand their desires. This would enable us to significantly improve the quality of these patients' lives, and perhaps most importantly, to respect their autonomy by endowing them with the ability to choose if and when to end their lives.
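The yes/no protocol envisaged above is, at bottom, a pattern-classification problem: associate each answer with a distinct imagery task, record the evoked activation pattern, and assign each new trial to the nearer template. The following toy sketch illustrates only that decoding logic; the activation vectors, region labels and noise level are invented for illustration and bear no relation to real fMRI data or to the analysis in the Owen study.

```python
import random

# Toy illustration of imagery-based yes/no decoding. Two imagery tasks
# (motor imagery for "yes", spatial-navigation imagery for "no") are
# assumed to evoke distinct activation templates; new trials are assigned
# to whichever template they are closer to. All numbers are invented.

random.seed(1)

YES_TEMPLATE = [1.0, 0.9, 0.1, 0.0]  # hypothetical motor-imagery pattern
NO_TEMPLATE = [0.1, 0.0, 1.0, 0.9]   # hypothetical navigation pattern

def simulate_trial(template, noise=0.3):
    """Produce one noisy 'activation vector' for a single imagery trial."""
    return [v + random.gauss(0, noise) for v in template]

def classify(trial):
    """Nearest-template (squared Euclidean distance) classification."""
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(trial, template))
    return "yes" if dist(YES_TEMPLATE) < dist(NO_TEMPLATE) else "no"

# Simulate 50 trials on which the patient intends to answer "yes".
answers = [classify(simulate_trial(YES_TEMPLATE)) for _ in range(50)]
print("decoded 'yes' rate on yes-trials:", answers.count("yes") / len(answers))
```

The design choice worth noting is the one the text itself motivates: the two tasks are picked precisely because their neural signatures are spatially distinct, so even a crude nearest-template rule can separate them.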

Although genethics has visited the issues of freedom and determinism, it has done so in the face of genetic determinism. But two factors make the challenge to free will from genetics a lame one: first, genetic determinism is a deeply mistaken view; and second, our genes are causally far removed from our behaviours. Thus, beyond raising the question, genethics has not contributed much insight to discussions about free will.

The brain, by contrast, poses a more potent challenge to free will, because unlike genetics, the relationship of brain to behaviour is subject to neither of the above two mitigating factors. The brain is the proximate cause of our bodily movements, intentional actions, feelings, reactions, and the like. We act as we do in some very tangible sense because of how our brain works, and there are few intervening variables between neural activity and our behaviours. Although it might be philosophically confused to ask for the cause of a person's behaviour, brain activity meets the standard philosophical tests for a cause of our behaviours, and one of the most direct ones at that.

When we think of free will, we typically think of the ability to make decisions freely. Research on monkeys has shown that neural activity during decision‐making in perceptual tasks reflects the accumulation of evidence by neuronal populations representing alternative hypotheses, and that these neural data are nicely accommodated by a temporal integration model of decision‐making (Kim & Shadlen, 1999; Leon & Shadlen, 1998; Mazurek et al, 2003). The animal's decisions are predictable from the neural signatures observed. This research is consistent with the view that decision‐making obeys purely mechanistic rules for determining outcome, consistent with the prevailing scientific view that brains are just complicated biological machines. There is no reason to think human brains work differently. That strengthens the commonsense worry: if the brain is just mechanism, can we have free will?
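The temporal-integration picture described above can be conveyed with a toy simulation: two accumulators gather noisy momentary evidence for rival hypotheses, and the first to reach a fixed threshold determines both the choice and the reaction time. This is a minimal sketch of the modelling idea only, not the models fitted in the cited studies; all parameter values are illustrative.

```python
import random

def race_to_threshold(drift=0.1, noise=1.0, threshold=30.0, seed=0):
    """Toy temporal-integration model of a two-alternative decision.

    Two accumulators integrate noisy evidence; the drift favours
    alternative A. The first accumulator to cross the threshold fixes
    the choice and the decision time. Parameters are illustrative.
    """
    rng = random.Random(seed)
    a = b = 0.0
    t = 0
    while True:
        t += 1
        # Momentary evidence: opposite-signed drift plus independent noise.
        a += drift + rng.gauss(0, noise)
        b += -drift + rng.gauss(0, noise)
        if a >= threshold:
            return "A", t
        if b >= threshold:
            return "B", t

choices = [race_to_threshold(seed=s)[0] for s in range(200)]
print("fraction choosing A:", choices.count("A") / len(choices))
```

Even this caricature exhibits the feature the neural data display: the outcome is predictable from the accumulating signal, with no step in the mechanism at which anything other than noisy evidence determines the choice.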

Although the problem of freedom is a difficult one, it is not neuroscience that gives rise to the problem. Indeed, any physicalist or materialist view of the world gives rise to a paradox about freedom: either the world is deterministic, in which case all events are due to natural law and there is no freedom, or the world is indeterministic, in which case events occur at random and cannot be attributed to volition. Either way, we are not free. The only thing that can rescue free will from this dilemma is the admission of some extra‐physical force of will, but this metaphysical view goes against prevailing physical doctrine. Moreover, neuroscience can neither address the question of physicalism, nor adjudicate between the seemingly problematic alternatives of determinism and indeterminism.

It is the rhetorical force of the neuroscientific understanding, rather than what it can actually reveal, that causes potential ethical issues to arise

Despite the irrelevance of neuroscience to the problem, by uncovering neural mechanisms that have predictive value for behaviour, neuroscience will probably cause people to doubt the existence of freedom, which might affect their behaviour and their ethical views. It is the rhetorical force of the neuroscientific understanding, rather than what it can actually reveal, that causes potential ethical issues to arise. For example, because freedom is often thought to be essential for moral responsibility, some worry that if people abandon their commitment to the belief in freedom, they might conclude that moral responsibility is likewise an illusion, and nihilism might result. Others concur that neuroscience will undermine our commonsense notion of morality, and that we will be forced to rethink our conception of justice. Joshua Greene and Jonathan Cohen, for instance, argue that our retributivist notions will be indefensible if our commonsense conception of free will is jettisoned, and we will consequently mete out punishment purely due to utilitarian calculations (Greene & Cohen, 2004). Although they champion this outcome, such a considerable change to our conception of justice and the corresponding alterations in our legal system will surely force us to rethink many of our current moral views.

Not all visions of the future are so revisionary. Neuroscience could be the salve for the wound, not just the salt. Despite the possibility that people will come to believe they are not free, it is unlikely that their moral judgements will be altered (Roskies, 2006). Evidence for this comes from experiments on the nature of folk intuitions, which remain committed to notions of moral responsibility even in the face of challenges from determinism, and from understanding the neural basis of moral cognition, in which emotion plays a central role.

Finally, in addition to understanding the neural basis of moral reasoning and judgement, a deeper understanding of the brain might help us to revise our notion of freedom and what it requires, thereby circumventing the standard paradox of freedom that is generated by its seeming incompatibility with both determinism and indeterminism. Recognizing the incoherence of our commonsense conception of freedom might encourage us to develop a robust notion that is anchored in a scientifically informed view of the brain and how it operates. I believe we must look for a compatibilist notion of freedom, perhaps one that contrasts freedom with coercion, as A.J. Ayer (1954) suggested, but also one that paints a positive picture of control. We might come to realize that the type of freedom required for moral responsibility is one grounded in a picture of what the proper functioning of an organism is in a complex social network of others, together with an understanding of what sorts of mechanisms underlie that proper functioning. In particular, a neuroscientifically informed theory of self‐regulation or intrinsic control might be able to ground our most ardently held ideas associated with freedom, and preserve many of our moral and social practices with minimal disruption.

Understanding why and how we have the moral views that we do might provide us with some meta‐level moral insight. Until recently, the only way to study moral cognition was to probe it behaviourally: observe people's actions, or note their answers to questions about situations requiring moral judgements. However, with the prevalence of neuroimaging and its application to increasingly abstract domains of cognition, a new way of probing moral cognition has emerged. We are beginning to get the first glimpses into the brain networks that subserve our moral reasoning, which have proven very enlightening. Others have reviewed what has been learned about the neural basis of moral cognition (Greene, 2003; Moll et al, 2005), so I do not attempt that here. Instead, I concern myself mainly with the ethical implications of what has been learned.

Construed broadly, the neurosciences might give us insight into why we have the particular moral intuitions we do

A number of neuroimaging studies indicate that some of our moral judgements—those in response to what Joshua Greene terms “personal” moral dilemmas (Greene et al, 2001, 2004)—naturally excite areas involved in emotion, thus conforming to a sentimentalist view of moral judgement (Moll et al, 2002). In other situations, our reasoning is more analytical or cognitive, employing regions typically associated with deliberation and rational thought (Greene et al, 2004). The presence of both these modes of moral cognition might correspond to the longstanding debates in moral philosophy between deontologists and utilitarians (Greene, 2003). Although descriptive facts about how we reason give us little cause to think that is how we should reason, they might affect our moral views. As Greene aptly puts it, “…I view science as offering a ‘behind the scenes’ look at human morality… [T]he scientific investigation of human morality can help us to understand human moral nature, and in so doing change our opinion of it” (Greene, 2003).

In addition, our ability to probe the brain's response to different moral scenarios raises the question of what the brain is responding to. Are we in contact with objective moral truths about the world? Or are we taking our relatively automatic and reliable responses to reflect truths that are not there? Greene (2003), for example, suggests that brain science might help us see that universalism in moral judgements—which is questionable on the face of it—might not be due to perception of some objective moral truths, but instead might be indicative of common neural structures and their functions. Whether this is indeed the case, and how we should respond to it, is a question for neuroethics.

Construed broadly, the neurosciences might give us insight into why we have the particular moral intuitions we do. Are there relevant neural and evolutionary facts to consider? How deeply involved is our emotional brain? If our intuitions are indeed driven largely by emotional reactions, we might question whether principlism—the dominant view in bioethics—is an accurate reflection of those intuitions. And if it is not, should we reject the view, or embrace it as a rich and superior normative framework, not subject to the vagaries of biology? The way we answer these questions might be influenced by neuroscience, but they are supremely philosophical, and they might have an effect on applied ethics across the board.

Despite the significant overlap between questions raised by genetics and those raised by neuroscience, there are areas in which the ethical issues raised by the two diverge. Here, I have focused on the ability of neuroscience to illuminate issues involving consciousness, decision‐making and free will, and moral cognition. For the most part, I have only raised questions, although I have tried to give a sense of how current and future research might inform and shape them. The rest is up to future neuroethicists to tackle.

In closing, I note that even if we concede that the questions in neuroethics and genethics are not distinct, it would be a fallacy to conclude that there is no need for neuroethics. Our response to various bioethical problems often depends not only on a general understanding of the philosophical question at issue, but also on a detailed understanding of the biological organ or mechanism in question, the methods or techniques employed to generate the data, a sense of which data are in fact relevant, knowledge of how to properly interpret scientific results, and a sophisticated appreciation of treatments currently or conceivably available. To the extent that proper bioethical analysis depends upon a deep understanding of the science as well as the philosophy, we need people with training in both neuroscience and philosophy to engage with neuroethical questions, just as we need people with training in genetics and philosophy to tackle the problems raised by genethics. The time for neuroethics has arrived.

Acknowledgements

The author is supported by an Australian Postdoctoral Fellowship from the Australian Research Council.

Adina L. Roskies is in the Department of Philosophy at Dartmouth College, Hanover, NH, USA, and at the University of Sydney, NSW, Australia. E‐mail: adina.roskies@dartmouth.edu