Affective Computer Brain Interfaces and Moral Enhancement: Issues of Control and Acquired vs Infused Virtue

Attached to Paper Session

Meeting Preference

In-Person November Meeting

Submit to Both Meetings

On January 30, 2024, Elon Musk announced that the first participant in Neuralink’s human trials was successfully recovering from implantation surgery, and on February 20 that the participant could move a computer mouse with only their thoughts. Musk believes that computer-brain interfaces (CBIs) are essential to the survival of the human race, since they are the only thing that will allow humans to compete with AI, but others are concerned about what we might lose if we allow such devices access to our brains. In this paper I will look at the virtues of prudence and temperance and at how CBIs could be used to reduce the effects of anger or to prevent people from acting violently. The second section of the paper discusses free will, CBIs, and the role religion might play in choosing to use such devices. Finally, I will turn to the question of the relationship between virtue and moral enhancement, comparing CBIs to a current pharmaceutical trend in weight loss.

Research into affective CBIs is relatively new. However, scientists are trying to find ways to provide neurofeedback to people to help them identify their emotional states. They are also trying to find ways to stimulate or influence the emotional states of people. Other possible uses would be to help people move facial muscles to help them express the emotional states they feel or the emotions they wish to convey, and to be able to identify the emotional states of others in social interactions.

I would argue that these approaches can be placed into two categories, and that these categories have quite different implications. The first category is recognition of emotional states, whether one’s own or those of others. The second category is control, whether controlling the expression of one’s emotional states or creating or changing an emotional state. While both categories allow for moral enhancement, only the first allows people to acquire moral virtue. The second category is more like a moral shortcut: it could help one act more temperately, but it would not facilitate the acquisition of temperance. It would be more akin to CBIs “infusing” users with temperance.

Providing people with neurofeedback about their current emotional state, or about how they respond emotionally to something said or to a situation, gives them additional information that they can use in moral deliberation. Simply being aware that you are angry can give you more control over how anger affects your decision-making. This would allow people to better exercise the virtue of prudence, for example, because they would be aware of how anger could shape their perception of the situation and of what the right thing to do is. CBIs of this kind would leave it to people to choose whether or not to act in a way that is prudent or temperate, etc.

On the other hand, CBIs that control or change people’s emotional states allow people to arrive at the same end or action, but without the moral development. In this case, the CBI would identify when a person is angry and change their emotional state so that they are no longer angry or do not react violently. While this enhances a person’s ability to act morally, it does not actually help them acquire virtue; the virtue is given to them. If the CBI were turned off or removed, the person would not retain its benefits.

There are a variety of ethical concerns that arise from the potential use of affective CBIs. Assuming that questions of safety (brain damage, post-surgery infection, and the like) can be overcome, the first category, identifying emotional states, raises a primary concern about privacy. Like current wearable technology, CBIs would be capable of constantly gathering data from the person who received the interface. If a CBI can identify the emotional states of others, then it can also collect data about anyone with whom its user interacts.

For CBIs that can control emotional states, the primary concern is free will. If there is something in one’s brain that can alter one’s emotional states, how sure can people be that they are doing what they actively choose to do and are not being nudged, manipulated, or controlled? Misinformation and its effects already raise this question, but when the technology is in one’s brain, the question is raised to another level. Why, then, might people decide to use CBIs to control their violent tendencies if such possibilities for misuse exist?

Religion could be a primary motivation for people to use these affective CBIs. If people believe that nonviolence is a stance one should take regardless of the situation, or that acting violently is especially sinful or problematic, then this could be reason enough to look to CBIs to regulate one’s behavior. Matthew 5:28-29 says it is better to lose an eye than to look upon others with lust. Using a CBI to regulate how you feel while interacting with people, some might think, would be a better alternative than going blind.

The final point I will make compares how affective CBIs that control emotional states affect the virtue of temperance with how current pharmaceutical weight-loss treatments affect it. Semaglutide (Ozempic and Wegovy) is a type of medicine called a glucagon-like peptide-1 (GLP-1) receptor agonist. It mimics GLP-1 in signaling the brain that one is full or satiated. This reduces “food noise” and helps people feel full while eating less, leading to weight loss. Studies are showing that, without maintaining the behavioral changes they may have made while on the medication, people are likely to regain weight if they stop taking it. This is a pharmaceutical form of infused virtue, just as controlling emotional states would be a CBI form of infused virtue. If medication or CBIs help people make better decisions, should we be concerned that they are not helping people acquire virtue? Does that truly matter if people are healthier and act more morally toward others?

Abstract for Online Program Book (maximum 150 words)

In this paper I will explore the use of computer-brain interfaces (CBIs) for moral enhancement. One type of enhancement to be discussed is the reduction of violence. This raises questions about control and free will, however, so while there may be solid philosophical reasons to prohibit requiring this kind of moral enhancement, there may be compelling theological reasons why people might voluntarily choose it. The concluding section will focus on the relationship between moral enhancement and virtue. While there is no universal consensus, there does seem to be some agreement among scholars that using gene editing for moral enhancement cannot engineer virtue. The question posed here is whether CBIs and their use can bring about virtue, or whether they simply allow people to act more morally. My tentative answer is that the answer is more complicated for CBIs than for gene editing.

Authors