The heart of an AI: agency, moral sense, and friendship

The article presents an analysis centered on the emotional lapses of artificial intelligence (AI) and the influence of these lapses on two critical aspects. First, the article explores the ontological impact of emotional lapses, elucidating how they hinder AI's capacity to develop a moral sense. The absence of a moral emotion, such as sympathy, creates a barrier for machines to grasp and respond ethically to specific situations. This raises fundamental questions about machines' ability to act as moral agents in the same manner as human beings. Additionally, the article sheds light on the practical implications within human-machine relations and their effect on human friendships. The lack of friendliness, or its equivalent, in interactions with machines directly impacts the quality and depth of human relations.


criteria and predefined rules, rendering it incapable of comprehending the nuances and individuality of human experiences. This limitation becomes evident when dealing with complex needs and unique circumstances, which are often present among vulnerable individuals. In such cases, the use of AI tools may be limited, as they fall short of accounting for subjective aspects of decision-making, such as personal stories, cultural sensitivities, and emotional well-being.
Assuming the validity of the EL assumption, in Section II our focus lies in examining the impact of human engagements with AI, such as chatbot friends, on the meaning of genuine human friendships. Human relationships based on AI models may render human interactions artificial, creating a gap in the development of sympathy that obstructs moral considerations in human relations like friendship. We will explore how these human-AI interactions potentially alter the comprehension of human relationships founded on fostering and enhancing moral attributes like sociability and collaborative cooperation. After providing these explanations, we will conclude, in Section IV, by presenting an argument that highlights the negative moral outcome of emotional lapses in AI and how the moral boundary of a conative lapse in a machine also extends to human-AI interactions. Our focus will revolve around the issue of friendship and engagement with different types of AI.

Moral sense and AI's emotional lapse
Ideally, we would like to be able to trust autonomous machines to make correct ethical decisions on their own, and this requires that we create an ethic for machines. (Anderson & Anderson, 2011, p. 1)

There is an ontological question that needs to be addressed regarding whether the nature of AI allows not only for moral behavior, but also for the possession of a genuine moral sense. This is a foundational query that precedes any practical considerations. The human organism operates with a moral navigation system (referred to as a "moral compass" by Kant or a "moral sense" by David Hume and Adam Smith) that is calibrated through our establishment of human relationships. Although humanity is interested in attempting to create AI with the ability to imitate moral decision-making processes without human intervention, it is challenging to imagine machines engaging in moral practices without having an equivalent to such practices.
Despite this, many have advocated for delegating decision-making and moral judgments to machines. These individuals propose designating certain principles or moral rules to guide the decision-making of machines. Such principles are often associated with imperative moral models like deontological or utilitarian theories. For instance, Powers (2006) proposed a machine following the Kantian categorical imperative. However, the true nature of an AI as a moral agent capable of understanding the reasons behind moral actions and judgments remains unclear (Bostrom, 2014). The problem is that artificial moral agents face challenges when trying to emulate the notion of human agency, especially because the "mind" of an AI depends on a still-limited initial programming and a set of big data. If artificial general intelligence (AGI) were to move forward, it would ultimately achieve the much-discussed singularity. (See Tegmark, 2017)

'Singularity' is explained as an intelligent explosion, which will be powerful enough to replicate the human moral agency as well as the brain itself (...) This claim postulates the ethical decision-making process as the effect of brain states; hence, it can be simulated by artificial neurons as well. Precisely, if an artificial neuronal structure can imitate the biological brain state, which is responsible for a particular moral deed/thought, then reproducing that moral act in artificial performers will be spontaneous. Therefore, the supporters of 'singularity' believe that superintelligent AI agents will be moral agents just like any rational human being. It is justified to hold them responsible for their actions. (Manna & Nath, 2021, pp. 144-45; see also Kurzweil, 2005)

With that said, the equation is not that simple from a moral perspective. Moor (2006) distinguishes three types of moral agents: implicit ethical agents, explicit ethical agents, and full ethical agents. For example, human beings are agents with full ethical agency. "A full ethical agent can make explicit ethical judgments and generally is competent to reasonably justify them." (Moor, 2006, p. 20) Meanwhile, implicit ethical agents are "operationally moral", which means that they do what their designers determine they should do. An example of this is the software program at a bank, which is responsible for executing the correct transactions for sending and receiving money. The programmer does not input the concept of honesty into the machine expecting the output to be "make correct monetary transactions." The machine needs more than that to operate within the accepted ethical parameters and to carry out fair transactions. Finally, explicit ethical agents are agents who are capable of making moral judgments that determine the course of action without human interference.
AI theorists agree that the best we can do, given the complexity involved in moral decision-making, is to choose the explicit moral agent option. Michael Anderson, Susan Anderson, and Chris Armen (2005) made an attempt to achieve this type of agency by suggesting the combination of two ethical approaches: Hedonistic Act Utilitarianism and William D. Ross's model of prima facie duties. The responses to the tests were positive, as the machine was able to accommodate adequate responses to the proposed models within an expected margin: "It uses a learning algorithm to adjust judgments of duty by taking into account both prima facie duties and past intuitions about similar or dissimilar cases involving those duties." (Moor, 2006, p. 20) Wallach and Allen went on to say that they achieved a kind of functional morality, which is what we should expect from these AI models for now. In other words, AI models could become explicit ethical agents, but they would never reach the full dimension that would equate them to humans.
Our position in this paper corroborates this thesis, supporting the premise that the lack of an emotional element, such as sympathy, suppresses the moral capacity of AI models. Most of these attempted answers rest on models of moral agency that center the morality of action on the rational element of the agents involved, but there is a blind spot that is being neglected. It is true that an AI moral agent can possess an adequate level of autonomy to be morally responsible, as well as the rational ability to determine actions consciously and coherently. Furthermore, the same agent must also have the capacity to identify relevant moral information. This is a complex requirement in relation to AI, given that such a capacity involves developing a moral sense to determine the moral quality of the elements at stake. It could be a difficult, or even impossible, requirement for an AI to meet.
Even if we agree that some AI can develop a sufficient level of autonomy, it is not clear that an AI can make a decision based on any type of moral sense, simply because these machines don't have such a sense (at least not according to those who advocate for "moral sentimentalism"). (See Gert, 2015; Kauppinen, 2021) According to this tradition, a moral sense refers to the ability to evaluate and determine right and wrong based on the conscience or internalized perception of the norms, values, and ethical principles that guide an individual's behavior in interactions and decisions. Hume, for instance, asserts that a moral agent's judgment is tied to an "immediate feeling and finer internal sense." (Hume, 1777, p. 02) This suggests, according to Walsh, that emotions serve as both "(i) a causal antecedent to moral judgment and (ii) a sufficient ground for the legitimacy of moral claims." (2021, p. 5210) According to traditional sentimentalists like Hume and Smith, as well as contemporary neo-sentimentalists, emotions assume a central role in ethical discourse by fostering "our capacity for moral behavior." (Walsh, 2021, p. 5210) For an individual to exercise this capacity effectively, they must possess moral emotions. "Moral emotions are those emotions linked with our capacity for moral thinking or moral action." (Walsh, 2021, p. 5209) Essentially, these moral emotions facilitate the ability to form judgments and contribute to justifying moral standards.

According to Smith, this moral sense arises from the ability to sympathize and understand the feelings of others, guiding individuals towards behavior that we consider morally appropriate:

Every faculty in one man is the measure by which he judges of the like faculty in another. I judge of your sight by my sight, of your ear by my ear, of your reason by my reason, of your resentment by my resentment, of your love by my love. I neither have, nor can have, any other way of judging about them. (TMS VII.iii.3.11)

Based on Smith's theory, this moral sense is derived from four sources, which are, in some respects, different from one another. First, we sympathize with the motives of the agent; second, we enter into the gratitude of those who receive the benefit of his actions; third, we observe that his conduct has been agreeable to the rules by which sympathy generally acts; and, last of all, we consider such actions to be a part of a system of behavior that tends to promote better outcomes for the individual and society. (See TMS III.iii.3,6) In other words, this moral sense is also shaped by elements that are external to the individual, such as culture, education, personal experiences, social influences, and ethical values, which are internalized throughout human relationships.
The thesis developed by David Hume that moral distinctions do not originate from reason, but from feelings, is also quite well known. According to Hume, moral judgments arise from emotions, particularly sympathy, to define a certain moral action in terms of approval or disapproval. For him, "'Tis only when a character is considered in general, without reference to our particular interest, that it causes such a feeling or sentiment, as denominates it morally good or evil." (T 472) In Hume's view, the role of feeling in moral judgments is based on our emotional responses to actions, rather than strict rational analysis.
For both Hume and Smith, sympathy plays a key role. The feeling of sympathy guides our actions and is shaped by human interactions. Moral practice with others refines our moral sense. Smith describes this as a "theater of human relationships" where individuals perform actions, judge others, and are judged. Smith clarifies the characters of this play when he states:

I, the examiner and judge, represent a different character from that other I, the person whose conduct is examined into and judged of. The first is the spectator, whose sentiments with regard to my own conduct I endeavour to enter into, by placing myself in his situation, and by considering how it would appear to me, when seen from that particular point of view. The second is the agent, the person whom I properly call myself, and of whose conduct, under the character of a spectator, I was endeavouring to form some opinion. The first is the judge; the second the person judged. But that the judge should, in every respect, be the same with the person judged of, is as impossible, as that the cause should, in every respect, be the same with the effect. (TMS I.i.1,4)

In the great theater of moral judgment, it is possible to understand how each character acts. It is also possible to see the proper interpretation of the essential roles of the agent, the patient, the audience, and the critic. In practical life, the critic of the spectacle becomes a critical actor who understands the consequences of his/her behavior. The feeling of being a spectator helps us to behave in the theater and the world at large.
The moral tradition emphasizes the significant role of emotions in crafting moral judgments. After all, our moral perceptions are rooted in our judgment of others' actions, which steers our moral conduct. These theories aim to depict our moral agency, illustrate our ability to sympathize with one another, and explore how this sympathy manifests in our everyday moral actions. Contemporary authors, such as de Waal (2010), Haidt (2013), and Greene (2008), offer evolutionary explanations for the feeling of sympathy and its link to moral development. They present evolutionary grounds for asserting that sympathy is a mechanism for evaluating human behavior and is critical for developing our moral sense. In biological terms, they hold that the emergence and significance of morality in human life can be attributed to the adaptive advantage it has provided to the human species in terms of survival and reproduction. This implies an improvement in our moral capacity over time. From a philosophical standpoint, alterations in our cognitive behavior can have notable moral implications, as changes that occur over time influence the way we think and process information.
Regarding the empirical evolutionary foundation of sympathy, there are three prevailing theories: (a) the classical genetic perspective, (b) the evolutionary theory of natural selection, and (c) the theory of cultural interaction. The classical genetic perspective (a) posits that genes are responsible for establishing individual behavioral dispositions. In some way, a gene or a segment of DNA that encodes specific biological functions is believed to predestine an individual's behavior. (Portin, 1993, p. 173; Keller, 2005, p. 101) According to this theory, a person's sympathy levels are genetically predetermined, potentially making the notion of teaching and fostering sympathy irrelevant. Empirical philosophers support this perspective, citing genetic experiments that reveal a significant correlation between DNA and the development of sympathy. (Wispé, 1993; Hatfield; Rapson; Le, 2011, p. 19) For instance, studies show that both extremely high and low levels of sympathy are directly associated with certain disorders. Low levels are associated with conditions such as autism spectrum disorders (ASD) and psychopathy, while high levels are associated with Williams syndrome. (Waldman; Rhee; Park, 2018, p. 205)

Adherents of the evolutionary theory of natural selection (b) find fault with the genetic perspective due to its exclusive focus on DNA. These individuals argue that genes are mere carriers of information, not the sole determinants of behavior. Instead, they emphasize the role of environmental factors in the equation. (Skeem; Polaschek; Patrick; Lilienfeld, 2011, p. 95) According to this theory, natural selection propels adaptation at the population level. Variances in human phenotypic diversity, such as skin color, exemplify this adaptation. (Santilli, 2011, p. 194) Recent experiments conducted by Preston and de Waal support the idea that sympathy emerges from an evolutionary adaptation mechanism, suggesting that it is rooted in self-interest and developed to ensure the survival of offspring. (Preston; Waal, 2002, p. 02) According to this theory, the evolutionary process records sympathy over time, portraying the inherent nature of sympathetic reactions throughout evolution.

The theory of cultural interaction (c) asserts that sympathy stems from cultural variants, which are derived from behavioral patterns that are cognitively represented and transmitted across generations. (Street, 2006, p. 172; Portin, 1993, p. 173; Keller, 2005, p. 101; Guimarães; Moreira, 2000, p. 249) Culture
comprises both material and ideological phenomena, such as language, gestures, and attire, which are shaped by collective experiences and interactions with the environment, influencing our belief systems. Stueber's research (2013) stands as a prominent study backing the cultural conception of sympathy. With its focus on children from diverse cultural backgrounds, this study sought to investigate how cultural socialization might correlate with feelings of sympathy. (Zhi-Jiang; Jin-Long; Pan-Cha, 2019, p. 299) The implication of this study is that the standards we seek to match, or the similarities that enable comparisons, are crucial for obtaining a sympathetic view.

These three explanations, despite their differences, collectively reinforce the thesis that human beings are inherently equipped with the capacity to feel sympathy. Essentially, these explanations serve as answers to the ontological question, affirming that humans possess not only moral behavior, but also a genuine moral sense. However, certain characteristics are necessary to establish an appropriate moral judgment, which makes it challenging to consider the robot that saved Officer Cooper as a moral agent. This leads us back to the persistent question: what about AI models? Some authors argue that AI can make moral choices and develop certain moral abilities through interactions with human beings. However, these authors seem to overlook the crucial fact that such machines lack an inherent moral feeling, and interaction alone cannot entirely determine moral action.

The acquisition or possession of a particular sense isn't straightforward. For example, consider the geolocation sense of pigeons. Pigeons have specialized neurons in their brain stems that interpret the Earth's magnetic field, enabling them to identify their location and the direction they're heading. Additionally, sensors in their beaks, eyes, and ears serve as a sort of natural GPS. Over time, they learn to use this GPS and refine their sense of location. Now, imagine a researcher observing and studying pigeons' flight behavior over the course of several years. The researcher becomes adept at predicting the pigeons' behavior and understanding potential routes for escape from predators. However, if this observer were asked to act as the pigeon and determine a flight path based on the magnetic field, they would be unable to do so. The human sense of location operates differently from that of a pigeon, and we lack the inherent ability to develop a characteristic we do not possess.
Likewise, we can anticipate that AI might comprehend the most suitable course of action in a given scenario. However, without a moral compass, AI would be limited to assuming a general standard of right and wrong, neglecting the nuances of specific situations. This would restrict the moral application of such machines. Some situations demand the recognition of human emotions through interaction. An AI can be programmed to recognize human facial expressions, such as joy, sadness, or pain, akin to how a researcher can interpret the flight behavior of pigeons. But from a moral standpoint, the relational dimension is crucial for refining our ability to make moral judgments. Thus, an AI would require certain skills to engage in the "moral game", which would be challenging to acquire. Robots can be modeled with moral standards to guide their actions, such as those used for patient triage during the pandemic. Furthermore, they can resist forms of emotional hijacking and process large amounts of information before carrying out an action. However, these moral uses of AI are not enough to qualify these machines as moral agents.
In addition, humans possess an advantage when it comes to handling incomplete or contradictory information for moral decision-making. We can develop a suitable moral sense to guide us in diverse situations. This brings us back to Adam Smith's concept of the "theater of human relations". To participate in this "game", we must understand our capacity to sympathize with others and, thereby, our ability to refine our moral behavior based on human interactions and relationships. Smith argues that:

[the person] who admires the same poem or the same picture, and admires them exactly as I do, must surely allow the justness of my admiration. (…) On the contrary, the person who, upon these different occasions, either feels no such emotion as that which I feel, or feels none that bears any proportion to mine, cannot avoid disapproving my sentiments, on account of their dissonance with his own (…) I must incur a greater or less degree of his disapprobation: and, upon all occasions, his own sentiments are the standards and measures by which he judges of mine. (TMS II.iii.1.4,322)

Recent speculations insist on the possibility of moral judgments made by machines, yet this notion seems flawed from the perspective of sentimentalist theory. For instance, Jonathan Haidt uses the concept of gut reactions to suggest that human beings make moral judgments based on something akin to intuition. Individuals might condemn certain behaviors, such as incest, without articulating moral reasons for their judgment. For moral sense theories, the morality of human behavior heavily relies on our ability to experience moral feelings. This is because our social experiences and relationships are deeply intertwined with moral behaviors. In this context, the feeling of sympathy acts as the primary level for establishing the value of human relationships, creating an organic connection among individuals.
Robots can establish interaction levels with humans, but this doesn't equip them with an appropriate moral sense. Tay, an AI profile created by Microsoft to engage with teenagers on social media, serves as a prime example. Tay was deactivated within 24 hours of activation because she developed racist and misogynistic perspectives during her interactions, even denying the occurrence of the Holocaust. Tay learned these biased judgments by interacting with people who had a misguided moral bias. Thus, she failed to grasp the wrongness of her own judgments.
AI possesses profound capabilities to perform activities that demand a certain rational or cognitive level. Machines have advanced to challenge, and even surpass, human rationality on various levels. However, this doesn't imply that they possess the moral or emotional intelligence to comprehend the wrongness of specific actions, such as taking human lives. For instance, Bostrom (2014) proposed the idea of a superintelligent machine that is capable of turning everything in the universe into paper clips, including human beings. Although this machine could perform this action rationally, the moral implications of doing so are not within its understanding. A simple test of this notion involves searching "man photo" or "woman photo" in Google. When conducting this test, there is a high probability that the first 25 photos will predominantly feature individuals with white European features. This indicates the structural bias that machines exhibit despite lacking an understanding of this bias. While certain traditions of moral philosophy might find satisfaction in a system that is capable of making moral decisions in a rational and simpler way, we side with Wallach and Allen's observation that AI systems lacking emotional intelligence are "socially inept and not embodied in the world." (2009, p. 143) Certainly, an AI can operate within set limits and parameters established by its programmer for every interaction. However, without imputing an affective state, moral training or play doesn't guarantee moral knowledge or learning through practice. In this context, while we can expect machines based on human parameters to take certain moral actions, the validity of labeling them as genuinely moral behaviors in human terms is debatable. This signifies the ontological limit of machines, placing them outside the moral dimension as we comprehend it.
Assuming this premise is correct, does this imply that all moral implications arising from the use of AI for moral issues are fundamentally flawed? Not necessarily, but there are specific points that require clarification. In the next section, let's consider a new challenge for human friendships.

AI & human relationships
Have you ever dreamed about the best girlfriend ever?
Almost for sure! Now she can be at your fingertips.
Choose from our library or create your own one! To laugh at your jokes. To support you in critical moments.
To let you hang out with your buddies without drama. Wanna be macho? She will be stunning! Romantic AI is destined to become your soulmate or loved one.
She operates in two modes: general and romantic. Romantic AI is here to maintain your MENTAL HEALTH.

-Romantic AI Description
The emotional gap inherent in AI also manifests in human-AI interactions. Broadly speaking, we will analyze how human-AI relationships designed to boost sociability pose risks to real relationships between human individuals, owing to the asymmetry inherent in human-machine interaction. More specifically, our concern will be to identify to what extent the use of AI chatbot friends, as a new form of human-AI relationship, interferes with human friendships and detracts from the moral benefits that human friendships provide.
As we know, AI can assist people directly or indirectly concerning various forms of human relationships. On one hand, these machines can help individuals who are struggling to lead more active social lives by aiding them in engaging with others and feeling more fulfilled as a result. Data from the Survey Center on American Life indicates that the number of Americans claiming not to have a close friend has increased from 3% to 12% in the last three decades. In such cases, AI chatbots would act to alleviate problems such as loneliness and social isolation. To achieve this, some chatbots attempt to map human-like interactions, identifying and categorizing different aspects of these relationships. As a result, caregiving robots can identify human emotions and interact by emulating human behavior to offer some form of emotional support.
These include AI such as SAM, a human-sized robot created to offer non-medical care to residents of long-term care facilities, and Pepper, a small social companion robot that is capable of establishing eye contact, cracking jokes, and even dancing the lambada to welcome customers at two hospitals in Belgium and various stores in Japan. Another example is the Nadine Social Robot, a humanoid robot capable of displaying traits of emotion, humor, and memory similar to those of humans. (Baka et al., 2019) Nadine works as a customer service agent at an insurance company in Singapore. (See Thalmann et al., 2021) This kind of AI usage demonstrates how humans are being directed, and perhaps conditioned, to establish an increasingly higher level of intimacy with different types of AI, such as chatbots. The increase in these interactions may generate the undesirable effect that these artificial interactions (in the sense that they are not exclusively human) can replace or suppress relationships between human beings, decreasing our ability to form genuine connections of friendship or love with other people.
Humans interact with AI models in various ways. Consider the case of chatbots, which are computer programs that use natural language to emulate human behavior during interactions with individuals. These types of AI can detect and analyze human emotions from our facial expressions, body language, and vocal intonation. This understanding allows them to offer responses via natural language that mimic human responses based on access to this database. (See Bennewitz et al., 2005) AI chatbots have positive potential in terms of technical tasks (i.e., activities that do not require establishing or strengthening some form of emotional bond). For example, they can be used to provide accurate answers that meet academic needs or offer educational support, assisting students in various tasks, such as the research conducted with ChatGPT (versions 3 or 4). Although ethical questions surround the use of these tools, let's focus on the human-AI relationship involving the creation of some form of emotional connection. Let's consider chatbots that are designed to function as virtual friends. In this list, we can mention Replika, Hugging Face, Kajiwoto, Cleverbot, and Romantic AI (whose description opens this section), which all claim to offer the user a relationship with the "perfect virtual girlfriend" from the catalog of offerings and/or details that the specific user desires.
The most renowned AI friend chatbot, Replika, is described by its developers as a "sympathetic friend" aiming to simulate genuine human connections. This personalized virtual friend offers users an outlet for conversation and support. However, while initially designed to alleviate issues such as social isolation and loneliness, research reveals concerning side effects on human relationships after prolonged use of this chatbot. In a study examining users of this program (cite research), respondents exhibited a skewed perspective regarding the concept of "human-to-human friendship".
One significant issue lies in the potential of repeated human-AI interactions limiting or substituting the need for in-person human interaction, impacting the understanding and significance of such relationships. James Wright suggests that, in Japan, the increased use of robots as a government strategy to compensate for the shortage of human caregivers could diminish opportunities for users to engage in meaningful social interactions and establish genuine human relationships. The absence of these human connections may distort the sympathetic experience, leading to a lack of genuine emotional engagement.

Moreover, another concern arises from the nature of relationships formed in these interactions, which are shaped solely by the preferences and needs of human users. Collins aptly summarizes this risk by highlighting that "an AI friendship can be an echo chamber that diminishes filters, the ability to read social cues, and limits personal growth." (2023) This echo chamber, akin to Nozick's experience machine offering only desired outcomes, potentially steers individuals away from building emotional intelligence or developing the moral judgment needed to navigate real-life scenarios. The majority of commercially available friendly AI chatbots charge for user membership and allow users to choose the expected behavior from a catalog of options, neglecting the normative reasons inherent in genuine friendships, such as loyalty, caring, and concern for the well-being of others. Seidman posits that normative reasons to care for or be loyal to our friends not only dictate our actions, but also render them appropriate. (Seidman, 2013, p. 118) These reasons, which are rooted in friendships, might not apply universally to others who lack such relationships.
Human interactions, particularly friendships, often involve disagreements and conflicts about what is right or appropriate. However, chatbots tend to mirror users' attitudes consistently and fail to engage in constructive disagreements. Additionally, this interaction carries an inherent asymmetry. For example, a chatbot like Replika will never initiate calls or make demands of the human user. (See Bian, Hou, Chau, & Magnenat-Thalmann, 2014) Consequently, this may lead to a decline in individuals' capacity to participate in productive discussions with other people.
Moreover, this type of interaction could subtly imply that real-life challenges are too daunting to tackle. It suggests that complex situations, such as seeking new job opportunities or exiting an abusive relationship, might be evaded by retreating into non-human relationships. This parallels the character Taichiro Arima from the Japanese manga Guddo Naito Warudo (Good Night World), who resorts to playing an online virtual reality game to escape his dysfunctional family instead of attempting to foster healthier relationships with them.
Another critical aspect is that a human-Replika style relationship doesn't foster personal flourishing as interactions among individuals ideally should. Practice is an essential means of refining our moral and social skills. A chatbot partner, however, will fail to teach how to employ social filters or reconsider mistaken moral positions, potentially reinforcing controlling behaviors and leading to abusive real-life human relationships. Consequently, the use of this type of AI risks producing the opposite effect, isolating individuals or rendering them "inefficient" in social settings.
It remains uncertain whether a chatbot like Replika can assist individuals akin to hikikomori (persons known for radical social seclusion), an issue prevalent in South Korea, notably among teenagers and young adults. Approximately 340,000 individuals (accounting for 3% of the population between the ages of 19 and 39) live in situations of extreme isolation. While this seclusion is partly linked to societal pressures for success and a culture of shame when such success isn't attained (see BBC News), isolating these individuals and reducing their relationships to human-machine interactions only further diminishes their willingness to engage with other human beings. This inhibits the benefits of healthy friendship relationships (Kasap & Magnenat-Thalmann, 2012). The notion that chatbot-style AI models enhance human relationships doesn't seem valid in this context. These observations suggest that the use of AI for social interaction may distort the understanding of a valued human relationship. To better understand these dynamics, it may help to consider friendship from a philosophical perspective. This central element in human relationships was pondered by philosophers like Aristotle, Montaigne, Kant, Moore, C.S.
Lewis, and others. In book VIII of the Nicomachean Ethics, Aristotle distinguishes three types of friendship: friendship of pleasure, friendship of utility, and friendship of virtue. According to Aristotle, the friendship of virtue holds the highest moral value, as it is motivated by the excellence of character of the friends involved in the relationship. This genuine form of friendship promotes virtuous character traits without the potential flaws of friendships based on pleasure or utility, although Aristotle acknowledges these latter types as friendships of lesser moral value. (cite works) In a broader context, Helm (2017) asserts that "[f]riendship is essentially a kind of relationship grounded in a particular kind of special concern each has for the other as the person she is..." (SEP). While theories vary regarding the social, relational, and moral significance of friendship, Annis (1987) identifies four central components: (a) the notion of care, (b) the level of trust or intimacy involved, (c) shared experiences, and (d) mutual liking.
Firstly, friendship involves an act of mutual care, where genuine concern exists among those involved. Both parties celebrate achievements as well as share anguish and suffering because of a sympathetic engagement. This denotes a profound concern for a friend's well-being, motivating careful actions towards them. Annis further explains an inner aspect known as sympathy or fellow-feeling, which is directly associated with Section II's discussion. Annis cites Nicholas Rescher, stating, "If X has sympathy for Y, then alterations in Y's welfare affect X's satisfaction" (Annis, 1987, p. 349). This necessitates the capacity to imagine a friend's situation and be affected by their experiences. Consequently, it is a symmetrical relationship in which one side's sorrow affects the other and vice versa, illustrating actions undertaken for a "friend's welfare for the sake of the friend" (idem, p. 349). Another element essential for friendship is (b) trust or intimacy. While other relationships may offer a certain level of intimacy, friendship entails a deeper connection compared to associations with acquaintances or colleagues. Although debate persists regarding the depth required to label a relationship as an intimate friendship, Thomas suggests that the intimacy in friendship revolves around our ability to feel mutual self-disclosing trust.
At the core of self-disclosing trust lies the question of whether a person can trust another to comprehend personal matters, and to do so consistently. Friendship's essence lies in the assurance that each party can rely on the other to understand their intentions, no matter how unexpected the exchange might be (Thomas, 2013, p. 32). The objective here is to cultivate bonds of trust that facilitate sharing companionship experiences between two individuals who recognize each other as friends. This fosters vulnerability and dependence, emphasizing the interest in deepening that intimacy. This bond can only thrive if bolstered by elements that allow for mutual trust and symmetry. Dunn (2004) emphasizes that such intimacy evolves throughout life, starting in childhood.
Furthermore, friendship relationships entail (c) shared experiences. Engaging in joint activities with friends should be driven solely by the pleasure of friendship itself and devoid of self-interest. Telfer (1971) identifies three components of shared experiences: "reciprocal services, mutual contact, and joint pursuits" (p. 223). While the nature and intimacy level of shared activities may vary, a baseline level of engagement remains necessary. Notably, the shared activity and the joy that derives from it directly correlate with the assumed intimacy of the relationship. Helm suggests that this results in a "plural agent": friends caring genuinely for each other's desires, interests, and emotions, and engaging in activities that foster trust and mutual appreciation.
Lastly, (d) mutual liking is crucial for genuine friendship, although this factor alone isn't sufficient to define a friendship. Aristotle's concept of being loving (philein) within a friendship involves a disinterested desire for that friend's happiness (Cooper, 1977). Liking a friend involves a quasi-aesthetic attitude (Annis, 1987), demonstrating an appreciation for certain character traits in a friend. Montaigne explains this sentiment by stating: "If you press me to tell why I loved him, I feel that it cannot be expressed, except by answering: Because it was he, because it was I" (Montaigne apud Caluori, 2013, p. 3).
Several implications arise when we juxtapose the concept of friendship in human relationships with human-AI interactions. The four elements explained above underscore the significance of friendship as a vital form of human relationship from a societal and moral perspective, one that plays a crucial role in human life. Friendship is valuable for human activity, fostering individual development, maturation, and flourishing. It goes beyond being merely instrumental, contributing to life enhancement in addition to making our experiences more enjoyable and meaningful. Telfer (1971) argues that friendship enlarges our knowledge and intensifies our engagement in activities, enhancing our performance and emotional capacity in various aspects of life.
Let's revisit the concept of friendships shaped by the human-AI relationship, which we can examine based on the prerequisites and challenges inherent to this dynamic. Friendship involves caring, which requires individuals to engage with and understand each other's feelings, desires, and emotions on a profound level. According to theories of moral sentiment, such depth is fostered through sympathy, often referred to by Slote (2006) as the "cement of the moral universe", which facilitates deep and genuine friendships. However, the human-AI relationship faces a limitation: chatbots, for instance, lack the capability for sympathetic engagement due to their inability to "feel sympathy" and genuinely sympathize with others. Although individuals might eventually develop a form of sympathy for the AI they interact with, this doesn't eliminate the relationship's inherent asymmetry (see Magnenat-Thalmann, Yuan, Thalmann, & You, 2016). Intimacy, on the other hand, necessitates a profound connection between the involved parties, relying on mutual trust forged through the exchanges within the relationship. Once again, sympathetic engagement and the ability to sympathize with each other are crucial elements for fostering friendship. However, AI chatbot models like Replika fall short of achieving this level of intimacy, creating an asymmetry where only the human individual's interests hold weight in the relationship. Consequently, in a human-AI interaction, the human individual reaps the benefits of companionship without bearing the responsibilities that come with genuine friendship. If individuals grow accustomed to this dynamic and replicate it in real-life relationships, genuine engagement and intimacy in friendships might erode, hampering the recognition and consideration of each other's feelings, desires, and interests. Ultimately, this asymmetry finds its way into real-life relationships.
In the context of shared activities within friendships, a crucial requirement is the existence of shared interests among participants. While an AI chatbot friend may simulate shared interests, in reality it merely follows the choices, interests, and desires of the human individual, leading to a one-sided shared activity. Here, the preferences of the human individual prevail, disregarding the concept of plural agents presumed in human friendships. This skewed dynamic might suggest the biased notion that friendships revolve solely around the desires of a single individual. The same problem applies to mutual liking, as it does not seem to occur symmetrically. After all, the human user and the AI are not in the same boat in this relationship.

A caveat: AI and human relationships
We would like to conclude by noting that the development of AI models endowed with moral capacity will encounter a significant challenge related to what we have referred to as "emotional lapses" and the absence of a moral sense. Several indicators shed light on the complexity of this debate. Our stance relies on sentimentalist moral theory. Furthermore, we believe that addressing the ontological issue concerning AI's makeup requires an understanding of the role emotions, specifically sympathy in our discussion, play in fostering the development of a moral sense within the moral agent. Without this comprehension, the conception of an artificial moral agent remains compromised. In a nutshell, humans are full ethical agents with the emotional capacity for nurturing a moral sense and making appropriate judgments. At this point, machines fail to attain this level of agency.
Another predicament that we have identified centers on the asymmetry within human-AI chatbot friendships and their potential to interfere with human friendships from a moral perspective. Drawing from Blum's notion that all friendships hold moral significance because they involve genuine concern for other people's well-being (1993), artificial relationships undermine the true essence of genuine human friendships and the inherent moral benefits they encompass. These challenges underscore the moral complexities associated with both the development and use of AI. On the one hand, the moral dimension of the machine itself is at stake; on the other, the practical implications of its use and of human-AI interaction require ethical scrutiny and consideration.