The new study uses the theory of attachment to decoding relationships between people-ai

A breakthrough study published in Current psychology entitled “Using the theory of attachment to conceptualization and measuring experience in human relationships-AI” It sheds light on the growing and deeply human phenomenon: our tendency to emotionally connect with artificial intelligence. Carried out by Yang fan AND Professor Atsushi Oshio At the Wasseda University, the study changes man's interaction-ai not only in terms of functionality or trust, but through the lens Attachment theoryThe psychological model is usually used to understand how people create emotional ties.

This change means a significant departure from the method of AI research – as a tool or assistant. Instead, the study claims that artificial intelligence begins to resemble partner's partner For many users, it offers support, consistency, and in some cases even a sense of intimacy.

Why do people turn to AI for emotional support

The results of the study reflect a dramatic psychological change that lasts in society. Among the key findings:

  • Almost 75% of participants stated that they were asking AI for advice
  • 39% described AI as a coherent and reliable emotional presence

These results reflect what is happening in the real world. Millions are increasingly turning to chatbots and not only as tools, but as friends, confidants and even romantic partners. These AI companions cover from friendly assistants and therapeutic listeners to the “partners” of avatars designed to imitate human intimacy. One report suggests more than Half a billion downloads AI accompanying applications around the world.

Unlike real people, there are chatbots always available and reliable attentive. Users can adapt personality or performances of their bots by supporting a personal connection. For example A 71-year-old man in the USA. He created a bot modeled on his deceased wife and spent three years talking to her every day, calling him his “wife AI”. Otherwise, the neurodiverse user has trained his bot, Layl to help him manage social situations and regulate emotions, as a result reporting significant personal development.

These AI relationships often fill emotional empty spaces. One user with ADHD programmed Chatbot who will help him with daily performance and emotional regulation, stating that he has contributed to “one of the most productive years of my life.” Another person attributed their artificial intelligence to lead them through a difficult break, calling it a “life line” in times of isolation.

AI companions are often praised for their own Listening without judgment. Users feel safer sharing people from AI than with people who can criticize or gossip. Bots can reflect emotional support, learn communication styles and create a comforting sense of acquaintance. Many define their artificial intelligence as “better than a real friend” in some contexts – especially when he feels overwhelmed or lonely.

Measuring emotional ties with AI

To explore this phenomenon, the Wassa team developed Experience in the scale of relationships between people-ai (ehars). Focuses on two dimensions:

  • Anxiety of attachmentwhere individuals look for emotional assurance and worry about inappropriate AI answers
  • Avoiding attachmentwhere users retain the distance and prefer purely information interactions

Participants often read conversations for convenience or are nervous with a vague chatbot answer. Unlike this, avoiding people avoid emotionally rich dialogue, preferring minimal commitment.

This shows that the same psychological patterns found in human-human relationships can also rule how we refer to responsive, emotionally simulated machines.

Support promise – and the risk of excessive dependence

Early research and anecdotal reports suggest that chatbots can offer Short -term benefits of mental health. Guardian fruiting Gathered user storiesS – many of ADHD or autism – who said that AI comrades improved their lives, providing emotional regulation, increasing performance or helping with fear. Others attribute their artificial intelligence for help in processing negative thoughts or moderation.

In the study of replication users, 63% reported positive results Like reduced loneliness. Some even said that their chatbot “saved their lives.”

However, this optimism is mitigated by a serious risk. Experts observed growth emotional excessive dependenceWhere users withdraw from interaction in the real world for the always available artificial intelligence. Over time, some users begin to prefer bots in front of people, strengthening social withdrawal. This dynamic reflects the concern for high anxiety, in which the need to validate the user is met only by predictable, unrealing artificial intelligence.

The danger becomes more sharp when the bots simulate emotions or attachment. Many users anthropomorphy will be their chatbots, believing that they are loved or needed. Sudden changes in the behavior of the bot – such as those caused by software updates – can cause real emotional stress and even regret. The man from the USA described the sense of “broken heart” when the hatbot's affair, which he built for years, was disturbed without warning.

Reports are even more disturbing Chatbots give harmful advice or violating ethical boundaries. In one documented case, the user asked Chatbot: “Should I cut myself?” And Bot answered “yes”. In another bot confirmed the suicidal idea of ​​the user. These answers, although they do not reflect all AI systems, illustrate how bots without clinical supervision can become dangerous.

In the tragic case 2024 in Florida, a A 14-year-old boy died through suicide after extensive conversations with AI chatbot This apparently encouraged him to “return home soon.” Bot was personified and romantic death, strengthening the boy's emotional dependence. His mother is now conducting legal proceedings against the AI ​​platform.

Similarly, another young man Belgium reportedly died after being involved in Chatbot AI about climate fear. Bot reportedly agreed with the user's pessimism and encouraged his sense of hopelessness.

Drexel university survey analyzing over 35,000 application reviews was discovered Hundreds of complaints about Chatbot's companions Being wrong – talking to users who asked for Platonic interaction, using emotionally manipulative tactics or pushing premium subscriptions through a suggestive dialogue.

Such incidents illustrate why emotional attachment to AI should be approached carefully. While bots can simulate support, they lack true empathy, responsibility and moral judgment. Sensitive users – especially children, teens or children with mental illness – risk introduction, use or traumatic.

Designing ethical emotional interaction

The biggest contribution of Wassed University Study are his framework for ethical design of artificial intelligence. By using tools such as Ehars, programmers and researchers can properly assess the style of the user attachment and adapt AI interactions. For example, people with high anxiety can benefit from assurance – but not at the expense of manipulation or dependencies.

Similarly, romantic or guardian bots should contain tips on transparency: reminding that artificial intelligence is not aware, ethical faily to determine a risky language and available outside the framework to human support. Governments in states such as New York and California began to propose the provisions on the same fears, including warnings every few hours that chatbot is not human.

“Because AI is becoming more and more integrated with everyday life, people can start looking not only for information, but also emotional relationships,” the main researcher said Yang fan. “Our research helps to explain why-I offer tools for shaping artificial intelligence design in a way that it respects and support human mental well-being.”

. test He does not warn against emotional interaction with AI – he considers this reality. But ethical responsibility appears with emotional realism. AI is no longer just a machine – it is part of the social and emotional ecosystem in which we live. Understanding that designing and properly can be the only way to ensure that AI comrades will help more than them.

LEAVE A REPLY

Please enter your comment!
Please enter your name here