Is robot exploitation universal or culturally dependent?

According to a new study published in Scientific Reports by scientists from LMU Munich and Waseda University Tokyo, people in Japan treat cooperative artificial agents with the same level of respect as humans, while Americans exploit artificial agents more often.

As self-driving vehicles and other autonomous robots become increasingly integrated into everyday life, cultural attitudes toward artificial agents may determine how quickly and effectively these technologies are adopted in different societies.

A cultural divide in human-machine cooperation

"As self-driving technology becomes a reality, these everyday encounters will define how we share the road with intelligent machines," said Dr. Jurgis Karpus, lead researcher from LMU Munich.

The study is one of the first comprehensive cross-cultural investigations of how people interact with artificial agents in scenarios where interests do not always align. The findings challenge the assumption that algorithm exploitation, the tendency to take advantage of cooperative AI, is a universal phenomenon.

The results suggest that as autonomous technologies become more common, societies may face different integration challenges depending on their cultural attitudes toward artificial intelligence.

Research methodology: Game theory reveals behavioral differences

The research team used classic behavioral economics experiments, the Trust Game and the Prisoner's Dilemma, to compare how participants from Japan and the United States interacted with human partners and with AI systems.

In these games, participants chose between self-interest and mutual benefit, with real monetary incentives ensuring that their decisions were genuine rather than hypothetical. This experimental design allowed researchers to directly compare how participants treated humans versus AI in identical scenarios.

The games were carefully structured to mirror everyday situations, including traffic scenarios in which people must decide whether to cooperate with another agent. Participants played multiple rounds, sometimes with human partners and sometimes with AI systems, enabling a direct comparison of their behavior.
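The incentive structure described above can be sketched with the standard one-shot Prisoner's Dilemma. The payoff values below are illustrative placeholders, not the actual stakes used in the study:

```python
# A minimal sketch of a one-shot Prisoner's Dilemma payoff matrix.
# The numbers are hypothetical; the study used real monetary stakes
# whose exact values are not given here.
PAYOFFS = {
    # (my_move, partner_move) -> (my_payout, partner_payout)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # I am exploited
    ("defect",    "cooperate"): (5, 0),  # I exploit my partner
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def play_round(my_move: str, partner_move: str) -> tuple:
    """Return (my payout, partner payout) for one round."""
    return PAYOFFS[(my_move, partner_move)]

# Defecting against a cooperator pays more individually (5 > 3);
# this temptation is what the study measured against human vs. AI partners.
print(play_round("defect", "cooperate"))
print(play_round("cooperate", "cooperate"))
```

The key point the matrix captures is that exploitation is individually profitable whether the partner is a human or an AI; what differed between cultures was the willingness to act on that temptation.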

"Our participants from the United States cooperated with artificial agents significantly less than with humans, whereas participants in Japan showed equivalent levels of cooperation with both types of co-player," the article states.

Karpus, J., Shirai, R., Verba, J.T., et al.

Guilt as a key factor in cultural differences

The researchers suggest that differences in experienced guilt are the main driver of the observed cultural variation in how people treat artificial agents.

The study showed that people in the West, particularly in the United States, tend to feel remorse when they exploit another human but not when they exploit a machine. In Japan, by contrast, people appear to experience guilt similarly regardless of whether they mistreat a human or an artificial agent.

Dr. Karpus explains that, in Western thinking, cutting off a robot in traffic does not hurt its feelings, a perspective that may contribute to a greater willingness to exploit machines.

The study included an exploratory element in which participants reported their emotional reactions after the game results were revealed. These data provided key insights into the psychological mechanisms underlying the behavioral differences.

Emotional reactions reveal deeper cultural patterns

After exploiting a cooperative AI, Japanese participants reported feeling significantly more negative emotions (guilt, anger, disappointment) and fewer positive emotions (happiness, victory, relief) than their American counterparts.

The study showed that defectors who exploited their cooperative AI partner in Japan reported feeling significantly more guilty than defectors in the United States. This stronger emotional reaction may explain the greater reluctance among Japanese participants to exploit artificial agents.

Conversely, Americans felt more negative emotions when exploiting humans than when exploiting AI, a distinction not observed among Japanese participants. For people in Japan, the emotional reaction was similar regardless of whether they had exploited a human or an artificial agent.

The study notes that Japanese participants felt similarly about exploiting cooperative humans and cooperative AI across all emotions surveyed, suggesting a fundamentally different moral perception of artificial agents compared with Western attitudes.

Animism and the perception of robots

Japan's cultural and historical background may play a significant role in these findings, offering potential explanations for the observed differences in behavior toward artificial agents and embodied AI.

The article notes that Japan's historical affinity for animism, and the Buddhist belief that non-living objects can possess souls, has led to the assumption that Japanese people are more accepting of robots than individuals in other cultures.

This cultural context may create a fundamentally different starting point for how artificial agents are perceived. In Japan, the perceived gap between humans and non-human entities capable of interaction may be smaller.

Studies indicate that people in Japan are more likely than people in the United States to believe that robots can experience emotions, and are more willing to accept robots as targets of human moral judgment.

The studies cited in the article suggest a greater tendency in Japan to perceive artificial agents as human-like, with robots and humans often portrayed as partners rather than in hierarchical relationships. This perspective may explain why Japanese participants treated artificial agents and humans with similar emotional regard.

Implications for autonomous technology acceptance

These cultural attitudes may directly affect how quickly autonomous technologies are adopted in different regions, with potentially far-reaching economic and social implications.

Dr. Karpus speculates that if people in Japan treat robots with the same respect as humans, fully autonomous taxis may become common in Tokyo sooner than in Western cities such as Berlin, London, or New York.

A readiness to exploit autonomous vehicles in some cultures could pose practical challenges for their smooth integration into society. If drivers are more likely to cut off self-driving cars, take their right of way, or otherwise take advantage of their programmed caution, this could undermine the efficiency and safety of these systems.

The researchers suggest that these cultural differences could significantly affect the timeline for widespread adoption of technologies such as delivery vehicles, autonomous public transport, and personal autonomous vehicles.

Interestingly, the study found little to no difference in how Japanese and American participants cooperated with other humans, consistent with previous research in behavioral economics. This finding underscores that the discrepancy arises specifically in the context of human-AI interaction and does not reflect broader cultural differences in cooperative behavior.

This consistency in human-human cooperation provides an important baseline against which cultural differences in human-AI interaction can be measured, reinforcing the study's conclusions about the distinctiveness of the observed pattern.

Wider implications for the development of AI

The findings have significant implications for the development and deployment of AI systems designed to interact with people across different cultural contexts.

The study emphasizes the critical need to consider cultural factors when designing and deploying AI systems that interact with people. How people perceive and interact with AI is not universal and can vary significantly across cultures.

Ignoring these cultural nuances could lead to unintended consequences, slower adoption rates, and the potential for misuse or exploitation of AI technology in some regions. This underscores the importance of cross-cultural research in understanding human-AI interaction and ensuring the responsible development and deployment of artificial intelligence worldwide.

The researchers suggest that as AI becomes more integrated into everyday life, understanding these cultural differences will become increasingly important for the successful deployment of technologies that require cooperation between humans and artificial agents.

Limitations and future research directions

The researchers acknowledge several limitations of their work that point to directions for future investigation.

The study focused primarily on two countries, Japan and the United States. While it provides valuable insights, it may not capture the full spectrum of cultural variation in human-AI interaction worldwide. Further research across a broader range of cultures is needed to generalize these findings.

In addition, while game-theoretic experiments provide controlled scenarios ideal for comparative research, they may not fully capture the complexity of real-world human-AI interactions. The researchers suggest that validating these findings in field studies with actual autonomous technologies would be an important next step.

The explanation based on guilt and cultural beliefs about robots, although supported by the data, requires further empirical examination to definitively establish causality. The researchers call for more targeted studies examining the specific psychological mechanisms underlying these cultural differences.

"Our present findings caution against generalizing those results and show that algorithm exploitation is not a cross-cultural phenomenon," the researchers conclude.
