Exploring the promise and risks of a future with more capable artificial intelligence
Imagine a future in which we interact regularly with a range of advanced AI assistants – and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.
General-purpose foundation models are paving the way toward increasingly advanced AI assistants. Able to plan and perform a wide range of actions in line with a person's goals, they could add immense value to people's lives and to society, serving as creative partners, research analysts, educational tutors, life planners, and much more.
They could also usher in a new phase of human interaction with AI. This is why it's so important to think proactively about what this world could look like, and to help steer responsible decision-making and beneficial outcomes ahead of time.
Our new paper is the first systematic treatment of the ethical and societal questions that advanced AI assistants raise for users, developers, and the societies they're integrated into, and it provides significant new insight into the potential impact of this technology.
We cover topics such as value alignment, safety and misuse, the impact on the economy, the environment, the information sphere, access and opportunity, and more.
This is the result of one of our largest ethics projects to date. Drawing on a wide range of experts, we examined and mapped the new technical and moral landscape of a future populated by AI assistants, and characterised the opportunities and risks society might face. Here we outline some of our key takeaways.
A profound impact on users and society
Illustration of the potential of AI assistants to impact research, education, creative tasks and planning.
Advanced AI assistants could have a profound impact on users and society, and be integrated into most aspects of people's lives. For example, people may ask assistants to book holidays, manage social time, or perform other life tasks. If deployed at scale, AI assistants could impact the way people approach work, education, creative projects, hobbies, and social interaction.
Over time, AI assistants could also influence the goals people pursue and their path of personal development, through the information and advice assistants give and the actions they take. Ultimately, this raises important questions about how people interact with this technology and how it can best support their goals and aspirations.
Human alignment is essential
Illustration showing that AI assistants should be able to understand human preferences and values.
AI assistants will likely have a significant level of autonomy for planning and performing sequences of tasks across a range of domains. Because of this, AI assistants present novel challenges around safety, alignment, and misuse.
With increased autonomy comes a greater risk of accidents caused by unclear or misinterpreted instructions, and a greater risk of assistants taking actions that are misaligned with the user's values and interests.
More autonomous AI assistants may also enable high-impact forms of misuse, such as spreading misinformation or engaging in cyber attacks. To address these potential risks, we argue that limits must be set on this technology, and that the values of advanced AI assistants must better align with human values and be compatible with wider societal ideals and standards.
Communicating in natural language
Illustration of an AI assistant and a person communicating in a human-like way.
Able to communicate fluidly using natural language, the written output and voices of advanced AI assistants may become hard to distinguish from those of humans.
This development opens up a complex set of questions around trust, privacy, anthropomorphism, and appropriate human-AI relationships: How can we make sure users can reliably identify AI assistants and stay in control of their interactions with them? What can be done to ensure users aren't unduly influenced or misled over time?
Safeguards, such as those around privacy, need to be put in place to address these risks. Importantly, people's relationships with AI assistants must preserve the user's autonomy, support their ability to flourish, and not rely on emotional or material dependence.
Cooperating and coordinating to meet human preferences
Illustration of how interactions between AI assistants and people will create different network effects.
If this technology becomes widely available and deployed at scale, advanced AI assistants will need to interact with each other, and with users and non-users alike. To help avoid collective action problems, these assistants must be able to cooperate effectively.
For example, thousands of assistants might try to book the same service for their users at the same time – potentially overloading the system. In an ideal scenario, these assistants would instead coordinate on behalf of the human users and service providers involved to discover common ground that better meets different people's preferences and needs.
Given how useful this technology could become, it's also important that nobody is excluded. AI assistants should be broadly accessible and designed with the needs of different users and non-users in mind.
More evaluations and foresight are needed
Illustration of how evaluations on many levels are important for understanding AI assistants.
AI assistants could display novel capabilities and use tools in new ways that are challenging to foresee, making it hard to anticipate the risks associated with their deployment. To help manage such risks, we need to engage in foresight practices based on comprehensive tests and evaluations.
Our previous research on evaluating the social and ethical risks of generative AI identified some of the gaps in traditional model evaluation methods, and we encourage much more research in this space.
For instance, comprehensive evaluations that address the effects of both human-computer interaction and the wider impact on society could help researchers understand how AI assistants interact with users, non-users, and society as part of a broader network. In turn, these insights could inform better mitigations and responsible decision-making.
Building the future we want
We may be facing a new era of technological and societal transformation inspired by the development of advanced AI assistants. The choices we make today, as researchers, developers, policymakers, and members of the public, will guide how this technology develops and is deployed across society.
We hope that our paper will serve as a springboard for further coordination and cooperation to collectively shape the kind of beneficial AI assistants we'd all like to see in the world.
Paper authors: Iason Gabriel, Arianna Manzini, Geoff Keeling, Lisa Anne Hendricks, Verena Rieser, Hasan Iqbal, Nenad Tomašev, Ira Ktena, Zachary Kenton, Mikel Rodriguez, Seliem El-Sayed, Sasha Brown, Canfer Akbulut, Andrew Trask, Edward Hughes, A. Stevie Bergman, Renee Shelby, Nahema Marchal, Conor Griffin, Juan Mateos-Garcia, Laura Weidinger, Winnie Street, Benjamin Lange, Alex Ingerman, Alison Lentz, Reed Enger, Andrew Barakat, Victoria Krakovna, John Oliver Siy, Zeb Kurth-Nelson, Amanda McCroskery, Vijay Bolina, Harry Law, Murray Shanahan, Lize Alberts, Borja Balle, Sarah de Haas, Yetunde Ibitoye, Allan Dafoe, Beth Goldberg, Sébastien Krier, Alexander Reese, Sims Witherspoon, Will Hawkins, Maribeth Rauh, Don Wallace, Matija Franklin, Josh A. Goldstein, Joel Lehman, Michael Klenk, Shannon Vallor, Courtney Biles, Meredith Ringel Morris, Helen King, Blaise Agüera y Arcas, William Isaac and James Manyika.