Developing next-generation AI agents, exploring new methods, and pioneering foundational learning
Next week, AI researchers from around the world will gather for the Twelfth International Conference on Learning Representations (ICLR), taking place May 7-11 in Vienna, Austria.
Raia Hadsell, Vice President of Research at Google DeepMind, will give a keynote reflecting on the last 20 years in the field, highlighting how the lessons learned are shaping the future of AI for the benefit of humanity.
We will also offer live demonstrations showing how we bring our foundational research into reality, from the development of Robotics Transformers to the creation of open-source toolkits and models such as Gemma.
Teams from across Google DeepMind will present more than 70 papers this year. Some of the research highlights:
Problem-solving agents and human-inspired approaches
Large language models (LLMs) have already revolutionized advanced AI tools, yet their full potential remains largely untapped. For example, LLM-based AI agents capable of taking effective actions could transform digital assistants into more helpful and intuitive AI tools.
AI assistants that follow natural language instructions to carry out web-based tasks on people's behalf would be a huge time-saver. In an oral presentation, we introduce WebAgent, an LLM-driven agent that learns from its own experience to navigate and manage complex tasks on real-world websites.
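To make the setup concrete, here is a minimal sketch of a generic observe-plan-act loop for an LLM-driven web agent. The Action type and the observe/plan/act callables are hypothetical stand-ins for illustration, not WebAgent's actual interface.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Illustrative only: a generic observe-plan-act loop for an LLM-driven web agent.
# The callables below are hypothetical stand-ins, not WebAgent's actual interface.

@dataclass
class Action:
    kind: str          # "click", "type", "navigate", or "finish"
    target: str = ""   # element or URL the action applies to
    text: str = ""     # text to type, or the final answer when kind == "finish"

def run_web_task(
    instruction: str,
    observe: Callable[[], str],                                    # condensed view of the page
    plan: Callable[[str, str, List[Tuple[str, Action]]], Action],  # LLM picks the next action
    act: Callable[[Action], None],                                 # executes the action in a browser
    max_steps: int = 20,
) -> str:
    history: List[Tuple[str, Action]] = []
    for _ in range(max_steps):
        page = observe()
        action = plan(instruction, page, history)
        if action.kind == "finish":
            return action.text
        act(action)
        history.append((page, action))
    return "step budget exhausted"
```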
To further increase the general usefulness of LLMs, we focused on boosting their problem-solving skills. We show how we achieved this by equipping an LLM-based system with a traditionally human approach: producing and using "tools". Separately, we present a training technique that ensures language models produce more consistently socially acceptable outputs. Our approach uses a sandbox rehearsal space that represents the values of society.
Pushing boundaries in vision and coding
Our Dynamic Scene Transformer (DyST) model uses real-world, single-camera videos to extract 3D representations of objects in a scene and their movements.
Until recently, large AI models focused mostly on text and images, laying the groundwork for large-scale pattern recognition and data interpretation. Now the field is moving beyond these static realms to embrace the dynamics of real-world visual environments. As computing advances across the wider world, it is increasingly important that its underlying code is generated and optimized to be as efficient as possible.
When you watch a video on a flat screen, you intuitively grasp the three-dimensional nature of the scene. Machines, however, struggle to emulate this ability without explicit supervision. We present our Dynamic Scene Transformer (DyST) model, which leverages real-world, single-camera videos to extract 3D representations of objects in a scene and their movements. What's more, DyST also makes it possible to generate novel versions of the same video, with user control over camera angles and content.
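As a rough illustration of the "produce a tool, then use it" pattern mentioned above, here is a minimal sketch. The ask_llm helper, the prompts, and the tool library are hypothetical; the paper's actual method and safeguards are not reproduced here.

```python
# Minimal sketch of the general "produce a tool, then use it" pattern for an
# LLM-based system. `ask_llm` is a hypothetical text-in/text-out call.

def solve_with_tools(task: str, ask_llm, tool_library: dict) -> str:
    # 1. Tool production: if no existing tool fits, ask the model to write one.
    tool_name = ask_llm(f"Which tool in {list(tool_library)} solves: {task}? "
                        "Answer with a name, or NONE.")
    if tool_name == "NONE":
        source = ask_llm(f"Write a small Python function `tool(x)` that helps with: {task}")
        namespace = {}
        exec(source, namespace)          # in practice this would be sandboxed
        tool_library[task] = namespace["tool"]
        tool_name = task

    # 2. Tool use: let the model decide the tool's input, then run it.
    tool_input = ask_llm(f"What input should the tool receive for: {task}?")
    return str(tool_library[tool_name](tool_input))
```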
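The kind of separation just described can be sketched structurally as follows. The names, shapes, and functions are hypothetical placeholders for illustration, not DyST's actual architecture.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

# Structural sketch only: the kind of latent separation described above,
# with hypothetical names and shapes. It is not DyST's actual architecture.

@dataclass
class SceneLatents:
    content: np.ndarray            # what is in the scene (objects, appearance)
    dynamics: List[np.ndarray]     # one latent per frame: how objects move
    camera: List[np.ndarray]       # one latent per frame: where the camera is

def encode_video(frames: List[np.ndarray]) -> SceneLatents:
    """Encode a single-camera video into separated content/dynamics/camera latents."""
    raise NotImplementedError  # stands in for a learned encoder

def render(latents: SceneLatents, frame_idx: int) -> np.ndarray:
    """Decode one frame from the latents."""
    raise NotImplementedError  # stands in for a learned decoder

def novel_version(latents: SceneLatents, new_camera: List[np.ndarray]) -> List[np.ndarray]:
    # User control: keep the content and object motion, swap in a different camera path.
    edited = SceneLatents(latents.content, latents.dynamics, new_camera)
    return [render(edited, t) for t in range(len(new_camera))]
```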
Emulating human cognitive strategies also makes for better AI code generators. When developers write complex code, they typically "decompose" the task into simpler subtasks. With ExeDec, we introduce a novel code-generation approach that harnesses such decomposition to boost the programming and generalization abilities of AI systems.
In a parallel spotlight paper, we explore the novel use of machine learning not only to generate code but also to optimize it, introducing a dataset for robust benchmarking of code performance. Code optimization is challenging, requiring complex reasoning, and our dataset enables the exploration of a range of ML techniques. We show that the resulting learning strategies outperform human-written code optimizations.
ExeDec introduces a novel code-generation approach that harnesses decomposition to boost the programming and generalization abilities of AI systems.
Advancing foundational learning
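A minimal sketch of decomposition-based program synthesis in this spirit: predict an intermediate subgoal, synthesize a small step that reaches it, and compose the steps. The helper functions are hypothetical placeholders; this is not the ExeDec implementation.

```python
# Hedged sketch of decomposition-based program synthesis: split a task into
# subtasks, solve each, then compose. `predict_subgoal` and `synthesize_step`
# are hypothetical helpers, not ExeDec's actual components.

def synthesize_by_decomposition(inputs, target_outputs,
                                predict_subgoal, synthesize_step, max_steps=8):
    """Build a program as a sequence of small steps whose composition maps inputs to outputs."""
    program_steps, state = [], list(inputs)
    for _ in range(max_steps):
        if state == list(target_outputs):
            return program_steps                          # composed program solves the task
        subgoal = predict_subgoal(state, target_outputs)  # intermediate values to reach next
        step = synthesize_step(state, subgoal)            # small program for just this subtask
        if step is None:
            break
        program_steps.append(step)
        state = [step(x) for x in state]                  # execute the step, advance the state
    return None                                           # decomposition failed within budget
```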
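For intuition, here is the kind of "slow program, fast program" pair a code-performance benchmark might contain, with the speedup measured by timing. This is an invented example, not an entry from the dataset.

```python
import timeit

# Invented example of a performance-improving edit: same result, far less work.

def slow_sum_of_squares(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum_of_squares(n: int) -> int:
    # Closed form for 0^2 + 1^2 + ... + (n-1)^2.
    return (n - 1) * n * (2 * n - 1) // 6

n = 100_000
assert slow_sum_of_squares(n) == fast_sum_of_squares(n)
t_slow = timeit.timeit(lambda: slow_sum_of_squares(n), number=20)
t_fast = timeit.timeit(lambda: fast_sum_of_squares(n), number=20)
print(f"speedup: {t_slow / t_fast:.0f}x")
```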
Our research teams are tackling AI's big questions – from exploring the essence of machine cognition to understanding how advanced AI models generalize – while also working to overcome key theoretical challenges.
For both humans and machines, causal reasoning and the ability to predict events are closely related concepts. In a spotlight presentation, we examine how reinforcement learning is affected by prediction-based training objectives, and draw parallels to changes in brain activity that are also linked to prediction.
When AI agents are able to generalize well to new scenarios, is it because, like humans, they have learned an underlying causal model of their world? This is a critical question in advanced AI. In an oral presentation, we reveal that such models have indeed learned an approximate causal model of the processes that produced their training data, and discuss the deep implications.
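As a concrete instance of a prediction-based objective in reinforcement learning, the sketch below shows a tabular TD(0) value-prediction update, in which learning is driven entirely by a prediction error. It is illustrative only, not the method studied in the paper.

```python
# Illustrative prediction-based RL objective: tabular TD(0) value prediction.
# Learning is driven by the error between predicted and observed return.

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    """Move V(state) toward the one-step prediction target r + gamma * V(next_state)."""
    target = reward + gamma * V.get(next_state, 0.0)
    error = target - V.get(state, 0.0)   # the prediction error driving the update
    V[state] = V.get(state, 0.0) + alpha * error
    return error

V = {}
print(td0_update(V, state="s0", reward=1.0, next_state="s1"))  # positive error, V["s0"] rises
```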
Another critical question in AI is trust, which depends in part on how accurately models can estimate the uncertainty of their outputs – a key factor in reliable decision-making. We have made significant progress in uncertainty estimation within Bayesian deep learning, using a simple and essentially cost-free method.
Finally, we explore game theory's Nash equilibrium (NE) – a state in which no player benefits from changing their strategy if the others stick to theirs. Beyond simple two-player games, even approximating a Nash equilibrium is computationally intractable, but in an oral presentation we reveal new state-of-the-art approaches for negotiating deals, from poker to auctions.
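One standard way to turn several posterior samples (for example, ensemble members) into an uncertainty score is the entropy of the averaged predictive distribution, sketched below. This is a generic recipe given for illustration, not the specific method introduced in the paper.

```python
import numpy as np

# Generic illustration of predictive uncertainty from multiple posterior samples:
# average the predicted class probabilities, then take the entropy of the average.

def predictive_uncertainty(member_probs: np.ndarray) -> float:
    """member_probs: array of shape (num_samples, num_classes), rows sum to 1."""
    mean_probs = member_probs.mean(axis=0)
    return float(-np.sum(mean_probs * np.log(mean_probs + 1e-12)))

# Agreement among samples gives a low score...
print(predictive_uncertainty(np.array([[0.97, 0.02, 0.01]] * 5)))
# ...while disagreement gives a high one.
print(predictive_uncertainty(np.array([[0.9, 0.05, 0.05],
                                       [0.1, 0.80, 0.10],
                                       [0.2, 0.10, 0.70]])))
```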
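The definition above can be checked directly on a tiny game. The snippet below enumerates the pure-strategy Nash equilibria of the prisoner's dilemma; it only illustrates the definition, since finding equilibria in realistically large games is far harder, as noted above.

```python
import numpy as np

# Prisoner's dilemma payoffs; rows/columns are (cooperate, defect).
A = np.array([[-1, -3],    # row player's payoffs
              [ 0, -2]])
B = np.array([[-1,  0],    # column player's payoffs
              [-3, -2]])

def pure_nash_equilibria(A, B):
    """A strategy pair is an equilibrium if neither player gains by unilaterally switching."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_best = A[i, j] >= A[:, j].max()   # row player cannot do better in column j
            col_best = B[i, j] >= B[i, :].max()   # column player cannot do better in row i
            if row_best and col_best:
                eqs.append((i, j))
    return eqs

print(pure_nash_equilibria(A, B))   # [(1, 1)]: mutual defection
```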
Connecting the AI community
We are proud to sponsor ICLR and support initiatives including Queer in AI and Women in Machine Learning. Such partnerships not only strengthen research collaborations but also foster a vibrant, diverse community in AI and machine learning.
If you are at ICLR, visit our booth and our Google Research colleagues next door. Discover our pioneering research, meet our teams hosting workshops, and engage with our experts presenting throughout the conference. We look forward to connecting with you!