Teaching AI models what they don't know | MIT News

Artificial intelligence systems like ChatGPT provide plausible-sounding answers to any question you might ask. But they don't always reveal the gaps in their knowledge or the areas where they're uncertain. That problem can have huge consequences as AI systems are increasingly used to develop drugs, synthesize information, and drive autonomous cars.

Now, the MIT spinout Themis AI is helping quantify model uncertainty and correct outputs before they cause bigger problems. The company's Capsa platform can work with any machine-learning model to detect and correct unreliable outputs in seconds. It works by modifying AI models to enable them to detect patterns in their data processing that indicate ambiguity, incompleteness, or bias.
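The article doesn't describe Capsa's internals or API, but the general idea of "wrapping" an existing model so that every output carries an uncertainty estimate can be sketched with a simple ensemble, where disagreement between ensemble members serves as the uncertainty signal. All names here are hypothetical; this is not the Capsa API.

```python
import numpy as np

class UncertaintyWrapper:
    """Illustrative wrapper: an ensemble of models whose disagreement
    on an input is reported as that prediction's uncertainty."""

    def __init__(self, models):
        self.models = models  # any objects with a .predict(x) method

    def predict(self, x):
        preds = np.stack([m.predict(x) for m in self.models])
        mean = preds.mean(axis=0)         # the combined answer
        uncertainty = preds.std(axis=0)   # disagreement between members
        return mean, uncertainty

# Toy regressors that agree on small inputs and diverge on large ones.
class Toy:
    def __init__(self, w):
        self.w = w
    def predict(self, x):
        return self.w * x

wrapped = UncertaintyWrapper([Toy(0.9), Toy(1.0), Toy(1.1)])
mean, unc = wrapped.predict(np.array([1.0, 10.0]))
# Disagreement grows with the input here, so unc[1] > unc[0].
```

A downstream system could then refuse to act on any prediction whose uncertainty exceeds a chosen threshold, which is the behavior the article attributes to Capsa-wrapped models.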

“The idea is to take a model, wrap it in Capsa, identify the model's uncertainties and failure modes, and then enhance the model,” says Themis AI co-founder and MIT Professor Daniela Rus, who is also the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We're excited to offer a solution that can improve models and offer guarantees that the model is working correctly.”

Rus founded Themis AI in 2021 with Alexander Amini '17, SM '18, PhD '22 and Elaheh Ahmadi '20, MEng '21, two former research affiliates in her lab. Since then, they've helped telecom companies with network planning and automation, helped oil and gas companies use AI to understand seismic imagery, and published papers on developing more reliable and trustworthy chatbots.

“We want to enable AI in the highest-stakes applications of every industry,” says Amini. “We've all seen examples of AI hallucinating or making mistakes. As AI is deployed more broadly, those mistakes could lead to devastating consequences. Our software can make these systems more transparent.”

Helping models know what they don't know

Rus's lab has been studying model uncertainty for years. In 2018, it received funding from Toyota to study the reliability of a machine learning-based autonomous driving solution.

“That is a safety-critical context where understanding model reliability is very important,” says Rus.

In separate work, Rus, Amini, and their collaborators built an algorithm that could detect racial and gender bias in facial recognition systems and automatically reweight the model's training data, showing that it eliminated the bias. The algorithm worked by identifying the unrepresentative parts of the underlying training data and generating new, similar data samples to rebalance it.
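The published algorithm learned the latent structure of the training data to find underrepresented regions automatically; a much cruder sketch of the same rebalancing idea, assuming the group attribute is already labeled, might look like this (illustrative code, not the lab's method):

```python
import numpy as np

def rebalance(X, groups, seed=0):
    """Oversample underrepresented groups (with replacement) until every
    group appears as often as the largest one."""
    rng = np.random.default_rng(seed)
    values, counts = np.unique(groups, return_counts=True)
    target = counts.max()
    chosen = []
    for g in values:
        idx = np.where(groups == g)[0]
        # Minority groups get resampled up to the target size.
        chosen.append(rng.choice(idx, size=target, replace=True))
    chosen = np.concatenate(chosen)
    return X[chosen], groups[chosen]

X = np.arange(10).reshape(-1, 1)
groups = np.array([0] * 8 + [1] * 2)   # group 1 is underrepresented
Xb, gb = rebalance(X, groups)
# Both groups now contribute 8 samples each to the training set.
```

The real contribution of the lab's work was doing this without explicit group labels, by resampling based on how sparsely a region of the learned latent space was populated.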

In 2021, the eventual co-founders showed a similar approach could be used to help pharmaceutical companies use AI models to predict the properties of drug candidates. They founded Themis AI later that year.

“Guiding drug discovery could potentially save a lot of money,” says Rus. “That was the use case that made us realize how powerful this tool could be.”

Today, Themis is working with companies across many different industries, and many of those companies are building large language models. By using Capsa, the models are able to quantify their own uncertainty for each output.

“Many companies are interested in using LLMs that are based on their data, but they're concerned about reliability,” notes Stewart Jamieson SM '20, PhD '24, Themis AI's head of technology. “We help LLMs self-report their confidence and uncertainty, which enables more reliable question answering and flagging of unreliable outputs.”

Themis AI is also in discussions with semiconductor companies building AI solutions on their chips that can work outside of cloud environments.

“Normally, those smaller models that run on phones or embedded systems aren't very accurate compared to what you could run on a server, but we can get the best of both worlds: low-latency, efficient edge computing without sacrificing quality,” Jamieson explains. “We see a future where edge devices do most of the work, but whenever they're unsure of their output, they can forward those tasks to a central server.”
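The hand-off Jamieson describes amounts to a simple routing rule: answer on-device when the edge model's self-reported uncertainty is low, escalate otherwise. A minimal sketch, with stand-in stub models and a made-up threshold:

```python
def answer(query, edge_model, server_model, threshold=0.2):
    """Try the small on-device model first; escalate to the server
    only when the edge model's uncertainty estimate is too high."""
    prediction, uncertainty = edge_model(query)
    if uncertainty <= threshold:
        return prediction, "edge"           # fast, low-latency path
    return server_model(query), "server"    # fallback for hard inputs

# Stub models for illustration: the edge model reports high
# uncertainty on long queries and low uncertainty on short ones.
edge = lambda q: ("quick answer", 0.05 if len(q) < 20 else 0.9)
server = lambda q: "careful answer"

print(answer("easy question", edge, server))
print(answer("a long, ambiguous question indeed", edge, server))
```

The design point is that the uncertainty estimate, not the query itself, drives the routing, so most traffic stays on-device and only genuinely hard inputs pay the round-trip cost.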

Pharmaceutical companies can also use Capsa to improve AI models being used to identify drug candidates and predict their performance in clinical trials.

“The predictions and outputs of these models are very complex and hard to interpret — experts spend a lot of time and effort trying to make sense of them,” notes Amini. “Capsa can give insight right out of the gate into whether the predictions are backed by evidence in the training set or are just speculation without much grounding. That can accelerate the identification of the strongest predictions, and we think that has huge potential for societal good.”

Research with impact

Themis AI believes the company is well-positioned to improve the cutting edge of AI technology. For instance, the company is exploring Capsa's ability to improve the accuracy of an AI technique known as chain-of-thought reasoning, in which LLMs explain the steps they take to reach an answer.

“We've seen signs Capsa could help guide those reasoning processes to identify the highest-confidence chains of reasoning,” says Amini. “We think that has huge implications in terms of improving the LLM experience, reducing latency, and reducing compute requirements. It's an extremely high-impact opportunity for us.”
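How the guiding would work isn't specified in the article, but one plausible reading is a selection step: generate several candidate reasoning chains, score each with an uncertainty-aware wrapper, and keep only the most confident one. A toy sketch with made-up confidence scores (the scoring function is the hypothetical part):

```python
def best_chain(chains, confidence):
    """Pick the candidate reasoning chain with the highest
    confidence score; low-confidence chains are discarded."""
    return max(chains, key=confidence)

# Hypothetical candidate chains with illustrative confidence scores,
# as if reported by an uncertainty-aware wrapper around the LLM.
scores = {
    "chain A: 2 + 2 = 5": 0.30,
    "chain B: 2 + 2 = 4": 0.90,
    "chain C: maybe 3?":  0.20,
}
winner = best_chain(list(scores), lambda c: scores[c])
print(winner)  # the highest-confidence chain, "chain B: 2 + 2 = 4"
```

Dropping low-confidence chains early, rather than completing all of them, is also consistent with the latency and compute savings Amini mentions.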

For Rus, who has co-founded several companies since coming to MIT, Themis AI is an opportunity to ensure her MIT research has impact.

“My students and I have become increasingly passionate about going the extra step to make our work relevant to the world,” says Rus. “AI has tremendous potential to transform industries, but AI also raises concerns. What excites me is the opportunity to help develop technical solutions that address these challenges and also build trust and understanding between people and the technologies that are becoming part of their daily lives.”
