Developing reliable AI tools for healthcare

New research proposes a system for determining the relative accuracy of predictive AI in a hypothetical medical setting, and when the system should defer to a clinician

Artificial intelligence (AI) has great potential to enhance how people work across many industries. But to integrate AI tools into the workplace safely and responsibly, we need to develop more robust methods for understanding when they can be most useful.

So when is AI more accurate, and when is a human? This question is particularly important in healthcare, where predictive AI is increasingly used in high-stakes tasks to assist clinicians.

Today in Nature Medicine, we published our joint paper with Google Research, which proposes CoDoC (Complementarity-driven Deferral to Clinical Workflow), an AI system that learns when to rely on predictive AI tools and when to defer to a clinician for the most accurate interpretation of medical images.

CoDoC explores how human-AI collaboration in hypothetical medical settings could deliver the best results. In one example scenario, CoDoC reduced the number of false positives by 25% for a large, de-identified UK mammography dataset, compared with commonly used clinical workflows, without missing any true positives.

This work is a collaboration with several healthcare organisations, including the United Nations Office for Project Services' Stop TB Partnership. To help researchers build on our work to improve the transparency and safety of AI models for the real world, we've also open-sourced CoDoC's code on GitHub.

CoDoC: an add-on tool for human-AI collaboration

Building more reliable AI models often requires re-engineering the complex inner workings of predictive AI models. However, for many healthcare providers, it's simply not possible to redesign a predictive AI model. CoDoC can potentially help improve a predictive AI tool for its users without requiring them to modify the underlying AI tool itself.

When developing CoDoC, we had three criteria:

  • Non-machine-learning experts, like healthcare providers, should be able to deploy the system and run it on a single computer.
  • Training would require a relatively small amount of data – typically, just a few hundred examples.
  • The system could be compatible with any proprietary AI models and would not need access to the model's inner workings or the data it was trained on.

Determining when predictive AI or a clinician is more accurate

With CoDoC, we propose a simple and usable AI system to improve reliability by helping predictive AI systems "know when they don't know". We looked at scenarios where a clinician might have access to an AI tool designed to help interpret an image, for example, examining a chest x-ray to decide whether a tuberculosis test is needed.

For any theoretical clinical setting, the CoDoC system requires only three inputs for each case in the training dataset.

  1. The predictive AI's confidence score, between 0 (certain no disease is present) and 1 (certain disease is present).
  2. The clinician's interpretation of the medical image.
  3. The ground truth of whether disease was present, as established, for example, via biopsy or other clinical follow-up.

Note: CoDoC requires no access to any medical images.
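The three inputs above can be pictured as a tiny per-case record. The sketch below is purely illustrative (the field names are our own invention, not from the paper), but it makes concrete how little data the system needs: three values per case, and no images.

```python
from dataclasses import dataclass

@dataclass
class TrainingCase:
    """One case in a CoDoC-style training set (hypothetical field names).

    ai_confidence:     predictive AI's score in [0, 1]
                       (0 = confident no disease, 1 = confident disease present)
    clinician_opinion: the clinician's binary read of the same image
    ground_truth:      disease presence established by biopsy or follow-up
    """
    ai_confidence: float
    clinician_opinion: bool
    ground_truth: bool

# No medical images are stored -- only these three values per case.
case = TrainingCase(ai_confidence=0.87, clinician_opinion=True, ground_truth=True)
```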

CoDoC learns to establish the relative accuracy of the predictive AI model compared with clinicians' interpretations, and how that relationship fluctuates with the predictive AI's confidence scores.

Once trained, CoDoC could be inserted into a hypothetical future clinical workflow involving both an AI and a clinician. When a new patient image is evaluated by the predictive AI model, its associated confidence score is fed into the system. Then, CoDoC assesses whether accepting the AI's decision or deferring to a clinician will ultimately result in the most accurate interpretation.
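As a rough intuition for this workflow, here is a deliberately simplified toy stand-in (not the published method): from the training records we estimate, per AI-confidence bin, whether thresholding the AI score or taking the clinician's read is more accurate, then defer to the clinician in the bins where the clinician wins. All function names and the binning scheme are our own assumptions.

```python
import numpy as np

def fit_deferral_rule(conf, clinician_pred, truth, n_bins=10):
    """Toy deferral policy: per AI-confidence bin, compare the accuracy of
    a thresholded AI decision against the clinician's reads, and mark the
    bin for deferral when the clinician is more accurate."""
    conf = np.asarray(conf, dtype=float)
    clinician_pred = np.asarray(clinician_pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    ai_pred = conf >= 0.5                         # simple AI operating point
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    defer = np.zeros(n_bins, dtype=bool)
    for b in range(n_bins):
        m = bins == b
        if not m.any():
            defer[b] = True                       # no evidence: play safe, defer
            continue
        ai_acc = np.mean(ai_pred[m] == truth[m])
        cl_acc = np.mean(clinician_pred[m] == truth[m])
        defer[b] = cl_acc > ai_acc
    return defer

def decide(conf_score, defer_table, n_bins=10):
    """For a new case, either defer to the clinician or emit the AI's call."""
    b = min(int(conf_score * n_bins), n_bins - 1)
    if defer_table[b]:
        return "defer"
    return "disease present" if conf_score >= 0.5 else "disease absent"
```

In this sketch the AI's confident extremes are accepted and the uncertain middle is routed to a clinician, which is the qualitative behaviour the deferral idea aims for.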

Increased accuracy and efficiency

Our comprehensive testing of CoDoC with multiple real-world datasets – including only historic and de-identified data – has shown that combining the best of human expertise and predictive AI results in greater accuracy than either alone.

As well as achieving a 25% reduction in false positives for a mammography dataset, in hypothetical simulations where an AI was allowed to act autonomously on certain occasions, CoDoC was able to reduce the number of cases that needed to be read by a clinician by two thirds. We also showed how CoDoC could hypothetically improve the triage of chest x-rays for onward tuberculosis testing.
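The quantities behind these comparisons are standard and easy to compute. A minimal sketch, assuming binary predictions and labels (the helper names are ours): false positives for the 25% figure, sensitivity to check no true positives were lost, and the deferral rate for the clinician-workload reduction.

```python
def false_positive_count(preds, truth):
    """Count cases flagged as disease that were actually disease-free."""
    return sum(1 for p, t in zip(preds, truth) if p and not t)

def sensitivity(preds, truth):
    """Fraction of true disease cases that were flagged (true-positive rate)."""
    positives = sum(1 for t in truth if t)
    hits = sum(1 for p, t in zip(preds, truth) if p and t)
    return hits / positives if positives else float("nan")

def clinician_workload(decisions):
    """Share of cases a deferral system routed to a clinician."""
    return sum(1 for d in decisions if d == "defer") / len(decisions)
```

A combined workflow "wins" in the sense of the text when its false-positive count drops while its sensitivity stays at or above the baseline's.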

Responsibly developing artificial intelligence for healthcare

While this work is theoretical, it shows our AI system's potential to adapt: CoDoC was able to improve the interpretation of medical imaging across varied demographic populations, clinical settings, medical imaging equipment, and disease types.

CoDoC is a promising example of how we can harness the benefits of AI in combination with human strengths and expertise. We are working with external partners to rigorously evaluate our research and the system's potential benefits. To bring technology like CoDoC safely into real-world medical settings, healthcare providers and manufacturers will also have to understand how clinicians interact differently with AI, and validate systems with specific AI tools and settings.

Learn more about CoDoC:
