Annotating regions of interest in medical images, a process known as segmentation, is often one of the first steps clinical researchers take when running a new study involving biomedical images.
For instance, to determine how the size of the brain's hippocampus changes as patients age, scientists must first outline each hippocampus in a series of brain scans. For many structures and image types, this is often a manual process that can be extremely time-consuming, especially if the regions being studied are difficult to delineate.
To streamline the process, MIT researchers developed an artificial intelligence system that enables a researcher to rapidly segment new biomedical imaging datasets by clicking, scribbling, and drawing boxes on the images. The AI model uses these interactions to predict the segmentation.
As the user marks additional images, the number of interactions they need to perform decreases, eventually dropping to zero. The model can then segment each new image accurately without any user input.
It can do this because the model's architecture has been specially designed to use information from images it has already segmented to make new predictions.
Unlike other medical image segmentation models, this system allows the user to segment an entire dataset without repeating their work for each image.
In addition, the interactive tool does not require a presegmented image dataset for training, so users don't need machine-learning expertise or extensive computational resources. They can use the system for a new segmentation task without retraining the model.
In the long run, this tool could accelerate studies of new treatments and reduce the cost of clinical trials and medical research. It could also be used by physicians to improve the efficiency of clinical applications, such as radiation treatment planning.
“Many scientists might only have time to segment a few images per day for their research because manual image segmentation is so time-consuming. We hope this system will enable new science by allowing clinical researchers to conduct studies that weren't feasible before because of the lack of an effective tool,” says Hallee Wong, lead author of a paper on this new tool.
She is joined on the paper by José Javier Gonzalez Ortiz PhD '24; John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering; and senior author Adrian Dalca, an assistant professor at Harvard Medical School and MGH and a research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Computer Vision.
Improved segmentation
Broadly, there are two methods researchers use to segment new sets of medical images. With interactive segmentation, they input an image into an AI system and use an interface to mark areas of interest. The model predicts the segmentation based on those interactions.
A tool previously developed by the MIT researchers, ScribblePrompt, allows users to do this, but they must repeat the process for each new image.
Another approach is to develop a task-specific AI model that automatically segments the images. This approach requires the user to manually segment hundreds of images to create a dataset, and then train a machine-learning model on it. That model predicts the segmentation of each new image. But the user must start the complex, machine-learning-based process from scratch for each new task, and there is no way to correct the model if it makes a mistake.
This new system, MultiverSeg, combines the best of each approach. It predicts a segmentation of a new image based on user interactions, like scribbles, but also keeps each segmented image in a context set that it refers back to later.
When the user uploads a new image and marks areas of interest, the model draws on the examples in its context set to make a more accurate prediction, with less user input.
The researchers designed the model's architecture to work with a context set of any size, so the user doesn't need a particular number of images. This gives MultiverSeg the flexibility to be used in a range of applications.
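To make this workflow concrete, here is a minimal sketch of the interaction loop described above, written in Python. The names used here (`MultiverSegModel`, `segment_dataset`, and the callbacks) are hypothetical stand-ins for illustration, not the published MultiverSeg API; the sketch captures only the idea that each accepted segmentation joins a context set that conditions predictions on later images.

```python
# Minimal sketch of a MultiverSeg-style interaction loop.
# "MultiverSegModel", "segment_dataset", and the callback names are
# hypothetical illustrations, not the published API.

class MultiverSegModel:
    """Stand-in model: predicts a mask from user marks plus a context set."""

    def predict(self, image, marks, context):
        # A real model would condition on the clicks/scribbles/boxes in
        # `marks` and on the (image, mask) pairs in `context`. Here we
        # just return an empty mask with the same shape as the image.
        return [[0 for _ in row] for row in image]


def segment_dataset(model, images, get_marks, is_acceptable):
    """Segment a whole dataset, accumulating a context set as it goes."""
    context = []  # grows with each accepted (image, mask) pair
    masks = []
    for image in images:
        marks = []  # fewer marks tend to be needed as the context grows
        mask = model.predict(image, marks, context)
        while not is_acceptable(mask):
            # The user corrects the prediction with clicks, scribbles,
            # or boxes, and the model re-predicts.
            marks.extend(get_marks(image, mask))
            mask = model.predict(image, marks, context)
        context.append((image, mask))  # reused for all later images
        masks.append(mask)
    return masks
```

The design point the article highlights is that the context set can be any size, including empty: the same loop handles the very first image of a new task with no retraining, and once the set is large enough, the correction loop may never run because the initial prediction is already acceptable.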
“At some point, for many tasks, you shouldn't need to provide any interactions. If you have enough examples in the context set, the model can accurately predict the segmentation on its own,” says Wong.
The researchers carefully designed and trained the model on a diverse collection of biomedical imaging data to ensure it could incrementally improve its predictions based on user input.
The user doesn't need to retrain or customize the model for their data. To use MultiverSeg for a new task, one can upload a new medical image and start marking it.
When the researchers compared MultiverSeg to state-of-the-art tools for in-context and interactive image segmentation, it outperformed every baseline.
Fewer clicks, better results
Unlike these other tools, MultiverSeg requires less user input with each image. By the ninth new image, it needed only two clicks from the user to generate a segmentation more accurate than a model designed specifically for the task.
For some image types, like X-rays, the user might only need to segment one or two images manually before the model becomes accurate enough to make predictions on its own.
The tool's interactivity also enables the user to make corrections to the model's prediction, iterating until it reaches the desired level of accuracy. Compared to the researchers' previous system, MultiverSeg reached 90 percent accuracy with roughly two-thirds the number of scribbles and three-quarters the number of clicks.
“With MultiverSeg, users can always provide more interactions to refine the AI's predictions. That still dramatically accelerates the process, because it is usually faster to correct something that exists than to start from scratch,” says Wong.
Moving forward, the researchers want to test this tool in real-world situations with clinical collaborators and improve it based on user feedback. They also want to enable MultiverSeg to segment 3D biomedical images.
This work is supported, in part, by Quanta Computer, Inc. and the National Institutes of Health, with hardware support from the Massachusetts Life Sciences Center.