When it comes to forgetting, humans and artificial intelligence (AI) share a common challenge: how to unlearn information that should not be retained. For rapidly evolving AI systems, especially those trained on massive datasets, this problem is becoming critical. Imagine an AI model that inadvertently generates content drawing on copyrighted material or violent imagery – such situations can lead to legal complications and ethical problems.
Scientists from the University of Texas at Austin have tackled this problem with a breakthrough concept: machine "unlearning." In a recent study, a team led by Radu Marculescu developed a method that allows generative AI models to selectively forget problematic content without discarding their entire knowledge base.
At the core of their research are image-to-image models, which can transform input images based on contextual instructions. The new unlearning algorithm equips these models with the ability to remove targeted content without undergoing extensive retraining. Human moderators oversee the content removal, providing an additional layer of supervision and responsiveness to user feedback.
While machine unlearning has traditionally been applied to classification models, adapting it to generative models is an emerging frontier. Generative models, especially those for image processing, present unique challenges. Unlike classifiers, which make discrete decisions, generative models produce rich, continuous outputs. Ensuring that they unlearn specific aspects without compromising their creative abilities is a delicate balancing act.
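To make the idea concrete, here is a minimal toy sketch of one common unlearning strategy: fine-tune a trained model by descending the loss on data to retain while ascending it on data to forget. This is a simplified illustration of the general concept using a linear model, not the algorithm from the study; all function names and parameters here are hypothetical.

```python
import numpy as np

# Toy illustration of selective unlearning (NOT the authors' method):
# a linear model is trained on all data, then fine-tuned to "forget" a
# subset by ascending its loss on the forget set while descending it
# on the retain set.

def mse_grad(w, X, y):
    # Gradient of mean squared error for the linear model y ~ X @ w.
    return 2 * X.T @ (X @ w - y) / len(y)

def train(w, X, y, lr=0.1, steps=200):
    for _ in range(steps):
        w = w - lr * mse_grad(w, X, y)
    return w

def unlearn(w, X_ret, y_ret, X_fgt, y_fgt, lr=0.05, steps=100, alpha=0.2):
    # Descend on the retain loss, ascend (scaled by alpha) on the
    # forget loss, so the model degrades only on the forget set.
    for _ in range(steps):
        g = mse_grad(w, X_ret, y_ret) - alpha * mse_grad(w, X_fgt, y_fgt)
        w = w - lr * g
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
X_ret = rng.normal(size=(80, 3))
X_fgt = rng.normal(size=(20, 3))
# Retain data follows one target function, forget data another
# (standing in for "problematic" content the model should drop).
y_ret = X_ret @ np.array([1.0, -2.0, 0.5])
y_fgt = X_fgt @ np.array([-1.0, 1.0, 2.0])

w0 = train(np.zeros(3), np.vstack([X_ret, X_fgt]), np.concatenate([y_ret, y_fgt]))
w1 = unlearn(w0, X_ret, y_ret, X_fgt, y_fgt)

# After unlearning, error on the forget set rises sharply while the
# retain set stays comparatively well fit.
print(mse(w1, X_fgt, y_fgt) > mse(w0, X_fgt, y_fgt))
```

In real generative models the same push-and-pull appears in far richer form, since the "forget" objective must suppress particular content without degrading overall image quality.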
As a next step, the scientists plan to explore applying unlearning to other modalities, especially text-to-image models. They also intend to develop more practical benchmarks for controlling generated content and protecting data privacy.
You can read the full study in an article published on the arXiv preprint server.
As machine unlearning evolves, it will play an increasingly important role. It empowers AI systems to walk the fine line between retaining knowledge and generating content responsibly. By incorporating human oversight and selectively forgetting problematic content, we move closer to AI models that learn, adapt, and respect legal and ethical boundaries.