Healthcare is increasingly leveraging artificial intelligence to improve workflow management, patient communication, and diagnostic and treatment support. It is critical that these AI-based systems are not only efficient, but also privacy-preserving. With this in mind, we built and recently released Health AI Developer Foundations (HAI-DEF). HAI-DEF is a collection of lightweight, open models designed to provide developers with a solid starting point for their own health research and application development. Because HAI-DEF models are open, developers retain full control over privacy, infrastructure, and model modifications. This year, we expanded the HAI-DEF collection with MedGemma, a collection of generative models based on Gemma 3 that aim to accelerate the development of artificial intelligence in healthcare and life sciences.
Today we are proud to announce two new models in this collection. The first is MedGemma 27B Multimodal, which complements the previously released 4B Multimodal and 27B text-only models by adding support for complex multimodal reasoning and longitudinal interpretation of electronic health records. The second new model is MedSigLIP, a lightweight image and text encoder for classification, search, and related tasks. MedSigLIP is based on the same image encoder that powers the 4B and 27B MedGemma models.
MedGemma and MedSigLIP represent a strong starting point for medical research and product development. MedGemma is useful for medical text or imaging tasks that require generating free text, such as report generation or visual question answering. MedSigLIP is recommended for imaging tasks that require structured output, such as classification or search. All of these models can be run on a single GPU, and MedGemma 4B and MedSigLIP can even be adapted to run on mobile hardware.
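To illustrate how a dual image-text encoder like MedSigLIP supports classification and search, the sketch below shows the general zero-shot pattern: embed an image and several candidate text labels into a shared space, then pick the label with the highest cosine similarity. This is not MedSigLIP's actual API; the embeddings here are random placeholders standing in for the outputs of the model's image and text towers.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(image_emb, label_embs):
    """Return the best-matching label and all similarity scores."""
    scores = {label: cosine_similarity(image_emb, emb)
              for label, emb in label_embs.items()}
    return max(scores, key=scores.get), scores

# Placeholder embeddings; in practice these would come from the
# encoder's image and text towers, which map both modalities into
# the same embedding space.
rng = np.random.default_rng(0)
normal_emb = rng.normal(size=128)
image_emb = normal_emb + 0.1 * rng.normal(size=128)  # image close to "normal"
label_embs = {
    "normal chest X-ray": normal_emb,
    "chest X-ray with pleural effusion": rng.normal(size=128),
}

best, scores = zero_shot_classify(image_emb, label_embs)
print(best)  # the label whose embedding best matches the image
```

The same similarity scores can drive search: embed a text query once, then rank a collection of image embeddings by similarity to it.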
Full details on the development and evaluation of MedGemma and MedSigLIP can be found in the MedGemma technical report.

















