Helmholtz AI Conference 2025

TerraMind: Large-Scale Generative Multimodality for Earth Observation
Authors
J. Jakubik, B. Blumenstiel, N. Kopp, T. Brunschwiler, J. Bernabe Moreno
Abstract
Unlike other multimodal models, TerraMind is pretrained on dual-scale representations that combine token-level and pixel-level data across modalities. At the token level, TerraMind encodes high-level contextual information to learn cross-modal relationships, while at the pixel level it leverages fine-grained representations to capture critical spatial nuances. We pretrained TerraMind on nine geospatial modalities from a global, large-scale dataset. In this paper, we demonstrate that (i) TerraMind's dual-scale early-fusion approach unlocks a range of zero-shot and few-shot applications for Earth observation (EO), (ii) TerraMind introduces "thinking in modalities" (TiM), the capability of generating additional artificial data during finetuning and inference to improve the model output, and (iii) TerraMind outperforms the state of the art on community-standard EO benchmarks such as PANGAEA. The pretraining dataset, the model weights, and our code will be open-sourced under a permissive license.
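
As a hedged illustration of the TiM idea, the minimal Python sketch below shows an inference loop in which a generative backbone first produces an auxiliary artificial modality (here, a land-cover map) from the observed input and then conditions its final prediction on both the observed and the generated data. All class, method, and modality names (GenerativeBackbone, generate, predict, "S2_L2A", "land_cover") are placeholder assumptions for illustration only, not the released TerraMind API.

    # Hedged sketch of "thinking in modalities" (TiM): generate an auxiliary
    # modality first, then predict using observed + generated data.
    # All names below are hypothetical placeholders, not the TerraMind API.
    import numpy as np

    class GenerativeBackbone:
        """Stand-in for a pretrained any-to-any generative EO model."""

        def generate(self, source: np.ndarray, target_modality: str) -> np.ndarray:
            # Placeholder: a real model would decode tokens of `target_modality`
            # conditioned on the source observation.
            return np.zeros(source.shape[:2] + (1,))

        def predict(self, inputs: dict) -> np.ndarray:
            # Placeholder: fuse all provided modalities into a single-channel
            # output map of the same spatial size.
            return np.mean([x.mean(axis=-1) for x in inputs.values()], axis=0)

    def tim_inference(model: GenerativeBackbone, s2_image: np.ndarray) -> np.ndarray:
        """Generate an intermediate modality, then predict from observed + generated data."""
        land_cover = model.generate(s2_image, target_modality="land_cover")  # "think"
        return model.predict({"S2_L2A": s2_image, "land_cover": land_cover})  # predict

    if __name__ == "__main__":
        prediction = tim_inference(GenerativeBackbone(), np.random.rand(224, 224, 12))
        print(prediction.shape)  # (224, 224)

The key design point of TiM, reflected in the sketch, is that the generated modality is used as additional conditioning during finetuning and inference rather than being the final output itself.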