Researchers have developed a novel deep-learning architecture that generates high-quality synthetic images of rare land covers, reducing bias and improving the accuracy of environmental monitoring systems.

New highly accurate AI model for mapping rare landscapes

Hyderabad

Researchers from the National Remote Sensing Centre at ISRO and GGS Indraprastha University have developed a new artificial intelligence model designed to tackle one of the most persistent problems in satellite mapping: the invisibility of rare landscapes. Their system, called MO-DGAN, creates highly realistic synthetic images to help computers better recognise underrepresented features like rivers, forests, and residential areas. The aim is to ensure that AI mapping tools no longer ignore rare but vital geographical features simply because they are not encountered as often as common farmland.

In typical satellite datasets, such as the European EuroSAT imagery used in this study, common land covers such as annual crops dominate the data, while features like rivers or specific forest types are rare. This is known as the class imbalance problem. When a standard machine learning model is trained on this lopsided data, it becomes lazy, often defaulting to the majority class and failing to identify the rarer ones.
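
The effect is easy to see with a toy calculation. The sketch below uses made-up class counts, not the actual EuroSAT figures, and simply shows why a model that always guesses the majority class can still look deceptively accurate.

```python
# Toy illustration of the class imbalance problem
# (illustrative class counts, not the real EuroSAT distribution).
from collections import Counter

# Hypothetical, lopsided training set
labels = ["AnnualCrop"] * 9000 + ["River"] * 500 + ["Residential"] * 500

counts = Counter(labels)
majority_class, majority_count = counts.most_common(1)[0]

# A "lazy" model that always predicts the majority class still scores high accuracy
accuracy_of_always_majority = majority_count / len(labels)
print(f"Class counts: {dict(counts)}")
print(f"Always predicting '{majority_class}' scores "
      f"{accuracy_of_always_majority:.0%} accuracy, yet never finds a single river.")
```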

To fix this, the researchers turned to Generative Adversarial Networks, or GANs. These are systems in which two neural networks, a Generator and a Discriminator, compete. The Generator acts like an art forger trying to create a realistic fake image, while the Discriminator acts like a detective trying to spot the fraud. Over time, the Generator becomes so skilled that it produces images nearly indistinguishable from real satellite photos.
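
For readers curious about the mechanics, here is a minimal sketch of that adversarial setup in PyTorch. The tiny fully connected networks, image size and learning rates are placeholders chosen for illustration; they do not reflect MO-DGAN's actual architecture.

```python
# Minimal GAN training loop (illustrative sketch, not the MO-DGAN architecture).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32 * 3  # tiny placeholder sizes

# Generator: maps random noise to a fake "image" vector (the forger)
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: scores how "real" an image looks (the detective)
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the Discriminator to tell real images from forgeries
    z = torch.randn(batch, latent_dim)
    fake_images = G(z).detach()
    d_loss = bce(D(real_images), real) + bce(D(fake_images), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the Generator to fool the Discriminator
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a random batch standing in for rare-class images
print(train_step(torch.rand(16, img_dim) * 2 - 1))
```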

Next, the team improved on the GAN by integrating a Variational Autoencoder (VAE) into the process. Traditional GANs often suffer from mode collapse, a failure in which the generator settles on a narrow set of near-identical images instead of exploring the full variety of the data. By using the VAE, the MO-DGAN model learns a richer, more flexible mathematical map of what an image can look like, allowing it to generate a wide variety of diverse, high-quality samples.
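
The VAE idea can be sketched in the same spirit. The network sizes and loss terms below are simplified placeholders rather than the paper's exact formulation; the point is only to show how the encoder maps images onto a smooth latent "map" that a generator can sample from.

```python
# Minimal VAE sketch (placeholder sizes; not the exact MO-DGAN formulation).
import torch
import torch.nn as nn

img_dim, latent_dim = 32 * 32 * 3, 64

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)      # mean of the latent Gaussian
        self.to_logvar = nn.Linear(256, latent_dim)  # log-variance of the latent
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, img_dim), nn.Tanh())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample a latent point while keeping gradients
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(z)
        # The KL term pulls the latent space towards a smooth Gaussian "map",
        # which discourages collapsing onto one repetitive output
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        recon_loss = nn.functional.mse_loss(recon, x)
        return recon, recon_loss + kl

vae = VAE()
images = torch.rand(8, img_dim) * 2 - 1   # stand-in batch of rare-class images
recon, loss = vae(images)
print(loss.item())
```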

Furthermore, while older models often struggled when several types of rare landscapes had to be synthesised at once, the new approach can handle multiple minority classes simultaneously. The researchers also compared five well-established deep learning classifier architectures, finding that a network called ResNet101 was the most effective detector for their model, achieving the highest classification accuracy of the architectures tested.
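
On the classification side, the best-performing setup can be approximated as fine-tuning an off-the-shelf ResNet101 on a mix of real and synthetic images. The dataset loader, class count and hyperparameters below are stand-ins, not the authors' training configuration.

```python
# Sketch of using ResNet101 as the downstream land-cover classifier
# (loader, class count and hyperparameters are placeholders, not the study's setup).
import torch
import torch.nn as nn
from torchvision.models import resnet101

num_classes = 10  # e.g. ten land-cover categories

# Off-the-shelf ResNet101 backbone; pretrained weights could be loaded instead
model = resnet101(weights=None)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_epoch(loader):
    """One pass over a loader that mixes real and GAN-generated images."""
    model.train()
    for images, labels in loader:        # images: (N, 3, 224, 224) tensors
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Example with a random stand-in batch in place of the augmented dataset
dummy_loader = [(torch.rand(4, 3, 224, 224), torch.randint(0, num_classes, (4,)))]
train_epoch(dummy_loader)
print("one training step completed")
```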

The current model was tested primarily on standard RGB (red, green, blue) imagery, which is what the human eye sees. However, satellites often capture multispectral and hyperspectral data, invisible bands of light that can reveal things like the chemical composition of soil or the health of a leaf. The researchers noted that more work is needed to adapt the model to these complex, multi-layered data types. Additionally, the system still requires at least a small set of real images to seed its generation process, meaning it may still struggle in scenarios where labelled data is almost entirely absent.

The benefits of this work for modern society are profound and practical. By creating accurate Land Use/Land Cover (LULC) maps, city planners can more effectively manage urban expansion, and environmentalists can better track the loss of endangered wetlands or forests. In the event of a natural disaster, such as a flood, having an AI that can accurately distinguish between a permanent river and a newly flooded residential area is a matter of life and death. Ultimately, this research provides the tools needed for a more precise digital mirror of our planet, allowing humanity to monitor and protect the Earth’s surface with unprecedented clarity.
