
India is primarily an agrarian economy, ranking second in the world in farm output and first in crop cover. The country relies on healthy crops to feed its growing population, but plant diseases are a constant threat, capable of wiping out huge portions of harvests. Globally, over 20% of crop losses are attributed to plant diseases. Farmers have traditionally relied on broad applications of fertilisers and pesticides, but these can harm the environment and aren't always the most effective way to tackle specific problems. If the symptoms on individual plants in a field could be spotted simply from an image, the spread of diseases could be curbed without resorting to chemicals. That is the promise of smart agriculture, which uses technology to monitor and manage crops more precisely.
Did You Know? Crops like sesame, linseed, mustard, castor, mung bean, black gram, horse gram, grass pea (khesari), fenugreek, cotton, jujube, grapes, dates, jackfruit, mango, and mulberry are believed to have been grown in India 3,000-6,000 years ago.
Modern tools offer solutions built on Cyber-Physical Systems (CPS), the Internet of Things (IoT) and Artificial Intelligence (AI). IoT and CPS enable devices to communicate with each other, allowing efficient decision-making, while AI tools allow individual devices to make decisions based on patterns in the data. However, these technologies often require substantial computing power. The data is usually analysed using cloud computing, which sends it to a centralised server far away from the farm, delaying any decision-making. For smart agriculture to work efficiently, the computing devices need to be small, affordable, efficient at their task, and located close to the crops.
Edge computing is a promising solution that brings the computation much closer to where the data is collected, in this case, the farm. However, it also limits the computing power and memory available for models to run on. For CPS and AI-based models to function effectively on edge devices, they need to be small, computationally efficient, and optimised for their specific tasks.
Researchers from the Indian Institute of Technology (IIT) Patna, the Indian Institute of Technology (IIT) Bombay, and the Rajiv Gandhi Institute of Petroleum Technology, Amethi, have addressed this problem in their new study. The team developed a new system called EdgePlantNet, specifically designed to run on small, affordable devices like the Raspberry Pi. Their goal was to create something that is not only accurate but also lightweight and fast enough for real-time use.
EdgePlantNet uses a Convolutional Neural Network (CNN), a type of neural network that is particularly good at analysing images. The team developed MLP-ATCNN, a dual-channel CNN equipped with a multi-layer perceptron (MLP)-based spatial attention mechanism. The model examines the plant leaf from two perspectives simultaneously: the original picture of the leaf, and a spatially processed version in which only the parts that appear potentially diseased are retained. This second image is produced through segmentation, which uses a technique called k-means clustering to group pixels of similar colour and pick out the non-green, potentially sick areas.
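To make the idea concrete, here is a minimal sketch of how k-means colour clustering can isolate the non-green regions of a leaf image. It uses OpenCV and scikit-learn rather than the authors' exact pipeline, and the file name, number of clusters, and "greenness" rule are illustrative assumptions only.

```python
# Illustrative sketch: isolate non-green (potentially diseased) leaf regions
# with k-means colour clustering. Not the paper's exact preprocessing; the
# image path, cluster count, and greenness heuristic are assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

image = cv2.imread("leaf.jpg")                       # hypothetical input image
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
pixels = rgb.reshape(-1, 3).astype(np.float32)

# Group pixels into a few colour clusters (k = 3 chosen for illustration).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(rgb.shape[:2])
centres = kmeans.cluster_centers_

# Treat the cluster whose centre is "greenest" (high green relative to red
# and blue) as healthy tissue, and keep everything else.
greenness = centres[:, 1] - 0.5 * (centres[:, 0] + centres[:, 2])
healthy_cluster = int(np.argmax(greenness))
mask = (labels != healthy_cluster).astype(np.uint8)

segmented = rgb * mask[:, :, None]                   # non-green regions retained
cv2.imwrite("leaf_segmented.jpg", cv2.cvtColor(segmented, cv2.COLOR_RGB2BGR))
```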
By examining both the original leaf and the segmented version simultaneously through two parallel branches of the CNN, the system gains a more comprehensive understanding. It can analyse the overall structure of the leaf from the original image while also focusing on the specific details of the diseased spots from the segmented image. This dual approach helps the CNN learn more robust features and get better at recognising the patterns of different diseases.
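The sketch below shows what such a dual-branch design can look like in PyTorch: one branch processes the original image, the other the segmented version, and their features are merged before classification. The layer widths, the class `DualBranchNet`, and the concatenation-based merge are assumptions for illustration, not the published MLP-ATCNN architecture.

```python
# Illustrative two-branch CNN: one branch sees the original leaf image, the
# other the segmented version; features are merged before classification.
# Layer sizes and the merge strategy are assumptions, not the published model.
import torch
import torch.nn as nn

def small_branch():
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualBranchNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.original_branch = small_branch()
        self.segmented_branch = small_branch()
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, original, segmented):
        feats = torch.cat(
            [self.original_branch(original), self.segmented_branch(segmented)],
            dim=1,
        )
        return self.classifier(feats)

model = DualBranchNet(num_classes=10)
logits = model(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 10])
```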
The researchers designed the MLP-ATCNN to be remarkably efficient. Unlike many powerful AI models that have millions of parameters (the AI's internal knobs and dials that it adjusts during training), their model has fewer than 200,000, with the attention component itself accounting for fewer than 5,000 of these. This lightweight design is crucial for enabling the model to run smoothly on small devices, such as those used in IoT systems.
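For a sense of scale, parameter counts like these can be checked directly. Continuing the illustrative `DualBranchNet` sketch above (which is far smaller than, and not the same as, the published MLP-ATCNN):

```python
# Count trainable parameters of the illustrative DualBranchNet defined above.
# The real MLP-ATCNN is reported at under 200,000 parameters; this only shows
# how such counts are obtained.
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"DualBranchNet (sketch): {n_params:,} trainable parameters")
```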
They tested EdgePlantNet on two different datasets of plant leaf images. One, called PlantVillage, has images taken in a controlled lab setting with uniform backgrounds. The other, called BPLD, is more challenging because it includes images taken in natural settings with diverse backgrounds. They compared EdgePlantNet to several other state-of-the-art AI models.
The results were impressive. EdgePlantNet achieved very high accuracy rates in detecting diseases across various plants, including apple, potato, tomato, maize, and black gram, often outperforming other models. For example, it achieved 99.2% accuracy for potato leaves and 97.1% for tomato leaves on the PlantVillage dataset, as well as 95.72% for black gram leaves on the more challenging BPLD dataset. Crucially, it performed exceptionally well on "few-shot" diseases – those for which only a small number of training examples were available. This is important because in the real world, you might encounter new or rare diseases.
Compared to other highly accurate models, such as ResNet and DenseNet, EdgePlantNet was significantly faster and much smaller. While some other faster models existed, they weren't as accurate, especially when dealing with the diverse backgrounds found in real fields. EdgePlantNet struck the best balance between high accuracy, small size, and fast performance, making it the most suitable option for running on edge devices. The team demonstrated this by implementing it on a Raspberry Pi, where it processed images at around 3.65 frames per second while using only about 4 megabytes of memory, an order of magnitude less than other models.
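A frames-per-second figure like this can be estimated with a simple timing loop over repeated forward passes. The sketch below is a rough, illustrative benchmark on CPU, not the authors' measurement protocol; the helper `measure_fps`, the stand-in network, the input size, and the iteration counts are all assumptions.

```python
# Rough CPU throughput check for an image classifier on an edge device.
# Illustrative only: input size, warm-up count, and iteration count are
# arbitrary choices, and the toy network stands in for a trained model.
import time
import torch
import torch.nn as nn

def measure_fps(model: nn.Module, input_shape=(1, 3, 128, 128), iters=50) -> float:
    model.eval()
    dummy = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(5):                  # warm-up passes, not timed
            model(dummy)
        start = time.perf_counter()
        for _ in range(iters):
            model(dummy)
        elapsed = time.perf_counter() - start
    return iters / elapsed

# Example with a trivial stand-in network; on a Raspberry Pi one would load
# the trained edge model instead.
toy = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
print(f"{measure_fps(toy):.2f} frames per second")
```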
The research offers a practical step forward for smart agriculture. By providing a system that can accurately detect plant diseases in real-time using affordable, low-power devices, EdgePlantNet empowers farmers with the information they need to intervene quickly and precisely. This could lead to healthier crops, higher yields, reduced reliance on broad-spectrum pesticides, and ultimately, more sustainable and efficient food production for everyone. While the researchers note that future work could explore detecting multiple diseases on a single leaf or handling even more extreme environmental variations, this system represents a significant advancement in bringing powerful AI tools out of the lab and into the field.
This article was written with the help of generative AI and edited by an editor at Research Matters.