Denoising images in a jiffy

May 25, 2017
Read time: 4 mins

A shuttle radar topographic image over the lower Himalayas

Haven’t we all tried to get that perfect photo of a dinner table, only to be disappointed by the lack of ambient light and a poor quality image? Blame it on the ‘noise’, the random variation of brightness or colour information. Such noise in images not only results in an unpleasant digital experience, but could also make a doctor’s diagnosis error-prone, putting lives at stake. Denoising digital images, whether taken by a smartphone’s camera or by a medical imaging machine, is therefore extremely important in imaging applications. In a new study, researchers from the Video Analytics Laboratory at the Department of Computational and Data Sciences, Indian Institute of Science (IISc), have proposed a fast and efficient algorithm to remove image noise.

Noise can creep into images during capture, transmission or storage, presenting itself as grains in the image, or as dark spots in bright areas and bright spots in dark areas. A good image-processing algorithm should remove most of this noise as quickly as possible. For large images, such as satellite images or video streams, the need for a faster denoising algorithm becomes paramount. This new study, supported by the Space Technology Cell of IISc, proposes an algorithm based on a method called ‘NL-means’ that uses information from similar neighbouring patches to eliminate inconsistent pixel values.
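The article does not specify a particular noise model, but additive Gaussian noise is a common assumption in denoising research. As a rough illustration of how graininess arises, here is a minimal Python sketch; the function name and parameters are illustrative and not taken from the study:

```python
import numpy as np

def add_gaussian_noise(image, sigma=0.1, seed=0):
    """Corrupt a clean image (pixel values in [0, 1]) with additive
    Gaussian noise, producing the grainy appearance described above."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)  # keep pixel values in the valid range

# Example: a flat grey patch becomes visibly grainy once noise is added.
clean = np.full((8, 8), 0.5)
noisy = add_gaussian_noise(clean, sigma=0.1)
```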

“NL-Means comes from a class of denoising algorithms which uses neighbourhood information to obtain a reasonable estimate of the denoised patch”, says Mr. Nitish Divakar, a co-author of the study at the Video Analytics Lab, IISc. Though simpler than some existing methods, NL-means requires many input pixels to compute the value of a few output pixels, making the process computationally complex, resource-intensive and time consuming.
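To see why the classic method is expensive, consider this simplified, single-pixel sketch of NL-means in Python. The parameter names are assumptions for illustration, not the study’s implementation; the point is that every output pixel requires comparing the patch around it with every patch in a search window:

```python
import numpy as np

def nl_means_pixel(image, row, col, patch_radius=1, search_radius=5, h=0.1):
    """Estimate one denoised pixel as a weighted average of pixels whose
    surrounding patches resemble the patch around (row, col)."""
    pad = patch_radius + search_radius
    padded = np.pad(image, pad, mode='reflect')
    r, c = row + pad, col + pad

    def patch(i, j):
        return padded[i - patch_radius:i + patch_radius + 1,
                      j - patch_radius:j + patch_radius + 1]

    ref = patch(r, c)
    weights, values = [], []
    # Every output pixel compares its patch against every patch in the
    # search window, which is what makes plain NL-means slow on large images.
    for i in range(r - search_radius, r + search_radius + 1):
        for j in range(c - search_radius, c + search_radius + 1):
            dist2 = np.mean((patch(i, j) - ref) ** 2)
            weights.append(np.exp(-dist2 / (h * h)))  # similar patches weigh more
            values.append(padded[i, j])
    weights = np.array(weights)
    return float(np.dot(weights, values) / weights.sum())
```

Repeating this computation for every pixel of a very large image, such as a satellite photograph, quickly becomes prohibitive.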

In the newly developed ‘approximate NL-means’ algorithm, the researchers use a method called ‘Locality Sensitive Hashing’ to quickly estimate the similarity between patches. Finding similar neighbouring pixels amounts to finding clusters of pixels that are close together and have similar values. Since finding such clusters directly is computationally very expensive, ‘Locality Sensitive Hashing’ puts similar pixels into ‘buckets’, reducing a large data set to a smaller one and considerably accelerating the denoising algorithm.
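The article does not describe the exact hashing scheme used in the paper; one common form of Locality Sensitive Hashing is sign random projections, sketched below with assumed function and parameter names. Patches that hash to the same code land in the same bucket and only need to be compared with each other:

```python
import numpy as np

def lsh_bucket_patches(patches, n_bits=8, seed=0):
    """Group patch vectors (one patch per row) into buckets using
    random-projection LSH: patches whose projections share the same
    sign pattern receive the same hash code."""
    rng = np.random.default_rng(seed)
    projections = rng.normal(size=(patches.shape[1], n_bits))
    codes = (patches @ projections) > 0      # boolean hash code per patch
    keys = np.packbits(codes, axis=1)        # pack bits into compact bucket keys
    buckets = {}
    for idx, key in enumerate(map(bytes, keys)):
        buckets.setdefault(key, []).append(idx)
    return buckets

# Denoising can then average each patch only with the members of its own
# bucket, instead of scanning a large search window around every pixel.
```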

The algorithm proposed in the study is well suited to Graphics Processing Units (GPUs), which can process instructions in parallel and are hence faster than a normal processor at such tasks. Algorithms that execute the same set of instructions on different pieces of data can run extremely fast on GPUs. The researchers measured the performance of their method on an NVIDIA Quadro 2000 GPU using a standard set of test images.

The researchers found that the new ‘approximate NL-means’ algorithm ran at least twice as fast as the conventional ‘NL-means’ algorithm on a standard CPU with a single thread and no parallel operation. The same algorithm did even better on parallel processors or GPUs, where the speed improved by a factor of at least 100. They observed that the speed advantage of the new algorithm grew with image size. They also evaluated the denoising quality on various images at various noise levels and found that the ‘approximate NL-means’ algorithm retained well above 90% of the denoising performance of ‘NL-means’.

But what kind of images could benefit from such superfast processing?

“Satellite images captured nowadays are more than 10K by 10K pixels resolution. They require faster denoising algorithms such as ours to be practically useful. The primary objective of this work was to develop a scalable denoising algorithm for huge satellite images,” comments Prof. Venkatesh Babu.

On whether the technique can be extended to video streams, the researchers believe that, although this study did not include experiments on video, the method can be readily extended to videos. Further research on the algorithm could also exploit temporal redundancy, a property widely used in video compression.