Enhancement of Underwater Images for Object Detection
Marine habitats are an increasingly relevant research field, and with more capable and affordable options for capturing images in underwater environments, the need for effective pipelines to process these images becomes apparent. We compare image enhancement methods, namely contrast limited adaptive histogram equalization (CLAHE), multi-scale retinex with color restoration (MSRCR), and a fusion-based approach, with respect to their efficacy in an object detection pipeline. Their specific use in the training and inference process with convolutional neural network (CNN) models is evaluated. In our setup, we build flexible pipelines to train several models with different enhancement strategies for the training dataset and assess their detection capabilities by measuring their inference precision on differently enhanced test datasets. We chose the regions with CNN (R-CNN) architectures Faster R-CNN and Mask R-CNN for our analysis, as they are widely used and deployed in many practical applications. We found that applying these enhancement methods during the training phase yields better models, though their benefit in an inference pipeline remains inconclusive. Our data show that a significant subset of the images would benefit from some form of enhancement, as these methods mitigate some of the image degradations introduced by the underwater environment. With this work, we therefore argue for the necessity of a reliable method that determines the best enhancement procedure for each image as part of an extended detection process.
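To illustrate the principle behind CLAHE: it extends ordinary (global) histogram equalization by operating on local tiles and clipping the histogram to limit contrast amplification. The following is a minimal, dependency-free sketch of the *global* variant only, written for this document as an illustration; it is not the implementation used in the paper, and production code would typically use a library routine (e.g. OpenCV's `cv2.createCLAHE`), which adds the tiling and clip-limit steps.

```python
def equalize_hist(img, levels=256):
    """Global histogram equalization on a 2-D list of ints in [0, levels).

    CLAHE applies this same remapping per tile, with the histogram
    clipped before the CDF is computed; this sketch shows only the
    core CDF-based intensity remapping.
    """
    flat = [p for row in img for p in row]
    # Build the intensity histogram.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of the intensities.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)

    def remap(p):
        # Stretch the occupied part of the CDF over the full range.
        if n == cdf_min:  # constant image: nothing to equalize
            return 0
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in img]


# A low-contrast gradient (values 100..115) is stretched to 0..255.
low_contrast = [[100 + 4 * i + j for j in range(4)] for i in range(4)]
enhanced = equalize_hist(low_contrast)
```

Underwater images typically suffer exactly this kind of compressed dynamic range (haze and color cast), which is why contrast-stretching methods such as CLAHE are natural candidates for the enhancement stage.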