Underwater Scene Segmentation by Deep Neural Network


Title: Underwater Scene Segmentation by Deep Neural Network
Authors: Yang Zhou (Loughborough University); Jiangtao Wang (Loughborough University); Baihua Li (Loughborough University); Qinggang Meng (Loughborough University); Emanuele Rocco (Witted Srl); Andrea Saiani (Witted Srl)
Year: 2019
Citation: Zhou, Y., Wang, J., Li, B., Meng, Q., Rocco, E., & Saiani, A. (2019). Underwater Scene Segmentation by Deep Neural Network. UK-RAS19 Conference: “Embedded Intelligence: Enabling & Supporting RAS Technologies” Proceedings, 44-47. doi: 10.31256/UKRAS19.12

Abstract:

A deep neural network architecture is proposed in this paper for underwater scene semantic segmentation. The architecture consists of an encoder network and a decoder network. A pre-trained VGG-16 network is used as the feature extractor, while the decoder learns to upsample the lower-resolution feature maps. The network applies the max un-pooling operator to avoid a large number of learnable parameters and, in order to make use of the encoder feature maps, concatenates the corresponding encoder and decoder feature maps at lower resolutions. Our architecture shows faster convergence and better accuracy. To obtain a clearer view of the underwater scene, an underwater image enhancement neural network architecture is also described in this paper and applied during training, which speeds up the training process and improves the convergence rate.
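
The encoder-decoder design described in the abstract can be sketched roughly as follows in PyTorch. This is a minimal illustration under assumptions, not the paper's exact network: the block depths, channel counts, class count (num_classes=8), and the UnderwaterSegNet/vgg_block names are invented for the example, only two encoder stages are shown, and pre-trained VGG-16 weights are not actually loaded. It is meant only to show the two mechanisms named above: max un-pooling with stored pooling indices (so upsampling adds no learnable parameters) and concatenation of encoder feature maps with the decoder.

```python
import torch
import torch.nn as nn


def vgg_block(in_ch, out_ch, n_convs):
    """VGG-16-style block: stacked 3x3 convolutions, each followed by ReLU."""
    layers = []
    for i in range(n_convs):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        ]
    return nn.Sequential(*layers)


class UnderwaterSegNet(nn.Module):
    """Encoder-decoder with max un-pooling and encoder/decoder feature concatenation."""

    def __init__(self, num_classes=8):  # the class count is an assumption
        super().__init__()
        # Encoder: VGG-16-style feature extractor. In the paper the encoder reuses
        # pre-trained VGG-16 weights; here the blocks are freshly initialised and
        # only two stages are shown to keep the sketch short.
        self.enc1 = vgg_block(3, 64, 2)
        self.enc2 = vgg_block(64, 128, 2)
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)  # keep indices for un-pooling

        # Decoder: max un-pooling reuses the pooling indices, so upsampling itself
        # adds no learnable parameters; the decoder convolutions then refine the
        # concatenation of the un-pooled map and the matching encoder feature map.
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec2 = vgg_block(128 + 128, 64, 2)  # un-pooled 128 ch + encoder skip 128 ch
        self.dec1 = vgg_block(64 + 64, 64, 2)    # un-pooled 64 ch + encoder skip 64 ch
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        f1 = self.enc1(x)            # 64  x H   x W
        p1, idx1 = self.pool(f1)     # 64  x H/2 x W/2
        f2 = self.enc2(p1)           # 128 x H/2 x W/2
        p2, idx2 = self.pool(f2)     # 128 x H/4 x W/4

        u2 = self.unpool(p2, idx2)                  # back to 128 x H/2 x W/2
        d2 = self.dec2(torch.cat([u2, f2], dim=1))  # fuse encoder features, -> 64 ch
        u1 = self.unpool(d2, idx1)                  # back to 64 x H x W
        d1 = self.dec1(torch.cat([u1, f1], dim=1))
        return self.classifier(d1)                  # per-pixel class scores


if __name__ == "__main__":
    model = UnderwaterSegNet(num_classes=8)
    scores = model(torch.randn(1, 3, 256, 256))
    print(scores.shape)  # torch.Size([1, 8, 256, 256])
```

Because the un-pooling step only scatters values back to the positions recorded by the encoder's max pooling, the decoder's capacity goes entirely into the convolutions that follow the concatenation, which is the parameter-saving property the abstract refers to.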
