Researchers develop MassMIND: the Massachusetts Maritime INfrared Dataset, a collection of images captured in the long-wave infrared (LWIR) spectrum.

Advances in deep learning algorithms have led to an exponential increase in research on the autonomy of terrestrial vehicles in recent years. Publicly available labeled datasets, open-source software, the deployment of innovative deep learning architectures, and increases in hardware computing capability are all important factors in this progress. The marine environment, with its abundance of routine duties such as observation and long-distance transit, offers a great opportunity for autonomous navigation. The availability of sufficient datasets is a critical dependency for achieving autonomy. Sensors, especially electro-optical (EO) cameras, long-wave infrared (LWIR) cameras, radar, and lidar, help collect large amounts of data about the environment efficiently.

EO cameras are commonly used to capture images because of their adaptability and the abundance of convolutional neural network (CNN) designs that learn from labeled images. The difficulty lies in interpreting this data and producing labeled datasets that can be used to train deep learning models. Images are usually annotated in one of two ways. The first is to detect objects of interest by drawing bounding boxes around them. The second is to semantically segment the image by assigning a class label to each pixel. The first method is faster because it focuses on specific targets, while the second is more precise because it decomposes the entire scene. Since the marine environment consists mainly of sky and ocean, lighting conditions differ dramatically from those on land.
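To make the two annotation styles concrete, below is a minimal Python sketch contrasting a bounding-box label with a per-pixel semantic mask. The class IDs, image size, and coordinates are illustrative assumptions, not the dataset's actual schema.

```python
import numpy as np

# Hypothetical class IDs for illustration only (not MassMIND's actual mapping).
SKY, WATER, OBSTACLE = 0, 1, 2

# Method 1: object detection -- one bounding box per object of interest.
# Each box is (x_min, y_min, x_max, y_max, class_id).
boxes = [(120, 80, 200, 140, OBSTACLE)]

# Method 2: semantic segmentation -- a class ID for every pixel in the frame.
mask = np.full((512, 640), WATER, dtype=np.uint8)  # start with an all-water scene
mask[:250, :] = SKY                                # everything above the horizon
mask[80:140, 120:200] = OBSTACLE                   # the same obstacle, pixel by pixel

# For this rectangular toy obstacle the two labels agree exactly; real obstacles
# are irregular, which is where per-pixel masks earn their extra annotation cost.
print("box area:", (200 - 120) * (140 - 80))                   # 4800
print("mask obstacle pixels:", int((mask == OBSTACLE).sum()))  # 4800
```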

Glint, reflections, water dynamics, and fog are all recurring phenomena. These conditions degrade the quality of optical images. Horizon detection is another typical problem encountered when using optical images. LWIR images, on the other hand, offer distinct advantages under extreme lighting conditions. LWIR sensors have been used in earlier marine robotics work; researchers previously created a distinctive set of paired visible and LWIR images of different types of ships in the offshore area. However, that dataset has several drawbacks, which the paper explains.

This paper presents a dataset of more than 2,900 LWIR maritime images from the Massachusetts Bay area, including the Charles River and Boston Harbor, capturing scenes as diverse as crowded marine environments, construction, living entities, and near-shore views across different seasons and times of day. The images in the dataset are labeled across seven classes using instance and semantic segmentation. The authors also evaluate the dataset on three common deep learning architectures (UNet, PSPNet, and DeepLabv3) and describe their findings in terms of obstacle detection and scene perception. The dataset is freely accessible to the public.
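As a rough illustration of how such an evaluation could be set up, the sketch below instantiates one of the three architectures (DeepLabv3 via torchvision, with a ResNet-50 backbone, an assumed configuration rather than the paper's exact recipe) for seven classes and computes per-class IoU, a standard segmentation metric. Replicating the single-channel LWIR frame to three channels is also an assumption, a common workaround for RGB-oriented backbones.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 7  # MassMIND labels seven classes

# Assumed backbone/configuration; the paper's exact setup may differ.
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

# Placeholder for a normalized single-channel LWIR frame.
lwir = torch.rand(1, 1, 512, 640)
x = lwir.repeat(1, 3, 1, 1)  # replicate to 3 channels for the RGB-style stem

with torch.no_grad():
    logits = model(x)["out"]   # shape: (1, NUM_CLASSES, 512, 640)
pred = logits.argmax(dim=1)    # per-pixel class predictions

def class_iou(pred, target, cls):
    """Intersection-over-union for one class; NaN if the class is absent."""
    inter = ((pred == cls) & (target == cls)).sum().float()
    union = ((pred == cls) | (target == cls)).sum().float()
    return (inter / union).item() if union > 0 else float("nan")

target = torch.randint(0, NUM_CLASSES, (1, 512, 640))  # stand-in ground truth
print([round(class_iou(pred, target, c), 3) for c in range(NUM_CLASSES)])
```

UNet and PSPNet could be swapped in the same way, for instance from a library such as segmentation_models_pytorch, with the per-class IoU computation unchanged.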

With this dataset, the authors hope to spur research interest in perception for marine autonomy. The paper describes the suite of sensors used for data collection, along with the data collection and segmentation methodology, and shares evaluation results for the three architectures. It also reviews the state of the art in the maritime domain.

This article is written as a research summary by Marktechpost staff based on the research paper 'MassMIND: Massachusetts Maritime INfrared Dataset'. All credit for this research goes to the researchers on this project. Check out the paper and GitHub link.
