Under-water SONAR Classification

Train underwater SONAR image classification models and generate explanations using LIME-based Explainable Artificial Intelligence

MIT License · Python 3.6+ · TensorFlow/Keras · Transfer Learning · XAI/LIME

Why This Project?

Powerful, flexible, and explainable underwater SONAR image classification

Transfer Learning

Leverage pre-trained state-of-the-art CNN models (VGG, ResNet, Inception, etc.) for robust underwater SONAR image classification.

Explainable AI (XAI)

Generate visual explanations using LIME and SP-LIME to understand model predictions and build trust in critical applications.

Flexible Training

Customizable training pipeline with support for multiple optimizers, learning rates, batch sizes, and early stopping.

Data Augmentation

Built-in data augmentation to improve model generalization and prevent overfitting on limited sonar datasets.

Comprehensive Testing

Evaluate models with confusion matrices, classification reports, and detailed performance metrics.

Easy Inference

Simple API for making predictions on new sonar images with or without explanations.

Architecture

System architecture for underwater SONAR image classification with XAI


Workflow

The pipeline consists of data loading with optional augmentation, transfer learning-based model training using pre-trained CNN architectures, and inference with optional LIME/SP-LIME explanations for model interpretability.
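The training stage above can be sketched in a few lines of Keras. This is an illustrative sketch, not the exact contents of `train.py`: the head layout (pooling, 256-unit dense layer, dropout) and the class count of 4 are assumptions, and `weights=None` is used here only to avoid downloading ImageNet weights (in practice you would pass `weights="imagenet"`).

```python
# Minimal transfer-learning sketch: a frozen pre-trained VGG16 base
# with a new classification head for sonar classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # hypothetical number of sonar classes

base = tf.keras.applications.VGG16(
    include_top=False, weights=None, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

Freezing the base means only the small head is trained, which is what makes transfer learning viable on limited sonar datasets.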

Supported Models

Choose from a wide range of pre-trained CNN architectures

VGG16

VGG19

ResNet50

ResNet101

InceptionV3

DenseNet121

DenseNet201

MobileNetV2

Xception

InceptionResNetV2

NASNetLarge

NASNetMobile

EfficientNetB0

EfficientNetB7

Documentation

Everything you need to get started

Quick Start

Get started with underwater SONAR classification in minutes

Installation

# Clone the repository
git clone https://github.com/Purushothaman-natarajan/Under-water-sonar-image-classification.git
cd Under-water-sonar-image-classification

# Install dependencies
pip install -r requirements.txt

Dataset Structure

Dataset (Raw)
├── class_name_1
│   └── *.jpg
├── class_name_2
│   └── *.jpg
├── class_name_3
│   └── *.jpg
└── class_name_4
    └── *.jpg

Step 1: Data Loading & Processing

# Process and prepare dataset
python data_loader.py --path ./data --target_folder ./processed_data --dim 224 --batch_size 32 --num_workers 4 --augment_data
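The `--augment_data` flag enables augmentation during preprocessing. As a from-scratch NumPy sketch of the kind of label-preserving transforms typically used (the exact transforms in `data_loader.py` are an assumption here): random flips and 90-degree rotations applied to an HxWxC image array.

```python
# Illustrative augmentation sketch (random flips and 90-degree rotations),
# using plain NumPy on an HxWxC image array.
import numpy as np

def augment(image, rng):
    """Return a randomly flipped/rotated copy of `image`."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)                       # horizontal flip
    if rng.random() < 0.5:
        out = np.flipud(out)                       # vertical flip
    out = np.rot90(out, k=rng.integers(0, 4))      # random 90-degree rotation
    return out

rng = np.random.default_rng(0)
img = np.zeros((224, 224, 3), dtype=np.uint8)
aug = augment(img, rng)
```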

Step 2: Training

# Train with VGG16
python train.py --base_models VGG16 --shape 224 224 3 --data_path ./processed_data --log_dir ./logs --model_dir ./models --epochs 100 --optimizer adam --learning_rate 0.001 --batch_size 32 --patience 10

# Train with multiple models
python train.py --base_models VGG16 ResNet50 --shape 224 224 3 --data_path ./processed_data --log_dir ./logs --model_dir ./models --epochs 100
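The `--patience` flag controls early stopping: training halts once validation loss has gone `patience` epochs without improving. A minimal sketch of that rule (the monitored metric and tie-breaking details in `train.py` are assumptions):

```python
# Early-stopping rule: stop when validation loss has not improved
# for `patience` consecutive epochs.
def early_stop_epoch(val_losses, patience):
    """Return the epoch index at which training would stop, or None."""
    best, best_epoch = float("inf"), -1
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch      # new best: reset the counter
        elif epoch - best_epoch >= patience:
            return epoch                        # patience exhausted
    return None

stop = early_stop_epoch([1.0, 0.8, 0.7, 0.75, 0.72], patience=2)
```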

Step 3: Testing

# Test trained model
python test.py --model_path ./models/vgg16_model.keras --test_dir ./test_data --train_dir ./processed_data/train --log_dir ./logs
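The confusion matrix and accuracy that `test.py` reports can be computed from scratch with NumPy; this sketch uses tiny hypothetical label arrays, not real evaluation output.

```python
# Confusion matrix and overall accuracy from true/predicted labels.
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1       # rows: true class, cols: predicted class
    return cm

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
accuracy = np.trace(cm) / cm.sum()   # diagonal = correct predictions
```

Per-class precision and recall follow directly: precision is each diagonal entry over its column sum, recall over its row sum.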

Step 4: Prediction

# Predict on new image
python predict.py --model_path ./models/vgg16_model.keras --img_path ./test_image.jpg --train_dir ./processed_data/train
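After the forward pass, `predict.py` maps the model's raw scores to a class name. A minimal sketch of that last step (the class names here are hypothetical placeholders, and the softmax-over-logits assumption may differ from the script, which can read probabilities directly from the model):

```python
# Map raw scores to a class name via softmax + argmax.
import numpy as np

def predict_label(logits, class_names):
    probs = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs /= probs.sum()
    idx = int(np.argmax(probs))
    return class_names[idx], float(probs[idx])

label, conf = predict_label(np.array([0.1, 2.5, 0.3]),
                            ["mine", "wreck", "seafloor"])
```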

Arguments Reference

1. data_loader.py: --path, --target_folder, --dim, --batch_size, --num_workers, --augment_data

2. train.py: --base_models, --shape, --data_path, --log_dir, --model_dir, --epochs, --optimizer, --learning_rate, --batch_size, --patience

3. test.py: --model_path, --test_dir, --train_dir, --log_dir

Explainable AI (XAI)

Generate visual explanations for model predictions using LIME

LIME

Local Interpretable Model-agnostic Explanations (LIME) generates local explanations by perturbing the input image and observing how the prediction changes. It identifies the most important regions for the model's decision.
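The core of that procedure can be sketched from scratch, without the `lime` library: randomly switch superpixels on and off, query the model on each perturbed image, and fit a locality-weighted linear model whose coefficients rank the regions. This is the idea only, not the library's implementation; the exponential kernel and sample count are simplifications.

```python
# From-scratch sketch of LIME's core loop over superpixel masks.
import numpy as np

def lime_weights(predict_fn, n_regions, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, n_regions))  # on/off regions
    preds = np.array([predict_fn(m) for m in masks])         # model output per perturbation
    sim = np.exp(-((1 - masks.mean(axis=1)) ** 2))           # weight samples near the original
    W = np.sqrt(sim)[:, None]
    coef, *_ = np.linalg.lstsq(W * masks, W[:, 0] * preds, rcond=None)
    return coef  # one importance score per region

# Toy "model" whose prediction depends only on region 2:
weights = lime_weights(lambda m: float(m[2]), n_regions=4)
```

For the toy model above, the fitted coefficients correctly single out region 2 as the only important one.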

SP-LIME

Submodular Pick LIME (SP-LIME) selects a representative subset of explanations that covers different image regions while avoiding redundancy, providing a more comprehensive view of the model's overall behavior.
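The selection step is a greedy coverage maximization. A from-scratch sketch of that idea (not the `lime` library's `SubmodularPick` class; the square-root coverage score follows the original SP-LIME formulation, the rest is simplified):

```python
# Greedy submodular pick: choose explanations that together cover
# as many globally important features as possible, without redundancy.
import numpy as np

def submodular_pick(importances, budget):
    """importances: (n_instances, n_features) matrix of LIME weights."""
    global_imp = np.sqrt(np.abs(importances).sum(axis=0))  # per-feature importance
    chosen, covered = [], np.zeros(importances.shape[1], dtype=bool)
    for _ in range(budget):
        gains = [
            global_imp[covered | (np.abs(row) > 0)].sum() - global_imp[covered].sum()
            for row in importances
        ]
        best = int(np.argmax(gains))   # instance adding the most new coverage
        chosen.append(best)
        covered |= np.abs(importances[best]) > 0
    return chosen

imp = np.array([[1.0, 0.0, 0.0],
                [1.0, 0.9, 0.0],
                [0.0, 0.0, 0.5]])
picked = submodular_pick(imp, budget=2)
```

On the toy matrix, the pick takes instance 1 first (it covers two features at once) and then instance 2 (the only one adding new coverage), skipping the redundant instance 0.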

Generate Explanations

# Generate LIME explanation
python predict_and_explain.py --image_path ./test_image.jpg --model_path ./models/vgg16_model.keras --train_directory ./processed_data/train --explanation_method lime --num_samples 100 --num_features 10

# Generate SP-LIME explanation
python predict_and_explain.py --image_path ./test_image.jpg --model_path ./models/vgg16_model.keras --train_directory ./processed_data/train --explanation_method splime --num_samples 100 --num_features 10 --segmentation_alg quickshift

Arguments

Required

--image_path, --model_path, --train_directory, --explanation_method

Optional

--num_samples (default: 100), --num_features (default: 10), --segmentation_alg (default: quickshift), --kernel_size (default: 2), --max_dist (default: 200), --ratio (default: 0.1)

Citation

If you use this project in your research, please cite our paper

@article{natarajan2024underwater,
  title={Underwater SONAR Image Classification and Analysis using LIME-based Explainable Artificial Intelligence},
  author={Natarajan, Purushothaman and Nambiar, Athira},
  journal={arXiv preprint arXiv:2408.12837},
  year={2024}
}