TL;DR: A study developed a machine learning system to classify coral bleaching from underwater images. In a comparison of ResNet, ViT, and a standard CNN on a diverse dataset, the CNN achieved the highest accuracy at 88%, outperforming the others and offering a robust, lightweight framework for autonomous coral monitoring.
Coral reefs are vital marine ecosystems, providing homes for countless marine organisms and protecting coastlines from natural disasters. However, these crucial ecosystems are under severe threat from pollution, ocean acidification, and rising sea temperatures, leading to widespread coral bleaching. This phenomenon occurs when corals expel the algae living in their tissues, causing them to turn white and become vulnerable to disease and death. Given the urgency of monitoring and protecting coral reefs, researchers are turning to advanced technologies like machine learning to help.
A recent study by Julio Jerison E. Macrohon, PhD, and Gordon Hung from the Hsinchu County American School in Taiwan introduces a new machine learning-based system designed to classify coral bleaching. Their work focuses on developing an efficient and accurate method for identifying healthy and bleached corals using diverse underwater image datasets. This is particularly important because previous studies often struggled with accuracy and generalizability, as they focused on corals from specific locations or lacked comprehensive model comparisons.
The researchers benchmarked and compared three cutting-edge deep learning models: Residual Neural Network (ResNet), Vision Transformer (ViT), and a standard Convolutional Neural Network (CNN). These models were trained on a dataset of 923 labeled multi-condition underwater images, featuring both healthy and bleached corals from various environments like deep seas, marshes, and coastal zones. This diverse dataset helps ensure the models can perform well across different real-world conditions.
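The article does not reproduce the authors' exact architectures, but a compact binary coral classifier along these lines can be sketched in PyTorch. Everything below (layer sizes, the `CoralCNN` name, the input resolution) is an illustrative assumption, not the study's actual configuration:

```python
import torch
import torch.nn as nn

class CoralCNN(nn.Module):
    """Small CNN for binary healthy-vs-bleached classification.
    Layer sizes are illustrative, not the paper's configuration."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to 1x1
        )
        self.classifier = nn.Linear(64, 1)  # single logit: sigmoid gives P(bleached)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = CoralCNN()
batch = torch.randn(4, 3, 128, 128)  # four 128x128 RGB underwater images
logits = model(batch)
print(logits.shape)  # torch.Size([4, 1])
```

Stacked convolutions with pooling are what give a CNN its grip on the local spatial hierarchies the study highlights; each layer sees a progressively wider patch of the image.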
After extensive hyperparameter tuning, the standard CNN model emerged as the top performer, achieving an impressive accuracy of 88%. This result surpassed existing benchmarks and demonstrated the CNN’s strong ability to capture local spatial hierarchies and dependencies in visual data. ResNet-50 also performed well with 86% accuracy, while the Vision Transformer (ViT) lagged behind with 64% accuracy. The study also used other evaluation metrics like precision, recall, and F1-score, confirming the CNN’s superior performance. The CNN also achieved the highest Area Under the Curve (AUC) value of 0.96 on the Receiver Operating Characteristic (ROC) curve, indicating its excellent ability to distinguish between healthy and bleached corals.
The findings from this research offer significant insights into the potential for autonomous coral monitoring. The proposed framework is described as lightweight and flexible, meaning it can be implemented without substantial computational power, making it accessible for wider use by researchers and biologists. This advancement could greatly aid the timely detection of coral bleaching events, enabling more effective conservation and protection of these invaluable marine habitats. For more details, see the full research paper.