MULTIMODAL DATA FUSION FOR ENHANCED DISASTER DETECTION AND CLASSIFICATION
Authors: M. Mahmudha Nasrin, M. Muhseena Farvin, K. Subashree, Dr. M. Priya, M.E., Ph.D.

Abstract: Disasters such as floods and earthquakes pose critical threats to human life and infrastructure, creating a need for fast and precise detection systems. This project proposes a multimodal deep learning system for disaster detection that combines YOLO (You Only Look Once) for flood image classification with a Multilayer Perceptron (MLP) for earthquake prediction from sensor data. The system handles heterogeneous data sources, satellite/surveillance imagery for floods and seismic data for earthquakes, to provide real-time disaster monitoring and early warnings. The models were trained on cleaned datasets from RoboFlow (floods) and Kaggle (earthquakes), reaching 90% overall accuracy after hyperparameter optimization. Computational complexity and false positives were addressed through model and data refinement. The system's performance shows its potential for deployment in disaster-prone areas, with future work aimed at incorporating social media analysis and edge-computing technology for scaling.
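The abstract describes a two-branch design: a YOLO classifier for flood imagery and an MLP for seismic sensor data, combined into a single alert. Below is a minimal sketch of that idea, assuming Ultralytics YOLOv8 and scikit-learn as stand-ins for the authors' models; the weight file `flood_cls.pt`, the sensor feature layout, and the simple threshold-based decision fusion are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the two-branch pipeline outlined in the abstract (assumptions noted below).
import numpy as np
from ultralytics import YOLO                      # flood image branch
from sklearn.neural_network import MLPClassifier  # earthquake sensor branch

# --- Flood branch: YOLO classification on an image ----------------------
flood_model = YOLO("flood_cls.pt")  # hypothetical fine-tuned classification weights

def flood_score(image_path: str) -> float:
    """Return the confidence of the top predicted class for the image."""
    result = flood_model(image_path)[0]
    return float(result.probs.top1conf)

# --- Earthquake branch: MLP on tabular seismic features -----------------
def train_quake_mlp(X_train: np.ndarray, y_train: np.ndarray) -> MLPClassifier:
    """Fit a small MLP on sensor features (e.g. magnitude, depth) with 0/1 labels."""
    mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    mlp.fit(X_train, y_train)
    return mlp

# --- Decision-level fusion: threshold each branch's score ---------------
def disaster_alert(image_path: str, sensor_row: np.ndarray,
                   mlp: MLPClassifier, thresh: float = 0.5) -> dict:
    flood_p = flood_score(image_path)
    quake_p = float(mlp.predict_proba(sensor_row.reshape(1, -1))[0, 1])
    return {
        "flood": flood_p > thresh,
        "earthquake": quake_p > thresh,
        "scores": {"flood": flood_p, "earthquake": quake_p},
    }
```

The fusion step here is the simplest possible choice (per-branch thresholds on independent models); the paper's own fusion strategy and alerting logic may differ.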
Keywords: Deep Learning, YOLO, MLP, Multimodal Data, Disaster Detection, Real-Time Alerts

Published On: 2025-04-06