AI-Enabled Wildfire Detection Using Satellite Imagery
Date: February 20, 2024
Abstract: Wildfires in California have grown in size and intensity since the 1980s, causing significant damage to the environment and human communities. Our study focuses on developing a deep-learning framework for detecting and monitoring wildfires using satellite imagery. Using the Sentinel-2 satellite, we harnessed multispectral data across bands with spatial resolutions ranging from 10 m to 60 m. Bands 12 (SWIR, 2190 nm), 11 (SWIR, 1610 nm), and 4 (Red, 665 nm) were particularly critical to our analysis because of their sensitivity to high temperatures and their ability to penetrate dense smoke that would obscure traditional RGB imaging. Our methodology involved creating a large-scale dataset downloaded through Google Earth Engine, comprising over 50 high-resolution images (1792x1792 pixels), which were divided into 2,450 smaller tiles to improve model training efficiency. Label Studio was used to annotate fire regions and produce accurate segmentation masks for our U-Net-based segmentation model. Data augmentation techniques tripled the dataset to 7,350 images, of which about 65% were allocated for training, 15% for validation, and 20% for testing. We trained a U-Net deep learning model, an architecture known for its effectiveness in image segmentation, composed of convolutional, dropout, and max pooling layers totaling 1,941,105 trainable parameters. Training over 100 epochs produced consistently high accuracy and low loss. Key performance metrics include an accuracy of 98.47%, precision of 90.76%, recall of 80.47%, and an F-score of 85.31%. These results demonstrate the capabilities of combining advanced deep learning techniques with multispectral satellite imagery for effective wildfire monitoring, offering a valuable tool for disaster management and environmental conservation.
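The abstract describes the U-Net only at a high level, so the following is a minimal, illustrative sketch in Keras of the kind of segmentation model outlined above. The 256x256 tile size is an assumption (a 1792x1792 image splits evenly into a 7x7 grid of 256-pixel tiles, consistent with 50 x 49 = 2,450 tiles), the three input channels correspond to bands 12, 11, and 4, and the filter counts and layer layout are illustrative rather than an exact reproduction of the reported 1,941,105-parameter model.

import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    # Two 3x3 convolutions, as in standard U-Net encoder/decoder blocks
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3)):
    # Input: one tile with three spectral channels (B12, B11, B4); assumed, not specified in the abstract
    inputs = layers.Input(shape=input_shape)

    # Encoder: convolution blocks followed by max pooling and dropout
    c1 = conv_block(inputs, 16)
    p1 = layers.Dropout(0.1)(layers.MaxPooling2D()(c1))
    c2 = conv_block(p1, 32)
    p2 = layers.Dropout(0.1)(layers.MaxPooling2D()(c2))
    c3 = conv_block(p2, 64)
    p3 = layers.Dropout(0.2)(layers.MaxPooling2D()(c3))

    # Bottleneck
    b = conv_block(p3, 128)

    # Decoder: transposed convolutions with skip connections to the encoder
    u3 = layers.Concatenate()([layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b), c3])
    c4 = conv_block(u3, 64)
    u2 = layers.Concatenate()([layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4), c2])
    c5 = conv_block(u2, 32)
    u1 = layers.Concatenate()([layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c5), c1])
    c6 = conv_block(u1, 16)

    # Per-pixel fire probability (binary fire/no-fire mask)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)
    return models.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()  # prints the trainable parameter count for comparison with the reported figure

The dropout after each pooling step and the encoder-decoder skip connections follow the standard U-Net design; accuracy, precision, and recall are tracked during training to match the metrics reported in the abstract.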
Keywords: Wildfire Detection, Satellite Imagery, Sentinel-2 Satellite, Deep Learning, U-Net, Image Segmentation
Presenter: Dr. Ali Moghimi is an Assistant Professor of Teaching in the Department of Biological and Agricultural Engineering, where he is the lead faculty advisor for the Agricultural and Environmental Technology major. Ali teaches a wide range of courses, including TAE 10 (Introduction to Technology), TAE 30 (Communications and Computing Technology), ABT 60 (Introduction to Drone Technology), ABT/LDA 150 (Introduction to Geographical Information Systems – GIS), and ABT/HYD 182 (Environmental Analysis Using GIS). Ali’s research interests include remote sensing, GIS, and applied machine learning and deep learning.