Authors
Ashwath Ramakrishnan (CIRES, Earth Lab), Ryan Cassotto (CIRES, ESOC), Cibele Amaral (CIRES, Earth Lab), Kristy F. Tiampo (CIRES, ESOC, Department of Geological Sciences), E. Natasha Stavros (WKID Solutions LLC)
Abstract
Generating timely and precise assessments of wildfire impacts is vital to effective response and mitigation strategies. Conventional methods rely predominantly on optical satellite imagery, which is hindered by obstacles such as smoke and clouds. Synthetic Aperture Radar (SAR) offers a promising all-weather alternative, albeit with significant processing requirements. We propose an approach that automates wildfire image segmentation using two deep learning models trained on SAR and optical data obtained through the open-source Sentinel and Landsat APIs. The models are based on the DeepLabV3 architecture with a ResNet-101 backbone. Training annotations were created using SAR coherence and polarimetric techniques and a differenced Normalized Burn Ratio (dNBR) thresholding method applied to the optical images. By leveraging the strengths of both modalities, our system mitigates the inherent limitations of each, enhancing near-real-time wildfire assessment capabilities. Users will input wildfire coordinates and date and time parameters, prompting our software to dynamically select the optimal data source (SAR or optical) based on availability and quality. The deep learning models perform pixel-wise classification, yielding 10 m resolution burned/unburned segmentation maps. This automated process greatly reduces processing time and human intervention, facilitating rapid wildfire impact evaluation. Our approach promises greater availability and accessibility than traditional methods. Through the fusion of deep learning techniques and the integration of SAR and optical data, our software will provide actionable insights for wildfire management and response efforts.
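The dNBR thresholding step used to generate optical training annotations can be sketched as below. This is a minimal illustration, not the authors' implementation: the band roles (NIR and SWIR2 reflectance, e.g. Landsat 8 bands 5 and 7) and the 0.27 burn threshold (a commonly cited moderate-severity cutoff) are assumptions not stated in the abstract.

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR and SWIR2 reflectance arrays."""
    nir = nir.astype(float)
    swir2 = swir2.astype(float)
    # Small epsilon avoids division by zero over water/no-data pixels.
    return (nir - swir2) / (nir + swir2 + 1e-9)

def dnbr_burn_mask(pre_nir, pre_swir2, post_nir, post_swir2, threshold=0.27):
    """Pixel-wise burned/unburned mask via differenced NBR.

    dNBR = pre-fire NBR - post-fire NBR; larger values indicate
    more severe burning. The 0.27 threshold is illustrative only.
    Returns uint8 array: 1 = burned, 0 = unburned.
    """
    dnbr = nbr(pre_nir, pre_swir2) - nbr(post_nir, post_swir2)
    return (dnbr >= threshold).astype(np.uint8)

# Toy 2x2 scene: the left column burns (NIR drops, SWIR2 rises).
pre_nir = np.array([[0.50, 0.50], [0.50, 0.50]])
pre_swir2 = np.array([[0.10, 0.10], [0.10, 0.10]])
post_nir = np.array([[0.20, 0.50], [0.20, 0.50]])
post_swir2 = np.array([[0.40, 0.10], [0.40, 0.10]])
mask = dnbr_burn_mask(pre_nir, pre_swir2, post_nir, post_swir2)
```

In practice such a mask would be computed per pixel on co-registered pre- and post-fire reflectance rasters to label the 10 m burned/unburned training targets.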