Please use this identifier to cite or link to this item: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/8921
Title: Development of Continuous Spatiotemporal Flood Masks using Deep Learning and Remote Sensing
Authors: Singh, Manmeet
Daivajna, Vinay
Dept. of Data Science
20171043
Keywords: Sentinel-1, Dynamic World, Water Mask, U-Net, Logistic Regression, XGBoost, Otsu, F1 Score
Issue Date: May-2024
Citation: 55
Abstract: Sentinel-1 is a satellite carrying a synthetic aperture radar (SAR) instrument. It is an active microwave sensor that provides data at up to 10 m resolution in all weather conditions, day and night, making it well suited for detecting floods. Water bodies appear dark in Sentinel-1 imagery because smooth water surfaces scatter the microwave signal away from the sensor, returning very little energy, which makes it possible to separate water from the background. Satellites operating in the visible range of the electromagnetic spectrum are also well suited for detecting water bodies; however, clouds and shadows block visible light from reaching the satellite, making it difficult to obtain usable imagery during flood events. To address this problem, we chose 20 sites worldwide covering different kinds of terrain, such as large complex rivers, dense urban areas, small rivers, and large lakes, so that water can be detected across all terrain types. We evaluated the performance of Logistic Regression, XGBoost, and U-Net models that take a Sentinel-1 image as input and Dynamic World data (the first 10 m resolution, near-real-time land use/land cover product, developed by Google) as the target for detecting water bodies. Of the models tested, U-Net outperformed all others across the various terrains, with an average F1 score greater than 0.9.
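
Note: The following is a minimal illustrative sketch, not code from the thesis. It shows how a Sentinel-1 backscatter tile could be paired with a Dynamic World water mask and how a simple Otsu-threshold baseline could be scored with the pixel-wise F1 score, reflecting the Otsu and F1 keywords above. The array names, the synthetic data, and the VV/dB convention are assumptions made for illustration only.

# Minimal sketch: Otsu-threshold water detection on a Sentinel-1 backscatter
# tile, evaluated against a Dynamic World water mask with the F1 score.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.metrics import f1_score

# Hypothetical inputs: sigma0_db stands in for a Sentinel-1 VV backscatter
# tile in dB; dw_water stands in for the Dynamic World water mask
# (1 = water, 0 = not water).
rng = np.random.default_rng(0)
sigma0_db = rng.normal(-12.0, 3.0, size=(512, 512))   # synthetic SAR tile
dw_water = (sigma0_db < -16.0).astype(np.uint8)       # synthetic label mask

# Water appears dark (low backscatter), so pixels below the Otsu threshold
# are classified as water.
thresh = threshold_otsu(sigma0_db)
pred_water = (sigma0_db < thresh).astype(np.uint8)

# Pixel-wise F1 score of the baseline prediction against the reference mask.
f1 = f1_score(dw_water.ravel(), pred_water.ravel())
print(f"Otsu threshold: {thresh:.2f} dB, F1: {f1:.3f}")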
URI: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/8921
Appears in Collections: MS THESES

Files in This Item:
File: 20171043_Vinaykumar_Daivajna_MS_Thesis
Description: License Agreement
Size: 4.25 MB
Format: Adobe PDF

