
Record


Released

Conference contribution

Deep learning of tsunami building damage from multimodal physical parameters for real-time damage assessment

Authors

Vescovo, Ruben
IUGG 2023, General Assemblies, 1 General, International Union of Geodesy and Geophysics (IUGG), External Organizations

Mas, Erick
IUGG 2023, General Assemblies, 1 General, International Union of Geodesy and Geophysics (IUGG), External Organizations

Adriano, Bruno
IUGG 2023, General Assemblies, 1 General, International Union of Geodesy and Geophysics (IUGG), External Organizations

Koshimura, Shunichi
IUGG 2023, General Assemblies, 1 General, International Union of Geodesy and Geophysics (IUGG), External Organizations

External resources
No external resources are available
Full texts (open access)
No openly accessible full texts are available in GFZpublic
Supplementary material (open access)
No openly accessible supplementary material is available
Citation

Vescovo, R., Mas, E., Adriano, B., Koshimura, S. (2023): Deep learning of tsunami building damage from multimodal physical parameters for real-time damage assessment, XXVIII General Assembly of the International Union of Geodesy and Geophysics (IUGG) (Berlin 2023).
https://doi.org/10.57757/IUGG23-2256


Citation link: https://gfzpublic.gfz-potsdam.de/pubman/item/item_5018526
Abstract
Tsunami building damage estimates are critical to post-disaster supply logistics and disaster management. Likewise, accurate damage estimates in a digital twin framework (Koshimura et al., 2023) enable more effective responses to disaster emergencies. However, meaningful results from an eventual disaster digital twin are contingent on inputs available in real time. Hence, any practical implementation of a model that estimates tsunami building damage cannot rely on post-disaster data, as current damage detection models do. By embedding physical representations of both the inundated environment and the tsunami, we propose to circumvent the limitations of deep learning models based on optical imagery. We aim to remove the reliance on post-event data at evaluation time, relax the constraints associated with the observation angle, and learn multi-level damage representations based on spatial and geophysical context. Our purpose is to learn these representations by training on physical parameters rather than optical imagery. To this end, we adapt deep learning architectures, originally developed for computer vision tasks, to accept a larger input tensor. We then extend and combine remotely sensed data, such as digital elevation models, land use, and cadastral maps; these additional layers embed the physical and spatial context relevant to tsunami inundation in the input. Finally, we test our modified architecture and benchmark the results against a random forest baseline.
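The abstract's core idea, stacking physical parameters into a multi-channel input tensor and comparing against a random forest baseline, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid size, the layer names (`elevation`, `depth`, `land_use`), and the toy damage labels derived from inundation depth are all assumptions for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical multimodal layers for a small coastal grid (illustrative only):
H, W = 8, 8
rng = np.random.default_rng(0)
elevation = rng.uniform(0, 30, (H, W))            # digital elevation model [m]
depth = np.clip(10.0 - elevation, 0, None)        # toy inundation depth [m]
land_use = rng.integers(0, 4, (H, W))             # categorical land-use codes

# Stack the physical parameters channel-wise into one input tensor of shape
# (C, H, W) -- the kind of enlarged input a vision architecture adapted for
# C channels (rather than 3 RGB channels) would consume.
x = np.stack([elevation, depth, land_use.astype(float)])

# Random forest baseline: each grid cell's channel vector becomes one
# feature row; labels are toy multi-level damage classes binned from depth.
features = x.reshape(x.shape[0], -1).T            # (H*W, C)
labels = np.digitize(depth.ravel(), [0.5, 2.0, 6.0])  # 4 damage levels
rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(features, labels)
print(x.shape, rf.score(features, labels))
```

A per-cell random forest of this kind sees no spatial neighborhood, which is precisely the context a convolutional architecture over the (C, H, W) tensor can exploit.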