Abstract
Human casualties in natural disasters have motivated technological innovations in Search and Rescue (SAR) activities. Difficult access to places where fires, tsunamis, earthquakes, or volcanic eruptions occur has been delaying rescue activities. Thus, technological advances have gradually been finding their purpose in helping to identify the best locations to direct available resources and efforts to improve rescue processes. In this scenario, the use of Unmanned Aerial Vehicles (UAV) and Computer Vision (CV) techniques can be extremely valuable for accelerating SAR activities. However, the computing capabilities of this type of aerial vehicle are scarce, and the time available to make decisions is also relevant when determining the next steps. In this work, we compare different Deep Learning (DL) imaging detectors for human detection in SAR images. A setup with drone-mounted cameras and mobile devices for drone control and image processing is put in place in Ecuador, where volcanic activity is frequent. The main focus is on inference time in DL approaches, given the dynamic environment where decisions must be fast. Results show that a slim version of the YOLOv3 model, while using less computing resources and fewer parameters than the original model, still achieves comparable detection performance and is therefore more appropriate for SAR approaches with limited computing resources.