Abstract
Rising global fire incidents necessitate effective solutions, with forest surveillance emerging as a crucial
strategy. This paper proposes a complete solution that integrates visible- and infrared-spectrum images
captured by Unmanned Aerial Vehicles (UAVs) for enhanced detection of people and vehicles in forest
environments. Unlike existing computer vision models that rely on single-sensor imagery, this approach
overcomes the limitations of restricted spectrum coverage, particularly addressing the challenges posed by
low-light conditions, fog, and smoke. The developed 4-channel model uses both image types simultaneously,
exploiting the strengths of each. This article presents the development and implementation of a solution
for forest monitoring that ranges from the transmission of images captured by a UAV to their analysis by an
object detection model, without human intervention. The model is a modified version of the YOLOv5 (You
Only Look Once) architecture. After the model analyzes the images, the results can be viewed on a web
platform from any device, anywhere in the world. For model training, a dataset of thermal and visible
images from an aerial perspective was captured with a UAV. From this proposal, a new 4-channel model was
created that shows a substantial increase in precision and mAP (Mean Average Precision) compared to
traditional state-of-the-art (SOTA) models that use only red, green, and blue (RGB) images. Alongside the
increase in precision, we confirmed the hypothesis that our model performs better in conditions unfavorable
to RGB imagery, identifying objects in low-light and reduced-visibility situations with partial occlusions.
Training the model on our dataset yielded a significant improvement in performance on aerial-perspective
images. This study introduces a modular system architecture featuring key modules: multisensor image
capture, transmission, processing, analysis, and results presentation. Powered by an innovative object
detection deep-learning model, these components collaborate to enable real-time, efficient, and distributed
forest monitoring across diverse environments.
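The abstract describes fusing a visible RGB image with a thermal image into a single 4-channel input for the modified YOLOv5 model. The paper does not give its exact fusion code; the sketch below is a minimal, hypothetical illustration of the idea, assuming the two modalities are spatially aligned and that the thermal map is simply appended as a fourth channel after normalization (the function name `fuse_rgb_thermal` and the [0, 1] scaling are assumptions, not the authors' implementation).

```python
import numpy as np

def fuse_rgb_thermal(rgb, thermal):
    """Stack an aligned RGB image (H, W, 3) and a single-channel
    thermal image (H, W) into one 4-channel array (H, W, 4),
    a plausible input format for a 4-channel detector."""
    # Normalize both modalities to [0, 1] so neither dominates.
    rgb = rgb.astype(np.float32) / 255.0          # (H, W, 3)
    thermal = thermal.astype(np.float32) / 255.0  # (H, W)
    # Append the thermal map as the fourth channel.
    return np.concatenate([rgb, thermal[..., None]], axis=-1)

# Synthetic example: one 640x640 frame pair.
rgb = np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8)
thermal = np.random.randint(0, 256, (640, 640), dtype=np.uint8)
fused = fuse_rgb_thermal(rgb, thermal)
print(fused.shape)  # (640, 640, 4)
```

A detector consuming this input would only need its first convolutional layer widened from 3 to 4 input channels; the rest of the architecture can remain unchanged.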
Description
Acknowledgments:
This work was financed by national funds through the Portuguese Foundation for Science and Technology (FCT), under the project "DBoidS - Digital twin Boids fire prevention System", Ref. PTDC/CCICOM/2416/2021.
Publisher Policy (Published Version): Institutional Repository - This pathway has an Open Access fee associated with it.
Keywords
Deep learning; Computer vision; Image fusion; Object detection; Real-time; Unmanned Aerial Vehicle
Citation
Marques, T., Carreira, S., Miragaia, R., Ramos, J., & Pereira, A. (2024). Applying deep learning to real-time UAV-based forest monitoring: Leveraging multi-sensor imagery for improved results. Expert Systems with Applications, 245, 123107. https://doi.org/10.1016/j.eswa.2023.123107
Publisher
Elsevier