| Name | Description | Size | Format |
|---|---|---|---|
| | | 10.17 MB | Adobe PDF |
Authors
Abstract(s)
This automotive project addresses the formal verification of deep neural networks (DNNs) applied to autonomous driving, with the aim of guaranteeing the robustness and safety of these models in critical situations. Formal verification is used to evaluate the neural networks and to prevent failures that could result in safety risks, particularly in the context of autonomous vehicles. This approach is essential given the high impact of the decisions neural networks make when classifying the different classes of traffic signs, which is crucial for safe navigation.
Three different convolutional neural network (CNN) models, capable of classifying more than 40 categories of traffic signs, were developed, trained, and tested using the GTSRB dataset, which contains 51,839 image samples. The implementation aimed at a computationally efficient and robust architecture for potential future use in autonomous vehicles; a minimal sketch of such a classifier is given below.
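The thesis lists MATLAB among its keywords, so the sketch below uses the Deep Learning Toolbox. It is only an illustration of a small traffic sign CNN, not the project's actual architectures: the 32×32 RGB input size, the layer widths, the training options, and the dataset path are assumptions.

```matlab
% Minimal sketch of a GTSRB-style traffic sign classifier in MATLAB.
% Assumptions: images resized to 32x32 RGB, 43 output classes,
% one subfolder per class under a hypothetical 'GTSRB/Training' path.
imdsTrain = imageDatastore('GTSRB/Training', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
imdsTrain.ReadFcn = @(f) imresize(imread(f), [32 32]);

layers = [
    imageInputLayer([32 32 3])
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(43)      % one output per traffic sign class
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'MaxEpochs', 10, ...
    'MiniBatchSize', 128, ...
    'Shuffle', 'every-epoch', ...
    'Verbose', false);

net = trainNetwork(imdsTrain, layers, options);
```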
The work also addresses the vulnerability of neural networks to adversarial attacks, such as FGSM and PGD, which can cause traffic sign images to be misclassified and, consequently, compromise the robustness of the models; a sketch of the FGSM step follows.
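To make the attack concrete, here is a hedged MATLAB sketch of the FGSM step, which perturbs an input in the direction of the sign of the loss gradient. The function names, the dlnetwork/dlarray setup, and the [0, 1] pixel range are assumptions rather than the project's exact code; PGD can be viewed as iterating this step with a smaller step size while projecting back into the ε-ball around the original image.

```matlab
function Xadv = fgsmAttack(net, X, T, epsilon)
% FGSM sketch: one gradient-sign step of size epsilon.
% net - dlnetwork classifier
% X   - dlarray image batch in 'SSCB' format, pixels in [0, 1]
% T   - one-hot target labels (same layout as the network output)
    [~, gradX] = dlfeval(@modelGradients, net, X, T);
    Xadv = X + epsilon .* sign(gradX);   % step along the gradient sign
    Xadv = min(max(Xadv, 0), 1);         % clip back to the valid pixel range
end

function [loss, gradX] = modelGradients(net, X, T)
% Loss and input gradient from a single forward/backward pass.
    Y = forward(net, X);
    loss = crossentropy(Y, T);
    gradX = dlgradient(loss, X);
end
```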
To counter this, formal verification methods were applied; these provide mathematical guarantees about the robustness and accuracy of the developed models even when they are exposed to a given set of perturbation values, as formalised below.
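A common way to state the property that such verification methods check is local robustness around a correctly classified image x under an L-infinity perturbation bound ε. The generic formulation below (with f_i(x) the score of class i) is an assumption about the property's shape, not a quote of the exact specification used in the project.

```latex
\forall x' \in \mathbb{R}^{n}: \quad
\lVert x' - x \rVert_{\infty} \le \epsilon
\;\Longrightarrow\;
\arg\max_{i} f_{i}(x') = \arg\max_{i} f_{i}(x)
```

A verifier either proves that this holds for every x' in the ε-ball or returns a concrete counterexample perturbation.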
The results obtained show that, despite the difficulties inherent to the formal verification of DNNs, it is possible to test the robustness of traffic sign classification systems, contributing to greater safety in the field of autonomous driving.
This study contributes to progress in the formal verification of neural networks trained for traffic sign classification, highlighting the importance of continuing to investigate solutions that guarantee safety and reliability in critical systems such as autonomous driving.
Description
Keywords
Formal verification; Convolutional neural network; Autonomous driving; Robustness property; Adversarial attacks; MATLAB