Title: |
Application of Adversarial Attacks on Machine Learning Systems |
Document type: |
printed text |
Authors: |
Sara Israa Zerroug, Author ; Amani Maissoune Mellas ; Yasmine Harbi, Thesis supervisor |
Publisher: |
Setif:UFA |
Publication year: |
2024 |
Extent: |
1 vol. (65 f.) |
Format: |
29 cm |
Languages: |
English (eng) |
Categories: |
Theses & Dissertations: Computer Science
|
Keywords: |
Artificial Intelligence (AI)
Machine Learning (ML)
Adversarial Machine Learning (AML)
Fast Gradient Sign Method (FGSM) |
Decimal index: |
004 - Computer Science |
Abstract: |
Machine Learning (ML), a subset of Artificial Intelligence (AI), has revolutionized various domains by enabling machines to learn from data and make intelligent decisions autonomously. However, the integration of ML into critical applications, such as healthcare, raises concerns about security and robustness. This study investigates the intersection of ML and security through a deep learning-based brain tumor classification model. We develop a model capable of accurately classifying brain tumor types. We then examine the model's vulnerability to adversarial attacks, employing techniques such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Our experiments comprise comprehensive evaluations of model performance metrics, including accuracy, precision, recall, false positive rate (FPR), and F1-score. Through this analysis, we show how perturbation strength and subset size affect the model's susceptibility to attacks. Our findings underscore the importance of robust defense mechanisms in mitigating adversarial threats in ML-driven systems. By elucidating the challenges and opportunities in brain tumor classification using ML techniques, this study contributes to the broader understanding of AI's role in healthcare and of the need to secure ML applications. |
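The FGSM attack named in the abstract perturbs an input by a small step epsilon in the direction of the sign of the loss gradient with respect to that input. As a rough illustration only (not the thesis's implementation, which targets a deep brain-tumor classifier), here is a minimal NumPy sketch on a toy logistic-regression model; the weights, input, and epsilon are invented for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    # For a logistic-regression model p = sigmoid(w . x) with
    # binary cross-entropy loss, the input gradient is dL/dx = (p - y) * w.
    p = sigmoid(w @ x)
    grad = (p - y) * w
    # FGSM step: shift every feature by eps in the sign of the gradient,
    # i.e. in the direction that most increases the loss.
    return x + eps * np.sign(grad)

# Toy example: a clean input correctly classified as class 1.
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 0.0, 1.0])          # w @ x = 2.5, so p ~ 0.92
x_adv = fgsm(x, y=1.0, w=w, eps=0.8)
print(sigmoid(w @ x), sigmoid(w @ x_adv))   # ~0.92 vs ~0.43: the label flips
```

PGD, the other attack evaluated in the thesis, can be viewed as this same step applied iteratively with a smaller epsilon, projecting the result back into an allowed perturbation ball after each iteration.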
Contents note: |
Contents
Table of Contents ii
List of Figures v
List of Tables vii
Table of Abbreviations viii
General Introduction 1
1 State-of-the-art 3
1.1 Introduction 3
1.2 Overview of Machine Learning 3
1.2.1 Types of Data 3
1.2.2 Types of Algorithms 4
1.2.3 Most Popular Algorithms 6
1.2.4 ML Applications 12
1.2.5 ML Challenges 12
1.3 Adversarial Machine Learning 13
1.3.1 Definition 13
1.3.2 Adversarial ML Model 14
1.3.3 Adversarial Attacks Types 15
1.3.4 Adversarial Attacks Methods 18
1.4 Related Work 18
1.5 Conclusion 21
2 Contribution 22
2.1 Introduction 22
2.2 Problem Statement, Objectives, and Hypothesis 22
2.3 Proposed DL-based Healthcare Model 23
2.3.1 Dataset Description 24
2.3.2 Dataset Distribution 25
2.3.3 Data Preprocessing 25
2.3.4 Medical Image Classification 26
2.4 Proposed Adversarial DL-based Healthcare Model 28
2.4.1 Data Preprocessing 28
2.4.2 Evasion Attack Steps 28
2.5 Conclusion 34
3 Experiments and Results 35
3.1 Introduction 35
3.2 Hardware and Software Environments 35
3.2.1 Experiment Setting 37
3.3 Evaluation Metrics 38
3.4 Results and Discussion 39
3.4.1 Proposed DL-based Healthcare Model 39
3.4.2 Proposed Adversarial DL-based Healthcare Model 42
3.5 Comparison to Related Works 58
3.6 Conclusion 59 |
Call number: |
MAI/0887 |
Application of Adversarial Attacks on Machine Learning Systems [printed text] / Sara Israa Zerroug, Author ; Amani Maissoune Mellas ; Yasmine Harbi, Thesis supervisor . - [S.l.] : Setif:UFA, 2024 . - 1 vol. (65 f.) ; 29 cm. Languages: English (eng)