Title:
Security of Machine Learning Systems against Adversarial Settings
Document type:
Printed text
Authors:
Dounia Yessaad Cherif, Author; Asma Bouhidel; Yasmine Harbi, Thesis supervisor
Publisher:
Setif:UFA
Publication year:
2024
Extent:
1 vol. (64 f.)
Format:
29 cm
Languages:
English (eng)
Categories:
Theses & Dissertations: Computer Science
Keywords:
Machine learning
Adversarial attacks
Convolutional neural networks
Healthcare applications
Medical imaging
Dewey class:
004 - Computer Science
Abstract:
Machine learning (ML) has gained widespread adoption across various domains due
to its significant advancements in recent years. It has shown remarkable efficacy in
addressing complex tasks, often approaching or surpassing human-level capabilities.
However, recent research has identified vulnerabilities such as adversarial attacks, which
exploit weaknesses in models to induce erroneous predictions. These attacks pose
substantial risks, particularly in critical domains such as autonomous vehicles and
healthcare, underscoring the necessity for robust defenses to ensure the reliability and
safety of machine learning systems.
This thesis introduces a novel approach to bolster the resilience of convolutional
neural network (CNN) models within the healthcare sector. Specifically, it focuses
on enhancing performance in medical image tasks, including diagnosis, prognosis, and
treatment planning. The study evaluates the robustness of CNN models against adversarial
attacks and proposes effective defense strategies. Techniques such as adversarial
training and the integration of Generative Adversarial Networks (GANs) are employed
to mitigate vulnerabilities, yielding significant improvements in model resilience. The
findings underscore the need for secure and dependable machine learning models in
healthcare, paving the way for safer and more effective applications in clinical settings.
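To make the named defenses concrete, below is a minimal illustrative sketch, in Python with PyTorch, of the two techniques the abstract mentions: adversarial-example generation (here via FGSM, a standard choice, though the thesis does not specify its attack here) and an adversarial-training loop that mixes clean and perturbed batches. The model, data loader, and epsilon value are assumptions for illustration, not the thesis's actual implementation; the GAN-based defense is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: one signed-gradient step that increases
    # the loss, bounded by epsilon in the L-infinity norm.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    # One epoch of adversarial training: the loss averages a clean batch
    # and its FGSM-perturbed counterpart so the model learns both.
    model.train()
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        optimizer.zero_grad()  # clears gradients accumulated by the attack
        loss = 0.5 * (F.cross_entropy(model(x), y)
                      + F.cross_entropy(model(x_adv), y))
        loss.backward()
        optimizer.step()

Averaging the clean and adversarial losses, rather than training on perturbed batches alone, is a common way to retain accuracy on unmodified images while gaining robustness.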
Contents note:
Contents
Table of Contents
List of Figures
List of Tables
Table of Abbreviations
General Introduction
1 State-of-the-art
1.1 Introduction
1.2 Overview of Machine Learning
1.2.1 ML Approaches
1.2.2 ML Workflow
1.2.3 ML Applications
1.2.4 ML Challenges
1.3 Adversarial Machine Learning
1.3.1 Origins of ML Attacks
1.3.2 Attack Surface in ML
1.3.3 AML Defense Methods
1.4 Related Work
1.5 Conclusion
2 Contribution
2.1 Introduction
2.2 Problem Statement, Objectives, and Assumptions
2.3 Proposed Model without Attacks
2.3.1 Dataset Description
2.3.2 Dataset Splitting
2.3.3 Dataset Preprocessing
2.3.4 Medical Image Classification
2.4 Proposed Model with Attacks
2.4.1 Adversarial Examples Generation
2.5 Proposed Model with Attack and Defense
2.5.1 Adversarial Training
2.5.2 Generative Adversarial Network
2.6 Conclusion
3 Experiments and Results
3.1 Hardware and Software Environments
3.2 Experimental Setup
3.3 Evaluation Metrics
3.4 Results and Discussion
3.4.1 Proposed Model without Attacks
3.4.2 Proposed Model with Attack
3.4.3 Proposed Model with Attack and Defense
3.4.4 Comparison to Related Work
3.5 Conclusion
Call number:
MAI/0897
Security of Machine Learning Systems against Adversarial Settings [printed text] / Dounia Yessaad Cherif, Author; Asma Bouhidel; Yasmine Harbi, Thesis supervisor. - [S.l.]: Setif:UFA, 2024. - 1 vol. (64 f.); 29 cm. Languages: English (eng)