Titre : |
Automatic medical decision for diagnosis of infectious diseases based on artificial intelligence approaches |
Type de document : |
document électronique |
Auteurs : |
Aya Messai, Auteur ; Ahlem Drif, Directeur de thèse |
Editeur : |
Sétif:UFA1 |
Année de publication : |
2025 |
Importance : |
1 vol (141 f.) |
Format : |
29 cm |
Langues : |
Anglais (eng) |
Catégories : |
Thèses & Mémoires:Informatique
|
Mots-clés : |
Black box model
Infectious Diseases
Meningitis diagnosis
Artificial Intelligence (AI) in Clinical Diagnostics
Explainable AI (XAI)
Interpretable Diagnosis
Trustworthy AI in Healthcare |
Index. décimale : |
004 - Informatique |
Résumé : |
Infectious diseases present complex diagnostic challenges due to the overlapping clinical
manifestations caused by diverse pathogens. Meningitis, in particular, remains a significant
global health concern due to its high morbidity and mortality, especially when
diagnosis and treatment are delayed. Traditional diagnostic methods often involve invasive
procedures and extensive laboratory testing, which can be time-consuming and
resource-intensive. This Ph.D. research investigates the integration of artificial intelligence
(AI) into the diagnostic process, aiming to enhance accuracy, speed, and interpretability
through the use of explainable AI (XAI) techniques.
The first phase of this study examines cerebrospinal fluid (CSF) biomarker variations
across different age groups—children, adults, and the elderly—within various
types of meningitis. By analyzing these patterns, we aim to improve the understanding
of diagnostic and clinical variations and their implications for treatment strategies.
This analysis establishes a foundational understanding of how biomarkers behave in
different populations and infection contexts.
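The age-stratified biomarker comparison described above can be sketched as a simple grouped summary. This is an illustrative sketch only: the column names, units, and values are invented placeholders, not the thesis dataset.

```python
# Hypothetical sketch of the age-group comparison: summarise CSF biomarkers
# per age band with pandas. Columns and values are invented placeholders.
import pandas as pd

df = pd.DataFrame({
    "age_group":   ["child", "child", "adult", "adult", "elderly", "elderly"],
    "csf_wbc":     [1200, 900, 650, 700, 400, 350],   # cells/µL (illustrative)
    "csf_protein": [1.8, 1.5, 1.1, 1.2, 0.9, 0.8],    # g/L (illustrative)
})

# One row per age group, with mean and range for each biomarker
summary = df.groupby("age_group")[["csf_wbc", "csf_protein"]].agg(["mean", "min", "max"])
```

Such a table makes it easy to read off how a "normal" or "typical" range shifts between children, adults, and the elderly for each meningitis type.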
Our next contribution focuses on diagnosing multiple meningitis types using ensemble
models and SHapley Additive exPlanations (SHAP) to interpret feature importance.
Using data from Setif Hospital (Algeria) and Brazil’s SINAN database, we validated
our findings across diverse populations. Extreme Gradient Boosting achieved strong
performance (accuracy: 0.90, AUROC: 0.94, F1-score: 0.98). SHAP revealed distinct
biomarker profiles such as elevated neutrophils in meningococcal, high lymphocytes
in tuberculous, and neutrophil dominance in H. influenzae meningitis, along with clinically
relevant diagnostic patterns. These results highlight the model’s ability to distinguish
bacterial, viral, and pathogen-specific meningitis, increasing trust in AI-driven
diagnostics.
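The train-then-attribute workflow of this contribution can be sketched in a few lines. The sketch below uses scikit-learn's gradient boosting and permutation importance as a model-agnostic stand-in for the XGBoost + SHAP pair named above (assuming those libraries are unavailable); the feature names and synthetic labels are hypothetical, not the Setif or SINAN data.

```python
# Illustrative sketch only: a gradient-boosted classifier with permutation
# importance standing in for SHAP feature attribution. All names are
# invented placeholders, not the thesis dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["csf_wbc", "neutrophils", "lymphocytes", "csf_glucose"]  # hypothetical
X = rng.normal(size=(400, len(features)))
y = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)   # synthetic, neutrophil-driven label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = sorted(zip(features, result.importances_mean), key=lambda t: -t[1])
```

In the thesis proper, SHAP additionally yields signed, per-patient attributions, which is what allows the pathogen-specific biomarker profiles (e.g. neutrophil- vs. lymphocyte-dominant patterns) to be read off.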
Our third contribution develops specialized models for meningococcal meningitis,
emphasizing local explainability for precise diagnosis. We tested several models on
934 cases, with gradient boosting performing best (accuracy: 0.88, AUROC: 0.93, F1-
score: 0.87). Using XAI tools like ELI5 and LIME, we provided local explanations
that highlighted key diagnostic factors, including Neisseria meningitidis presence, CSF
WBC count, patient age, and neutrophil levels. These insights support clinical trust by
aligning model predictions with medical reasoning.
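The idea behind the LIME-style local explanations used here can be sketched by hand: perturb one patient record, query the black box on the perturbed neighbours, and fit a weighted linear surrogate whose coefficients act as the local explanation. Everything below (feature names, the synthetic black box, the chosen record) is an invented illustration, not the 934-case study.

```python
# LIME-style local surrogate, sketched by hand (no lime package assumed):
# fit a proximity-weighted linear model to the black box around one record
# and read its coefficients as signed local feature effects.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))                   # e.g. csf_wbc, age, neutrophils (hypothetical)
y = (X[:, 0] - X[:, 2] > 0).astype(int)         # synthetic label
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

x0 = np.array([0.1, 0.0, -0.1])                 # the patient to explain (illustrative values)
Z = x0 + rng.normal(scale=0.3, size=(200, 3))   # local perturbations around x0
p = black_box.predict_proba(Z)[:, 1]            # black-box outputs on the neighbourhood
w = np.exp(-np.sum((Z - x0) ** 2, axis=1))      # proximity kernel: nearer points weigh more

surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
local_importance = surrogate.coef_              # signed local feature effects for x0
```

A clinician can then check whether the signs and magnitudes of `local_importance` match medical reasoning for that specific patient, which is exactly the trust argument made above.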
To enhance AI transparency, we introduced a novel explainable approach that integrates
medical expertise into the interpretation of black-box models. Using concept
vector analysis, we assessed the contribution of symptoms and biomarkers in identifying
pneumococcal meningitis. Our deep learning model showed strong performance
(accuracy: 92.23%, F1-score: 92.98%, AUROC: 92.36%) and remained robust in real-world
validation, correctly identifying most cases with high agreement (Cohen’s Kappa:
0.75). Bio-TCAV revealed clinical signs (0.92), medical history (0.79), and CSF aspect
(0.88) as key influences on predictions, while biomarkers had a moderate effect (0.56).
Tests like PCR, culture, LATEX, and bacterioscopy were most influential (TCAV =
1), aligning with their critical role in real-world meningitis diagnosis. Welch’s t-test
confirmed that these differences in TCAV scores were statistically significant. |
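The significance check mentioned above is a standard Welch's t-test (unequal variances). The sketch below writes it out from the formula on synthetic TCAV-like scores; the numbers are illustrative, not the thesis results.

```python
# Welch's t-test, written out from its formula, to check whether a concept's
# TCAV scores differ from a random-concept baseline. Scores are synthetic.
import numpy as np

def welch_t(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    # Welch–Satterthwaite approximation for the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

tcav_concept = [0.91, 0.93, 0.90, 0.94, 0.92]   # e.g. "clinical signs" over repeated runs
tcav_random  = [0.52, 0.48, 0.55, 0.50, 0.47]   # random-concept baseline runs
t, df = welch_t(tcav_concept, tcav_random)
```

A large |t| at the resulting degrees of freedom means the concept's influence on the model is unlikely to be noise, which is how the TCAV scores reported above are validated.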
Note de contenu : |
Sommaire
List of Tables  vii
1 Explainable AI background: Fundamental theories and literature review  7
1.1 Introduction  7
1.2 Explainable Artificial Intelligence (XAI)  8
1.2.1 Making AI understandable to end users  8
1.2.2 Where is XAI crucial?  10
1.2.3 What is “Easily Interpretable”?  12
1.2.4 Performance and interpretability trade-off  13
1.2.5 Interpretability metrics  13
1.2.6 Explainability vs. Interpretability  14
1.2.7 Model transparency: White Box vs. Black Box  15
1.3 Explainable Artificial Intelligence (XAI): taxonomy and methods  16
1.3.1 Introduction  16
1.3.2 Ante-Hoc vs. Post-Hoc Interpretability  16
1.3.3 Global vs. Local Explainability  22
1.3.4 Model-Agnostic vs. Model-Specific Methods  23
1.4 Properties of explanation  24
1.5 Categories of explanation  25
1.6 XAI model for infectious diseases diagnosis  27
1.6.1 Advancements in clinical decision support systems for diagnosing Meningitis  27
1.6.2 Models Explainability  29
1.7 Conclusion  34
2 Comprehensive review of infectious diseases  35
2.1 Introduction  35
2.2 Infectious causes  36
2.2.1 Biologic Characteristics of the organism  38
2.2.2 Quantification of infectious diseases  40
2.3 Temporal patterns of infectious diseases  41
2.4 Central Nervous System infections  42
2.4.1 Meningitis  43
2.4.2 Viral Meningitis  43
2.4.2.1 Enteroviruses/parechoviruses  43
2.4.2.2 Herpes Viruses  44
2.4.2.3 Arboviruses  44
2.4.2.4 Other Viruses  45
2.4.3 Bacterial Meningitis  46
2.4.3.1 Epidemiology  46
2.4.4 Differentiation between bacterial and viral Meningitis  47
2.4.5 Clinical presentation  48
2.4.6 Diagnostic tests  49
2.5 A comprehensive investigation into the ranges of laboratory tests present in cerebrospinal fluid across various types of meningitis within different age categories  50
2.5.1 Materials and methods  51
2.5.2 Results  52
2.5.3 Discussion  59
2.6 AI in Healthcare  62
2.6.1 Justifying decisions  63
2.6.2 Explainability  64
2.7 Conclusion  65
3 Towards XAI agnostic explainability to assess differential diagnosis for Meningitis diseases  66
3.1 Introduction  66
3.2 Method  68
3.2.1 Data preparation: Study case  68
3.2.2 Data preprocessing  73
3.2.3 Models investigation  74
3.2.4 Model-agnostic explainability  75
3.3 Results  76
3.3.1 Model validation  76
3.3.2 XGBoost global interpretability  78
3.3.3 Features impact on the Meningitis diagnosis outcome  80
3.3.4 Influence of Neutrophil and Lymphocyte Levels on Meningitis Predictions  85
3.4 Discussion  87
3.5 Conclusion  88
3.6 Future directions  89
4 Transparent AI Models for Meningococcal Meningitis Diagnosis: Evaluating Interpretability and Performance Metrics  90
4.1 Introduction  90
4.2 Methodology  91
4.3 Experiment and Results  95
4.4 Discussion and conclusion  102
5 Does AI model resonate like a medical expert?: A novel concept-based model explanations for Meningitis diagnosis  104
5.1 Introduction  104
5.2 The proposed methodology  105
5.2.1 Domain knowledge-based feature selection  106
5.2.2 Features engineering and concept definition  106
5.2.3 Model implementation  108
5.2.4 Bio-TCAV explanation approach for diagnosis  108
5.2.4.1 Activation extraction  108
5.2.4.2 Concept classifier training  109
5.2.4.3 Concept Activation Vectors (CAVs)  109
5.2.4.4 Reliability and statistical significance  109
5.3 Experimental setting and results  111
5.3.1 Data analysis exploration  112
5.3.2 Results and discussion  113
5.4 Conclusion  117 |
Côte titre : |
DI/0089 |
Automatic medical decision for diagnosis of infectious diseases based on artificial intelligence approaches [document électronique] / Aya Messai, Auteur ; Ahlem Drif, Directeur de thèse . - [S.l.] : Sétif:UFA1, 2025 . - 1 vol (141 f.) ; 29 cm. Langues : Anglais (eng)