Title: |
Study of deep learning convergence |
Document type: |
printed text |
Authors: |
Benkhelifa, Radia, Author; Djaghloul, H., Thesis supervisor |
Publisher: |
Setif:UFA |
Publication year: |
2019 |
Extent: |
1 vol. (49 f.) |
Format: |
29 cm |
Languages: |
French (fre) |
Categories: |
Theses & Dissertations:Mathematics |
Keywords: |
Mathematics |
Dewey classification: |
510 Mathematics |
Abstract: |
This project allowed us to study deep learning methods and how to improve
their performance by tuning different parameters of the training process,
from the learning step through to prediction. Although deep learning has
gained great popularity in various application domains and is more widely
used today than ever, it still faces obstacles, including the relatively
long parametrization time during the learning steps and the difficulty of
selecting the best network architecture for a given problem type, which
remains an open question.
Deep learning is a family of self-learning algorithms based on artificial
neural networks. It comprises a large set of techniques, such as deep neural
networks, deep belief networks, recurrent neural networks, and convolutional
neural networks, used in fields as diverse as computer vision, speech
recognition, natural language processing, audio recognition, social network
filtering, machine translation, and drug design, a breadth that has earned
it great attention.
To improve performance, we experimented in practice with different
frameworks and toolkits, each with its own paradigm and abstraction level,
and varied specific training settings such as the batch size. We also
reviewed theoretical and practical studies covering the range of tools and
criteria used in the improvement process. We concluded that a network's
performance is strongly correlated with the amount of information used and
the quality of the data provided. Despite all these attempts, deep learning
remains a field in rapid progress: we cannot generalize our results or
pinpoint the specific engineering choices that guarantee gains during
improvement.
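As an illustration of the batch-size experiments described above, the
following is a minimal sketch, assuming Keras on the TensorFlow backend and
the MNIST digits as a stand-in dataset (the thesis's own data, models, and
code are not reproduced here). An identical small convolutional network is
trained with several batch sizes and the final validation accuracies are
compared.

from tensorflow import keras
from tensorflow.keras import layers

# Load and normalise MNIST; any small labelled dataset would do.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

def build_model():
    # A deliberately simple convolutional network, so that differences
    # between runs reflect the batch size rather than the architecture.
    model = keras.Sequential([
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Train a fresh copy of the same model with several batch sizes and
# report the last-epoch validation accuracy for each.
for batch_size in (32, 128, 512):
    history = build_model().fit(
        x_train, y_train, epochs=3, batch_size=batch_size,
        validation_data=(x_test, y_test), verbose=0)
    print(batch_size, history.history["val_accuracy"][-1])

Larger batches typically make each epoch cheaper per example but can settle
at a slightly worse validation accuracy, which is the kind of trade-off such
an experiment exposes.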
|
Contents note: |
Contents
General Introduction
1 Chapter one: Deep learning for convolutional neural networks
1.1 Deep learning
1.1.1 Definition
1.1.2 Why deep learning
1.1.3 Deep learning architecture
1.1.4 Some deep learning applications and how they work
1.1.5 Examples of deep learning at work
1.2 Neural networks
1.2.1 Definition
1.2.2 Why neural networks
1.2.3 Neural network types
1.2.4 Neural network tasks
1.2.5 Neural network applications
1.3 Convolutional neural networks
1.3.1 Definition
1.3.2 Layers
1.3.3 Architectures
2 Chapter two: Convergence and digital performance
2.1 Convergence theorems
2.1.1 Theorem 1
2.1.2 Theorem 2
2.1.3 Theorem 3
2.2 Back-propagation
2.3 Genetic algorithms
2.4 Combining genetic algorithms and back-propagation: the GA-BP method
2.5 The K-Means algorithm as a gradient descent
2.6 Learning Vector Quantization
2.6.1 A review of the LVQ algorithm
2.6.2 Convergence of the LVQ algorithm
3 Chapter three: Experimental study of deep learning convergence
3.1 Introduction
3.2 Implementation frameworks
3.2.1 Theano
3.2.2 TensorFlow
3.2.3 Keras
3.2.4 Frameworks comparison
3.2.5 Impact of the framework on problem diagnosis
3.3 Better deep learning networks
3.3.1 Improvement problems
3.3.2 Improvement techniques
3.4 Batch size
Conclusion
Bibliography
List of |
Call number: |
MAM/0374 |
Online: |
https://drive.google.com/file/d/1BKXlUiJoeLfKxladvrfLp_mWUJmY_olN/view?usp=shari [...] |
Electronic resource format: |
pdf |
Study of deep learning convergence [printed text] / Benkhelifa, Radia, Author; Djaghloul, H., Thesis supervisor. - [S.l.]: Setif:UFA, 2019. - 1 vol. (49 f.); 29 cm. Languages: French (fre)