Introduction to the Program

You will apply the most innovative Deep Learning techniques to your projects thanks to this 100% online Professional master’s degree"

##IMAGE##

TensorFlow has become the most important tool for implementing and training Deep Learning models. Developers rely on its wide variety of tools and libraries to build specialized models that perform automatic object detection, classification and natural language processing tasks. Along the same lines, the platform is useful for detecting anomalies in data, which is essential in areas such as cybersecurity, predictive maintenance and quality control. However, its use can pose a number of challenges for professionals, including the selection of the appropriate neural network architecture.
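
By way of illustration only (this is not official course material), the kind of model this ecosystem makes possible can be sketched in a few lines of Keras. The data shapes and the three-class setup below are hypothetical placeholders:

```python
# Minimal sketch: a small Keras classifier of the kind this program builds on.
# Hypothetical data shapes; replace with a real dataset.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32")   # 1,000 samples, 20 features
y = np.random.randint(0, 3, size=(1000,))        # 3 hypothetical classes

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2)
```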

Faced with this situation, TECH has designed a Professional master’s degree that will provide experts with a comprehensive approach to Deep Learning. Developed by specialists in the field, the curriculum delves into the mathematical foundations and principles of Deep Learning. This will enable graduates to build Neural Networks aimed at information processing involving pattern recognition, decision making and learning from data. The syllabus will also delve deeper into Reinforcement Learning, taking into account factors such as reward optimization and policy search. In addition, the teaching materials will cover advanced optimization techniques and the visualization of results.

As for the format, this university degree is delivered through a 100% online methodology so that graduates can complete the program with ease. To access the academic content, they will only need an electronic device with an Internet connection, since schedules and evaluation timelines are planned on an individual basis. In addition, the syllabus is supported by the innovative Relearning teaching system, of which TECH is a pioneer. This system is based on the reiteration of key concepts to guarantee their mastery.

Study through innovative multimedia didactic formats that will optimize the process of updating your Deep Learning knowledge"

This Professional master’s degree in Deep Learning contains the most complete and up-to-date program on the market. The most important features include:

  • Practical case studies presented by experts in Data Engineering and Data Science
  • Graphic, schematic and eminently practical contents that provide technical and practical information on the disciplines essential for professional practice
  • Practical exercises where the self-assessment process can be used to improve learning
  • Its special emphasis on innovative methodologies 
  • Theoretical lessons, questions to the expert, debate forums on controversial topics, and individual reflection assignments
  • Content that is accessible from any fixed or portable device with an Internet connection

Looking to enrich your practice with the most advanced gradient optimization techniques? Achieve it with this program in just 12 months"

The program’s teaching staff includes professionals from the field who contribute their work experience to this educational program, as well as renowned specialists from leading societies and prestigious universities. 

The multimedia content, developed with the latest educational technology, will provide the professional with situated and contextual learning, i.e., a simulated environment that will provide immersive education designed to train in real situations.

This program is designed around Problem-Based Learning, whereby the professional must try to solve the different professional practice situations that arise during the academic year. For this purpose, students will be assisted by an innovative interactive video system created by renowned and experienced experts.

You will delve into the Backward Pass to calculate the gradients of the loss function with respect to the parameters of the network"

##IMAGE##

Thanks to the Relearning methodology, you will be free to plan both your study schedules and educational timelines"

Syllabus

This Professional master’s degree will offer students a wide range of Deep Learning techniques that will take their professional horizons to the next level. To achieve this, the academic path will delve into the coding of Deep Learning models, so that graduates can effectively translate deep neural network algorithms and architectures into working code. In addition, the course will cover in detail the training of deep neural networks, as well as the visualization of results and the evaluation of learning models. Students will also analyze the main Transformer models in order to use them to generate automatic translations.

##IMAGE##

You will apply Deep Learning principles to your projects to solve a variety of complex problems in fields such as image recognition"

Module 1. Mathematical Basis of Deep Learning

1.1. Functions and Derivatives

1.1.1. Linear Functions
1.1.2. Partial Derivative
1.1.3. Higher Order Derivatives

1.2. Multiple Nested Functions

1.2.1. Compound Functions
1.2.2. Inverse Functions
1.2.3. Recursive Functions

1.3. Chain Rule

1.3.1. Derivatives of Nested Functions
1.3.2. Derivatives of Compound Functions
1.3.3. Derivatives of Inverse Functions

1.4. Functions with Multiple Inputs

1.4.1. Multi-variable Functions
1.4.2. Vectorial Functions
1.4.3. Matrix Functions

1.5. Derivatives of Functions with Multiple Inputs

1.5.1. Partial Derivative
1.5.2. Directional Derivatives
1.5.3. Mixed Derivatives

1.6. Functions with Multiple Vector Inputs

1.6.1. Linear Vector Functions
1.6.2. Non-linear Vector Functions
1.6.3. Matrix Vector Functions

1.7. Creating New Functions from Existing Functions

1.7.1. Addition of Functions
1.7.2. Product of Functions
1.7.3. Composition of Functions

1.8. Derivatives of Functions with Multiple Vector Entries

1.8.1. Derivatives of Linear Functions
1.8.2. Derivatives of Nonlinear Functions
1.8.3. Derivatives of Compound Functions

1.9. Vector Functions and their Derivatives: A Step Further

1.9.1. Directional Derivatives
1.9.2. Mixed Derivatives
1.9.3. Matrix Derivatives

1.10. The Backward Pass

1.10.1. Error Propagation
1.10.2. Application of Update Rules
1.10.3. Parameter Optimization
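
As a brief taste of what Module 1 covers, the following sketch (an illustration of ours, with an arbitrarily chosen nested function) checks the hand-derived chain rule against the gradient TensorFlow computes in its backward pass:

```python
# Chain rule check: the gradient TensorFlow computes in the backward pass
# matches the hand-derived derivative of a nested function.
# f(x) = sin(x^2)  =>  f'(x) = 2x * cos(x^2)
import numpy as np
import tensorflow as tf

x = tf.Variable(1.5)
with tf.GradientTape() as tape:
    y = tf.sin(x ** 2)                         # compound function
grad_autodiff = tape.gradient(y, x)            # backward pass
grad_manual = 2.0 * 1.5 * np.cos(1.5 ** 2)     # chain rule by hand
print(float(grad_autodiff), grad_manual)       # both ≈ -1.88
```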

Module 2. Deep Learning Principles

2.1. Supervised Learning

2.1.1. Supervised Learning Machines
2.1.2. Uses of Supervised Learning
2.1.3. Differences between Supervised and Unsupervised Learning

2.2. Supervised Learning Models

2.2.1. Linear Models
2.2.2. Decision Tree Models
2.2.3. Neural Network Models

2.3. Linear Regression

2.3.1. Simple Linear Regression
2.3.2. Multiple Linear Regression
2.3.3. Regression Analysis

2.4. Model Training

2.4.1. Batch Learning
2.4.2. Online Learning
2.4.3. Optimization Methods

2.5. Model Evaluation: Training Set vs. Test Set

2.5.1. Evaluation Metrics
2.5.2. Cross Validation
2.5.3. Comparison of Data Sets

2.6. Model Evaluation: The Code

2.6.1. Prediction Generation
2.6.2. Error Analysis
2.6.3. Evaluation Metrics

2.7. Variable Analysis

2.7.1. Identification of Relevant Variables
2.7.2. Correlation Analysis
2.7.3. Regression Analysis

2.8. Explainability of Neural Network Models

2.8.1. Interpretable Models
2.8.2. Visualization Methods
2.8.3. Evaluation Methods

2.9. Optimization

2.9.1. Optimization Methods
2.9.2. Regularization Techniques
2.9.3. The Use of Graphs

2.10. Hyperparameters

2.10.1. Selection of Hyperparameters
2.10.2. Parameter Search
2.10.3. Hyperparameter Tuning
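
To make the themes of Module 2 concrete, here is a minimal sketch of ours, assuming synthetic data with known weights: a linear model is fitted by least squares on a training set and evaluated with MSE on held-out data:

```python
# Module 2 in miniature: fit a linear model, then evaluate it on unseen data.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 3.0 + rng.normal(scale=0.1, size=200)   # y = Xw + b + noise

# Train/test split: evaluate on data the model has never seen
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

Xb = np.c_[np.ones(len(X_train)), X_train]               # prepend bias column
w = np.linalg.lstsq(Xb, y_train, rcond=None)[0]          # least-squares fit

pred = np.c_[np.ones(len(X_test)), X_test] @ w
mse = np.mean((pred - y_test) ** 2)                      # evaluation metric
print("bias + weights:", w, "test MSE:", mse)
```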

Module 3. Neural Networks, the Basis of Deep Learning

3.1. Deep Learning

3.1.1. Types of Deep Learning
3.1.2. Applications of Deep Learning
3.1.3. Advantages and Disadvantages of Deep Learning

3.2. Operations

3.2.1. Sum
3.2.2. Product
3.2.3. Transfer

3.3. Layers

3.3.1. Input Layer
3.3.2. Hidden Layer
3.3.3. Output Layer

3.4. Union of Layers and Operations

3.4.1. Architecture Design
3.4.2. Connection between Layers
3.4.3. Forward Propagation

3.5. Construction of the First Neural Network

3.5.1. Network Design
3.5.2. Setting the Weights
3.5.3. Network Training

3.6. Trainer and Optimizer

3.6.1. Optimizer Selection
3.6.2. Establishment of a Loss Function
3.6.3. Establishing a Metric

3.7. Application of the Principles of Neural Networks

3.7.1. Activation Functions
3.7.2. Backward Propagation
3.7.3. Parameter Adjustment

3.8. From Biological to Artificial Neurons

3.8.1. Functioning of a Biological Neuron
3.8.2. Transfer of Knowledge to Artificial Neurons
3.8.3. Establishing Relations between the Two

3.9. Implementation of MLP (Multilayer Perceptron) with Keras

3.9.1. Definition of the Network Structure
3.9.2. Model Compilation
3.9.3. Model Training

3.10. Fine-Tuning the Hyperparameters of Neural Networks

3.10.1. Selection of the Activation Function
3.10.2. Set the Learning Rate
3.10.3. Adjustment of Weights
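
A minimal sketch of the workflow in section 3.9, using random placeholder data in place of a real dataset: define the MLP structure, compile it, and train it with Keras:

```python
# Module 3, section 3.9 in miniature: define, compile and train an MLP.
import numpy as np
import tensorflow as tf

X = np.random.rand(512, 784).astype("float32")   # e.g. flattened 28x28 images
y = np.random.randint(0, 10, size=(512,))

mlp = tf.keras.Sequential([                          # 3.9.1 network structure
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"), # output layer
])
mlp.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])                    # 3.9.2 compilation
mlp.fit(X, y, epochs=3, batch_size=32)               # 3.9.3 training
```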

Module 4. Deep Neural Networks Training

4.1. Gradient Problems

4.1.1. Gradient Optimization Techniques
4.1.2. Stochastic Gradients
4.1.3. Weight Initialization Techniques

4.2. Reuse of Pre-Trained Layers

4.2.1. Transfer Learning Training
4.2.2. Feature Extraction
4.2.3. Deep Learning

4.3. Optimizers

4.3.1. Stochastic Gradient Descent Optimizers
4.3.2. Adam and RMSprop Optimizers
4.3.3. Momentum Optimizers

4.4. Learning Rate Scheduling

4.4.1. Automatic Learning Rate Control
4.4.2. Learning Cycles
4.4.3. Smoothing Terms

4.5. Overfitting

4.5.1. Cross Validation
4.5.2. Regularization
4.5.3. Evaluation Metrics

4.6. Practical Guidelines

4.6.1. Model Design
4.6.2. Selection of Metrics and Evaluation Parameters
4.6.3. Hypothesis Testing

4.7. Transfer Learning

4.7.1. Transfer Learning Training
4.7.2. Feature Extraction
4.7.3. Deep Learning

4.8. Data Augmentation

4.8.1. Image Transformations
4.8.2. Synthetic Data Generation
4.8.3. Text Transformation

4.9. Practical Application of Transfer Learning

4.9.1. Transfer Learning Training
4.9.2. Feature Extraction
4.9.3. Deep Learning

4.10. Regularization

4.10.1. L1 and L2
4.10.2. Max-Norm Regularization
4.10.3. Dropout
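
The techniques of Module 4 combine naturally in a single Keras model. The following sketch (with hypothetical layer sizes and coefficients) applies L2 regularization, Dropout, the Adam optimizer and an exponential learning-rate schedule:

```python
# Module 4 techniques in one place: L2 penalty, Dropout, Adam, LR scheduling.
import tensorflow as tf

l2 = tf.keras.regularizers.l2(1e-4)                  # 4.10.1 L2 penalty
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=l2, input_shape=(20,)),
    tf.keras.layers.Dropout(0.3),                    # 4.10.3 Dropout
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

schedule = tf.keras.optimizers.schedules.ExponentialDecay(   # 4.4 scheduling
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=schedule),
              loss="binary_crossentropy", metrics=["accuracy"])  # 4.3.2 Adam
```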

Module 5. Model Customization and Training with TensorFlow

5.1. TensorFlow

5.1.1. Using the TensorFlow Library
5.1.2. Model Training with TensorFlow
5.1.3. Operations with Graphs in TensorFlow

5.2. TensorFlow and NumPy

5.2.1. NumPy Computational Environment for TensorFlow
5.2.2. Using NumPy Arrays with TensorFlow
5.2.3. NumPy Operations for TensorFlow Graphs

5.3. Model Customization and Training Algorithms

5.3.1. Building Custom Models with TensorFlow
5.3.2. Management of Training Parameters
5.3.3. Use of Optimization Techniques for Training

5.4. TensorFlow Functions and Graphs

5.4.1. Functions with TensorFlow
5.4.2. Use of Graphs for Model Training
5.4.3. Optimization of Graphs with TensorFlow Operations

5.5. Data Loading and Preprocessing with TensorFlow

5.5.1. Loading of Datasets with TensorFlow
5.5.2. Data Preprocessing with TensorFlow
5.5.3. Using TensorFlow Tools for Data Manipulation

5.6. The tf.data API

5.6.1. Using the tf.data API for Data Processing
5.6.2. Constructing Data Streams with tf.data
5.6.3. Use of the tf.data API for Training Models

5.7. The TFRecord Format

5.7.1. Using the TFRecord Format for Data Serialization
5.7.2. Loading TFRecord Files with TensorFlow
5.7.3. Using TFRecord Files for Training Models

5.8. Keras Preprocessing Layers

5.8.1. Using the Keras Preprocessing API
5.8.2. Construction of Preprocessing Pipelines with Keras
5.8.3. Using the Keras Preprocessing API for Model Training

5.9. The TensorFlow Datasets Project

5.9.1. Using TensorFlow Datasets for Data Loading
5.9.2. Data Preprocessing with TensorFlow Datasets
5.9.3. Using TensorFlow Datasets for Model Training

5.10. Construction of a Deep Learning Application with TensorFlow. Practical Application

5.10.1. Building a Deep Learning App with TensorFlow
5.10.2. Training a Model with TensorFlow
5.10.3. Use of the Application for the Prediction of Results
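
As an unofficial illustration of Module 5, this sketch feeds a tf.data pipeline into a custom training step built with tf.GradientTape and compiled into a graph with @tf.function; the tensors are random stand-ins for real data:

```python
# Module 5 sketch: tf.data pipeline + custom training step as a graph.
import tensorflow as tf

X = tf.random.normal((256, 10))
y = tf.random.uniform((256,), maxval=2, dtype=tf.int32)

ds = (tf.data.Dataset.from_tensor_slices((X, y))     # 5.6 the tf.data API
      .shuffle(256).batch(32).prefetch(tf.data.AUTOTUNE))

model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
opt = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function                                         # 5.4 functions and graphs
def train_step(xb, yb):
    with tf.GradientTape() as tape:
        loss = loss_fn(yb, model(xb, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for xb, yb in ds:
    loss = train_step(xb, yb)
```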

Module 6. Deep Computer Vision with Convolutional Neural Networks

6.1. The Visual Cortex Architecture

6.1.1. Functions of the Visual Cortex
6.1.2. Theories of Computational Vision
6.1.3. Models of Image Processing

6.2. Convolutional Layers

6.2.1. Reuse of Weights in Convolution
6.2.2. 2D convolution
6.2.3. Activation Functions

6.3. Pooling Layers and Implementation of Pooling Layers with Keras

6.3.1. Pooling and Striding
6.3.2. Flattening
6.3.3. Types of Pooling

6.4. CNN Architecture

6.4.1. VGG Architecture
6.4.2. AlexNet Architecture
6.4.3. ResNet Architecture

6.5. Implementation of a ResNet-34 CNN using Keras

6.5.1. Weight Initialization
6.5.2. Input Layer Definition
6.5.3. Output Definition

6.6. Use of Pre-trained Keras Models

6.6.1. Characteristics of Pre-trained Models
6.6.2. Uses of Pre-trained Models
6.6.3. Advantages of Pre-trained Models

6.7. Pre-trained Models for Transfer Learning

6.7.1. Transfer Learning
6.7.2. Transfer Learning Process
6.7.3. Advantages of Transfer Learning

6.8. Classification and Localization in Deep Computer Vision

6.8.1. Image Classification
6.8.2. Localization of Objects in Images
6.8.3. Object Detection

6.9. Object Detection and Object Tracking

6.9.1. Object Detection Methods
6.9.2. Object Tracking Algorithms
6.9.3. Tracking and Localization Techniques

6.10. Semantic Segmentation

6.10.1. Deep Learning for Semantic Segmentation
6.10.2. Edge Detection
6.10.3. Rule-based Segmentation Methods
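
A possible transfer-learning sketch in the spirit of Module 6: Keras ships ResNet50 as a pre-trained application (the module itself builds a ResNet-34 by hand), so here its frozen convolutional base is reused under a new classification head with a hypothetical five-class output:

```python
# Module 6 sketch: transfer learning with a pre-trained Keras application.
# ImageNet weights are downloaded on first use; only the head is trained.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet",
                                      include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                           # 6.7 freeze pre-trained layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),        # 6.3 pooling
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 hypothetical classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```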

Module 7. Processing Sequences using RNN (Recurrent Neural Networks) and CNN (Convolutional Neural Networks)

7.1. Recurrent Neurons and Layers

7.1.1. Types of Recurrent Neurons
7.1.2. Architecture of a Recurrent Layer
7.1.3. Applications of Recurrent Layers

7.2. Recurrent Neural Network (RNN) Training

7.2.1. Backpropagation Through Time (BPTT)
7.2.2. Stochastic Gradient Descent
7.2.3. Regularization in RNN Training

7.3. Evaluation of RNN Models

7.3.1. Evaluation Metrics
7.3.2. Cross Validation
7.3.3. Hyperparameter Tuning

7.4. Pre-trained RNNs

7.4.1. Pre-trained Networks
7.4.2. Transfer of Learning
7.4.3. Fine Tuning

7.5. Forecasting a Time Series

7.5.1. Statistical Models for Forecasting
7.5.2. Time Series Models
7.5.3. Models based on Neural Networks

7.6. Interpretation of Time Series Analysis Results

7.6.1. Principal Component Analysis
7.6.2. Cluster Analysis
7.6.3. Correlation Analysis

7.7. Handling of Long Sequences

7.7.1. Long Short-Term Memory (LSTM)
7.7.2. Gated Recurrent Units (GRU)
7.7.3. 1D Convolutional Layers

7.8. Partial Sequence Learning

7.8.1. Deep Learning Methods
7.8.2. Generative Models
7.8.3. Reinforcement Learning

7.9. Practical Application of RNN and CNN

7.9.1. Natural Language Processing
7.9.2. Pattern Recognition
7.9.3. Computer Vision

7.10. Differences from Classical Methods

7.10.1. Classical vs. RNN Methods
7.10.2. Classical vs. CNN Methods
7.10.3. Difference in Training Time
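
To illustrate Module 7, a minimal LSTM forecaster of ours trained on a synthetic sine wave; the window length and layer size are arbitrary illustrative choices:

```python
# Module 7 sketch: an LSTM forecasting the next value of a sine wave.
import numpy as np
import tensorflow as tf

t = np.arange(0, 100, 0.1)
series = np.sin(t).astype("float32")
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]                              # next-step target
X = X[..., np.newaxis]                           # (samples, timesteps, 1)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),   # 7.7.1 LSTM
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print("forecast:", model.predict(X[-1:], verbose=0)[0, 0])
```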

Module 8. Natural Language Processing (NLP) with Recurrent Neural Networks (RNN) and Attention

8.1. Text Generation Using RNN

8.1.1. Training an RNN for Text Generation
8.1.2. Natural Language Generation with RNN
8.1.3. Text Generation Applications with RNN

8.2. Training Data Set Creation

8.2.1. Preparation of the Data for Training an RNN
8.2.2. Storage of the Training Dataset
8.2.3. Data Cleaning and Transformation

8.3. Sentiment Analysis

8.3.1. Classification of Opinions with RNN
8.3.2. Detection of Themes in Comments
8.3.3. Sentiment Analysis with Deep Learning Algorithms

8.4. Encoder-decoder Network for Neural Machine Translation

8.4.1. Training an RNN for Machine Translation
8.4.2. Use of an Encoder-decoder Network for Machine Translation
8.4.3. Improving the Accuracy of Machine Translation with RNNs

8.5. Attention Mechanisms

8.5.1. Application of Attention Mechanisms in RNN
8.5.2. Use of Attention Mechanisms to Improve Model Accuracy
8.5.3. Advantages of Attention Mechanisms in Neural Networks

8.6. Transformer Models

8.6.1. Using Transformer Models for Natural Language Processing
8.6.2. Application of Transformer Models for Vision
8.6.3. Advantages of Transformer Models

8.7. Transformers for Vision

8.7.1. Use of Transformer Models for Vision
8.7.2. Image Data Preprocessing
8.7.3. Training of a Transformer Model for Vision

8.8. Hugging Face Transformer Library

8.8.1. Using the Hugging Face Transformers Library
8.8.2. Application of the Hugging Face Transformers Library
8.8.3. Advantages of the Hugging Face Transformers Library

8.9. Other Transformers Libraries. Comparison

8.9.1. Comparison between Different Transformers Libraries
8.9.2. Use of Other Transformers Libraries
8.9.3. Advantages of Other Transformers Libraries

8.10. Development of an NLP Application with RNN and Attention. Practical Application

8.10.1. Development of a Natural Language Processing Application with RNN and Attention
8.10.2. Use of RNN, Attention Mechanisms and Transformer Models in the Application
8.10.3. Evaluation of the Practical Application
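
A minimal example combining sections 8.3 and 8.8, using the Hugging Face Transformers library (install it with pip install transformers; the pipeline downloads a default pre-trained model on first use):

```python
# Module 8 sketch: sentiment analysis with Hugging Face Transformers.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("This master's degree exceeded my expectations."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```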

Module 9. Autoencoders, GANs, and Diffusion Models

9.1. Efficient Data Representation

9.1.1. Dimensionality Reduction
9.1.2. Deep Learning
9.1.3. Compact Representations

9.2. Performing PCA with an Undercomplete Linear Autoencoder

9.2.1. Training Process
9.2.2. Implementation in Python
9.2.3. Use of Test Data

9.3. Stacked Autoencoders

9.3.1. Deep Neural Networks
9.3.2. Construction of Coding Architectures
9.3.3. Use of Regularization

9.4. Convolutional Autoencoders

9.4.1. Design of Convolutional Models
9.4.2. Convolutional Model Training
9.4.3. Results Evaluation

9.5. Denoising Autoencoders

9.5.1. Application of Filters
9.5.2. Design of Coding Models
9.5.3. Use of Regularization Techniques

9.6. Sparse Autoencoders

9.6.1. Increasing Coding Efficiency
9.6.2. Minimizing the Number of Parameters
9.6.3. Using Regularization Techniques

9.7. Variational Autoencoders

9.7.1. Use of Variational Optimization
9.7.2. Unsupervised Deep Learning
9.7.3. Deep Latent Representations

9.8. Generation of Fashion MNIST Images

9.8.1. Pattern Recognition
9.8.2. Image Generation
9.8.3. Deep Neural Networks Training

9.9. Generative Adversarial Networks and Diffusion Models

9.9.1. Content Generation from Images
9.9.2. Modeling of Data Distributions
9.9.3. Use of Adversarial Networks

9.10. Implementation of the Models. Practical Application

9.10.1. Implementation of the Models
9.10.2. Use of Real Data
9.10.3. Results Evaluation
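
As an unofficial sketch of Module 9, a small stacked autoencoder that compresses Fashion MNIST images (9.3, 9.8) to a 30-dimensional latent code and learns by reconstructing its own input; the layer sizes are illustrative:

```python
# Module 9 sketch: stacked autoencoder with a compact latent representation.
import tensorflow as tf

(X_train, _), _ = tf.keras.datasets.fashion_mnist.load_data()
X_train = X_train.astype("float32") / 255.0

encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(30, activation="relu"),    # compact representation
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="relu", input_shape=(30,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(X_train, X_train, epochs=3, batch_size=64)  # reconstruct input
```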

Module 10. Reinforcement Learning

10.1. Optimization of Rewards and Policy Search

10.1.1. Reward Optimization Algorithms
10.1.2. Policy Search Processes
10.1.3. Reinforcement Learning for Reward Optimization

10.2. OpenAI Gym

10.2.1. OpenAI Gym Environment
10.2.2. Creation of OpenAI Environments
10.2.3. Reinforcement Learning Algorithms in OpenAI

10.3. Neural Network Policies

10.3.1. Convolutional Neural Networks for Policy Search
10.3.2. Deep Learning Policies
10.3.3. Extending Neural Network Policies

10.4. Evaluating Actions: The Credit Assignment Problem

10.4.1. Analysis of the Credit Assignment Problem
10.4.2. Estimating the Return of Actions
10.4.3. Action Evaluation Models Based on Neural Networks

10.5. Policy Gradients

10.5.1. Reinforcement Learning with Policy Gradients
10.5.2. Optimization of Policy Gradients
10.5.3. Policy Gradient Algorithms

10.6. Markov Decision Processes

10.6.1. Optimization of Markov Decision Processes
10.6.2. Reinforcement Learning for Markov Decision Processes
10.6.3. Models of Markov Decision Processes

10.7. Temporal Difference Learning and Q-Learning

10.7.1. Application of Temporal Differences in Learning
10.7.2. Application of Q-Learning in Learning
10.7.3. Optimization of Q-Learning Parameters

10.8. Implementation of Deep Q-Learning and Deep Q-Learning Variants

10.8.1. Construction of Deep Neural Networks for Deep Q-Learning
10.8.2. Implementation of Deep Q-Learning
10.8.3. Variations of Deep Q-Learning

10.9. Reinforcement Learning Algorithms

10.9.1. Reinforcement Learning Algorithms
10.9.2. Reward Learning Algorithms
10.9.3. Punishment Learning Algorithms

10.10. Design of a Reinforcement Learning Environment. Practical Application

10.10.1. Design of a Reinforcement Learning Environment
10.10.2. Implementation of a Reinforcement Learning Algorithm
10.10.3. Evaluation of a Reinforcement Learning Algorithm
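
Finally, a self-contained illustration of the tabular Q-learning covered in section 10.7. The five-state corridor environment below is a toy of our own; the program itself works with OpenAI Gym environments (10.2):

```python
# Module 10 sketch: tabular Q-learning on a toy 5-state corridor.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(s, a):
    """Move along the corridor; reaching the last state pays reward 1."""
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

rng = np.random.default_rng(0)
for _ in range(500):                # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Temporal-difference update (10.7.1): move Q toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))   # learned policy: right in every non-terminal state
```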

##IMAGE##

Study from the comfort of your home and update your knowledge online with TECH Global University, the biggest online university in the world"

Professional Master's Degree in Deep Learning

Discover the future of artificial intelligence with the Professional Master's Degree in Deep Learning offered by TECH Global University. This postgraduate program, designed for those looking to advance their understanding and application of deep learning, will immerse you in the fascinating world of deep neural networks and the practical applications of artificial intelligence, all from the comfort of our online classes. As academic leaders in the field, we understand the growing importance of deep learning in today's technology landscape. This Professional Master's degree is designed to provide you with the essential skills needed to develop advanced algorithms, understand complex artificial intelligence models and apply innovative solutions in a variety of fields. Our online classes, taught by experts, will provide you with a quality education relevant to contemporary challenges. You'll explore the latest trends in intelligent algorithm development, complex data analysis and neural network technologies, all while receiving guidance from experienced professionals in the field.

Study Deep Learning from your own home

This Professional Master's Degree not only focuses on theory, but also gives you the opportunity to apply your knowledge in practical projects. Through real-world case studies and applied projects, you will develop a deep and practical understanding of deep learning, preparing you to lead in the application of these technologies in demanding professional environments. At TECH, we are proud to offer a Professional Master's Degree that not only equips you with advanced knowledge in deep learning, but also prepares you to meet the challenges and capitalize on the opportunities in the constant evolution of artificial intelligence. Upon successful completion of the postgraduate program, you will receive a certificate endorsed by the world's best online university, validating your skills and expertise. This Professional Master's Degree not only represents an educational achievement, but also puts you in a prime position to excel in the competitive working world of artificial intelligence. If you are ready to transform your career and explore the frontiers of deep learning, join TECH Global University and open the door to an exciting future in artificial intelligence.