University certificate
The world's largest artificial intelligence faculty”
Introduction to the Program
Transform how the world works and lives: learn to automate processes with this 100% online Advanced Master's Degree from TECH”

Computer vision is not only simplifying our daily lives but also paving the way toward a more connected and technological future. Thanks to AI, vision systems can analyze images, identify patterns, and make decisions with astonishing accuracy, while robots learn from their environment and perform complex tasks autonomously.
Iconic companies such as Boston Dynamics, NVIDIA, and Tesla are leading a revolutionary transformation of the tech industry, merging robotics, computer vision, and Artificial Intelligence to redefine key sectors. An outstanding example is Boston Dynamics' robots, true marvels of engineering capable of navigating complex terrain and performing tasks with remarkable precision. Tesla, in turn, is setting milestones in mobility with its autonomous driving systems, which use computer vision to interpret and respond to traffic conditions in real time. Giants like Amazon and Google are also at the forefront, integrating these technologies into logistics through autonomous robots and drones. These companies are not just creating smarter, more autonomous machines; they are building a future where efficiency and collaboration between humans and machines thrive. The urgent need for professionals capable of meeting these extraordinary challenges is the driving force behind this Advanced Master's Degree, which equips students with the most advanced knowledge and skills in robotics through a fully up-to-date curriculum.
What makes this program stand out is its 100% online methodology, designed so that students can balance their studies with their daily responsibilities, whether work or family-related. Moreover, it incorporates the innovative Relearning method, which adapts to each student's pace and ensures that knowledge is assimilated effectively and lastingly. Of course, such a high-level specialization requires a faculty of excellence, and this program is no exception. Every detail is carefully designed to create specialists who are prepared to excel in the workforce from day one.
Break all boundaries with Robotics and AI, exploring new frontiers on Earth and beyond”
This Advanced Master's Degree in Robotics and Computer Vision contains the most complete and up-to-date university program on the market. Its most notable features are:
- The development of practical cases presented by experts in Robotics and Computer Vision
- Graphic, schematic, and eminently practical contents that provide scientific and practical information on the disciplines essential for professional practice
- Practical exercises where self-assessment can be used to improve learning
- Special emphasis on innovative methodologies in Robotics and Computer Vision
- Theoretical lessons, questions to the expert, debate forums on controversial topics, and individual reflection assignments
- Content that is accessible from any fixed or portable device with an Internet connection
The true technological revolution begins with the most innovative teaching methodology in the current academic landscape”
Its teaching staff includes professionals from the field of Robotics and Computer Vision, who bring their working experience to this program, as well as recognized specialists from leading companies and prestigious universities.
The multimedia content, developed with the latest educational technology, will give professionals situated and contextual learning, i.e., a simulated environment that delivers an immersive experience designed to prepare them for real-life situations.
This program is designed around Problem-Based Learning, whereby the student must try to solve the different professional practice situations that arise throughout the program. For this purpose, the professional will be assisted by an innovative interactive video system created by renowned and experienced experts.
Overcome the toughest challenges of AI advancement through TECH's unique learning method”

Discover new horizons and dare to explore the unknown with the guidance of a distinguished faculty composed of top professionals”
Syllabus
The syllabus of this university program is designed to shape the leaders of the future in cutting-edge technologies, integrating theory and practice through an innovative, multidisciplinary approach. Learning is enriched with practical applications in high-impact fields such as industrial automation, autonomous vehicles, state-of-the-art drones, and medical robotics. To complement this comprehensive approach, the program includes seminars on ethics, sustainability, and the social impact of these disruptive technologies. Altogether, the program prepares professionals to lead the technological transformation, paving the way for a more efficient, intelligent, and connected future.

Learn how computer vision is teaching us to view the world in ways we once thought impossible”
Module 1. Robotics. Robot Design and Modeling
1.1. Robotics and Industry 4.0
1.1.1. Robotics and Industry 4.0
1.1.2. Application Fields and Use Cases
1.1.3. Sub-Areas of Specialization in Robotics
1.2. Robot Hardware and Software Architectures
1.2.1. Hardware Architectures and Real-Time
1.2.2. Robot Software Architectures
1.2.3. Communication Models and Middleware Technologies
1.2.4. Robot Operating System (ROS) Software Integration
1.3. Mathematical Modeling of Robots
1.3.1. Mathematical Representation of Rigid Solids
1.3.2. Rotations and Translations (see the sketch at the end of this module)
1.3.3. Hierarchical State Representation
1.3.4. Distributed Representation of the State in ROS (TF Library)
1.4. Robot Kinematics and Dynamics
1.4.1. Kinematics
1.4.2. Dynamics
1.4.3. Underactuated Robots
1.4.4. Redundant Robots
1.5. Robot Modeling and Simulation
1.5.1. Robot Modeling Technologies
1.5.2. Robot Modeling with URDF
1.5.3. Robot Simulation
1.5.4. Modeling with Gazebo Simulator
1.6. Robot Manipulators
1.6.1. Types of Manipulator Robots
1.6.2. Kinematics
1.6.3. Dynamics
1.6.4. Simulation
1.7. Land Mobile Robots
1.7.1. Types of Terrestrial Mobile Robots
1.7.2. Kinematics
1.7.3. Dynamics
1.7.4. Simulation
1.8. Aerial Mobile Robots
1.8.1. Types of Aerial Mobile Robots
1.8.2. Kinematics
1.8.3. Dynamics
1.8.4. Simulation
1.9. Aquatic Mobile Robots
1.9.1. Types of Aquatic Mobile Robots
1.9.2. Kinematics
1.9.3. Dynamics
1.9.4. Simulation
1.10. Bioinspired Robots
1.10.1. Humanoids
1.10.2. Robots with Four or More Legs
1.10.3. Modular Robots
1.10.4. Robots with Flexible Parts (Soft-Robotics)
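As an illustration of the content of sections 1.3.1 and 1.3.2 above, the following minimal sketch (assuming Python with NumPy; the frame names are purely illustrative) composes rigid-body rotations and translations as 4×4 homogeneous transformation matrices, the representation on which robot modeling tools such as the ROS TF library are built.

```python
# Minimal sketch: rigid-body poses as 4x4 homogeneous transforms (NumPy).
import numpy as np

def transform(yaw_rad, translation):
    """Rotation about the z-axis followed by a translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0],
                 [s,  c, 0],
                 [0,  0, 1]]
    T[:3, 3] = translation
    return T

# Illustrative frames: a link pose in the base frame, then a tool offset in the link frame
T_base_link = transform(np.pi / 2, [1.0, 0.0, 0.5])
T_link_tool = transform(0.0, [0.2, 0.0, 0.0])
T_base_tool = T_base_link @ T_link_tool        # frames compose by matrix multiplication

point_tool = np.array([0.0, 0.0, 0.0, 1.0])    # tool origin in homogeneous coordinates
print(T_base_tool @ point_tool)                # the same point expressed in the base frame
```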
Module 2. Intelligent Agents. Applying Artificial Intelligence to Robots and Softbots
2.1. Intelligent Agents and Artificial Intelligence
2.1.1. Intelligent Robots. Artificial Intelligence
2.1.2. Intelligent Agents
2.1.2.1. Hardware Agents. Robots
2.1.2.2. Software Agents. Softbots
2.1.3. Robotics Applications
2.2. Brain-Algorithm Connection
2.2.1. Biological Inspiration of Artificial Intelligence
2.2.2. Reasoning Implemented in Algorithms. Typology
2.2.3. Explainability of Results in Artificial Intelligence Algorithms
2.2.4. Evolution of Algorithms up to Deep Learning
2.3. Search Algorithms in the Solution Space
2.3.1. Elements in Solution Space Searches
2.3.2. Solution Space Search Algorithms in Artificial Intelligence Problems
2.3.3. Applications of Search and Optimization Algorithms
2.3.4. Search Algorithms Applied to Machine Learning
2.4. Machine Learning
2.4.1. Machine Learning
2.4.2. Supervised Learning Algorithms
2.4.3. Unsupervised Learning Algorithms
2.4.4. Reinforcement Learning Algorithms
2.5. Supervised Learning
2.5.1. Supervised Learning Methods
2.5.2. Decision Trees for Classification
2.5.3. Support Vector Machines
2.5.4. Artificial Neural Networks
2.5.5. Applications of Supervised Learning
2.6. Unsupervised Learning
2.6.1. Unsupervised Learning
2.6.2. Kohonen Networks
2.6.3. Self-Organizing Maps
2.6.4. K-Means Algorithm
2.7. Reinforcement Learning
2.7.1. Reinforcement Learning
2.7.2. Agents Based on Markov Processes
2.7.3. Reinforcement Learning Algorithms (see the sketch at the end of this module)
2.7.4. Reinforcement Learning Applied to Robotics
2.8. Probabilistic Inference
2.8.1. Probabilistic Inference
2.8.2. Types of Inference and Method Definition
2.8.3. Bayesian Inference as a Case Study
2.8.4. Nonparametric Inference Techniques
2.8.5. Gaussian Filters
2.9. From Theory to Practice: Developing a Robotic Intelligent Agent
2.9.1. Inclusion of Supervised Learning Modules in a Robotic Agent
2.9.2. Inclusion of Reinforcement Learning Modules in a Robotic Agent
2.9.3. Architecture of a Robotic Agent Controlled by Artificial Intelligence
2.9.4. Professional Tools for the Implementation of the Intelligent Agent
2.9.5. Phases of the Implementation of AI Algorithms in Robotic Agents
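As an illustration of the reinforcement learning algorithms listed in section 2.7, the sketch below implements tabular Q-learning on a toy one-dimensional environment (a minimal example assuming Python with NumPy; the environment and its reward are invented solely for illustration).

```python
# Minimal sketch: tabular Q-learning on a toy environment.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

def step(state, action):
    """Toy environment: moving 'right' out of the second-to-last state earns a reward."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if (state == n_states - 2 and action == 1) else 0.0
    return next_state, reward

rng = np.random.default_rng(0)
for episode in range(200):
    state = 0
    for _ in range(20):
        # epsilon-greedy action selection
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward = step(state, action)
        # Q-learning update rule
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q)   # the learned action values favour moving toward the rewarded state
```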
Module 3. Deep Learning
3.1. Artificial Intelligence
3.1.1. Machine Learning
3.1.2. Deep Learning
3.1.3. The Explosion of Deep Learning: Why Now?
3.2. Neural Networks
3.2.1. The Neural Network
3.2.2. Uses of Neural Networks
3.2.3. Linear Regression and Perceptron
3.2.4. Forward Propagation
3.2.5. Backpropagation
3.2.6. Feature Vectors
3.3. Loss Functions
3.3.1. Loss Functions
3.3.2. Types of Loss Functions
3.3.3. Choice of Loss Functions
3.4. Activation Functions
3.4.1. Activation Function
3.4.2. Linear Functions
3.4.3. Non-Linear Functions
3.4.4. Output vs. Hidden Layer Activation Functions
3.5. Regularization and Normalization
3.5.1. Regularization and Normalization
3.5.2. Overfitting and Data Augmentation
3.5.3. Regularization Methods: L1, L2 and Dropout
3.5.4. Normalization Methods: Batch, Weight, Layer
3.6. Optimization
3.6.1. Gradient Descent (see the sketch at the end of this module)
3.6.2. Stochastic Gradient Descent
3.6.3. Mini Batch Gradient Descent
3.6.4. Momentum
3.6.5. Adam
3.7. Hyperparameter Tuning and Weights
3.7.1. Hyperparameters
3.7.2. Batch Size vs. Learning Rate vs. Step Decay
3.7.3. Weights
3.8. Evaluation Metrics of a Neural Network
3.8.1. Accuracy
3.8.2. Dice Coefficient
3.8.3. Sensitivity vs. Specificity / Recall vs. Precision
3.8.4. ROC Curve (AUC)
3.8.5. F1-Score
3.8.6. Confusion Matrix
3.8.7. Cross-Validation
3.9. Frameworks and Hardware
3.9.1. TensorFlow
3.9.2. PyTorch
3.9.3. Caffe
3.9.4. Keras
3.9.5. Hardware for the Training Phase
3.10. Creation of a Neural Network – Training and Validation
3.10.1. Dataset
3.10.2. Network Construction
3.10.3. Training
3.10.4. Visualization of Results
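To make the training cycle of sections 3.2 and 3.6 concrete, the following minimal sketch (assuming Python with NumPy; the synthetic data are illustrative) fits a one-variable linear regression by gradient descent: forward propagation, loss evaluation, gradient computation, and parameter update.

```python
# Minimal sketch: gradient descent on a linear regression (NumPy).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 0.5 + 0.1 * rng.normal(size=100)   # synthetic data

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    y_hat = w * X[:, 0] + b                 # forward propagation
    error = y_hat - y
    loss = np.mean(error ** 2)              # mean squared error loss
    grad_w = 2 * np.mean(error * X[:, 0])   # gradients of the loss
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w                        # gradient descent step
    b -= lr * grad_b

print(f"w={w:.2f}, b={b:.2f}, loss={loss:.4f}")   # approaches w=3.0, b=0.5
```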
Module 4. Robotics in the Automation of Industrial Processes
4.1. Design of Automated Systems
4.1.1. Hardware Architectures
4.1.2. Programmable Logic Controllers
4.1.3. Industrial Communication Networks
4.2. Advanced Electrical Design I: Automation
4.2.1. Design of Electrical Panels and Symbology
4.2.2. Power and Control Circuits. Harmonics
4.2.3. Protection and Grounding Elements
4.3. Advanced Electrical Design II: Determinism and Safety
4.3.1. Machine Safety and Redundancy
4.3.2. Safety Relays and Triggers
4.3.3. Safety PLCs
4.3.4. Safe Networks
4.4. Electrical Actuation
4.4.1. Motors and Servomotors
4.4.2. Frequency Inverters and Controllers
4.4.3. Electrically Actuated Industrial Robotics
4.5. Hydraulic and Pneumatic Actuation
4.5.1. Hydraulic Design and Symbology
4.5.2. Pneumatic Design and Symbology
4.5.3. ATEX Environments in Automation
4.6. Transducers in Robotics and Automation
4.6.1. Position and Velocity Measurement
4.6.2. Force and Temperature Measurement
4.6.3. Presence Measurement
4.6.4. Vision Sensors
4.7. Programming and Configuration of Programmable Logic Controllers PLCs
4.7.1. PLC Programming: LD
4.7.2. PLC Programming: ST
4.7.3. PLC Programming: FBD and CFC
4.7.4. PLC Programming: SFC
4.8. Programming and Configuration of Equipment in Industrial Plants
4.8.1. Programming of Drives and Controllers
4.8.2. HMI Programming
4.8.3. Programming of Manipulator Robots
4.9. Programming and Configuration of Industrial Computer Equipment
4.9.1. Programming of Vision Systems
4.9.2. SCADA/Software Programming
4.9.3. Network Configuration
4.10. Automation Implementation
4.10.1. State Machine Design
4.10.2. Implementation of State Machines in PLCs
4.10.3. Implementation of Analog PID Control Systems in PLCs (see the sketch at the end of this module)
4.10.4. Automation Maintenance and Code Hygiene
4.10.5. Automation and Plant Simulation
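Section 4.10.3 deals with PID control implemented in PLCs; as a language-neutral illustration only, the sketch below expresses the same discrete PID logic in Python against a crude first-order plant model (the gains and the plant are invented for the example, not taken from the program).

```python
# Minimal sketch: a discrete PID loop closed around a toy first-order plant.
dt = 0.01
kp, ki, kd = 2.0, 1.0, 0.05
setpoint, value = 1.0, 0.0          # desired and measured process values
integral, prev_error = 0.0, 0.0

for step in range(1000):
    error = setpoint - value
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative   # PID control law
    prev_error = error
    # very rough first-order plant model standing in for the real process
    value += (output - value) * dt

print(f"final value: {value:.3f}")   # settles near the setpoint
```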
Module 5. Automatic Control Systems in Robotics
5.1. Analysis and Design of Nonlinear Systems
5.1.1. Analysis and Modeling of Nonlinear Systems
5.1.2. Feedback Control
5.1.3. Linearization by Feedback
5.2. Design of Control Techniques for Advanced Non-Linear Systems
5.2.1. Sliding Mode Control
5.2.2. Lyapunov and Backstepping Control
5.2.3. Control Based on Passivity
5.3. Control Architectures
5.3.1. The Robotics Paradigm
5.3.2. Control Architectures
5.3.3. Applications and Examples of Control Architectures
5.4. Motion Control for Robotic Arms
5.4.1. Kinematic and Dynamic Modeling
5.4.2. Control in Joint Space
5.4.3. Control in Operational Space
5.5. Actuator Force Control
5.5.1. Force Control
5.5.2. Impedance Control
5.5.3. Hybrid Control
5.6. Terrestrial Mobile Robots
5.6.1. Equations of Motion
5.6.2. Control Techniques for Terrestrial Robots
5.6.3. Mobile Manipulators
5.7. Aerial Mobile Robots
5.7.1. Equations of Motion
5.7.2. Control Techniques in Aerial Robots
5.7.3. Aerial Manipulation
5.8. Control Based on Machine Learning Techniques
5.8.2. Control Using Supervised Learning
5.8.3. Control Using Reinforcement Learning
5.8.4. Control by Unsupervised Learning
5.9. Vision-Based Control
5.9.1. Position-Based Visual Servoing
5.9.2. Image-Based Visual Servoing
5.9.3. Hybrid Visual Servoing
5.10. Predictive Control
5.10.1. Models and State Estimation
5.10.2. MPC Applied to Mobile Robots
5.10.3. MPC Applied to UAVs
Module 6. Planning Algorithms in Robots
6.1. Classical Planning Algorithms
6.1.1. Discrete Planning: State Space
6.1.2. Planning Problems in Robotics. Robotic Systems Models
6.1.3. Classification of Planners
6.2. The Trajectory Planning Problem in Mobile Robots
6.2.1. Forms of Environment Representation: Graphs
6.2.2. Search Algorithms in Graphs (see the sketch at the end of this module)
6.2.3. Introduction of Costs in Graphs
6.2.4. Search Algorithms in Weighted Graphs
6.2.5. Algorithms with any Angle Approach
6.3. Planning in High Dimensional Robotic Systems
6.3.1. High Dimensionality Robotics Problems: Manipulators
6.3.2. Direct/Inverse Kinematic Model
6.3.3. Sampling Planning Algorithms PRM and RRT
6.3.4. Planning Under Dynamic Constraints
6.4. Optimal Sampling Planning
6.4.1. Problem of Sampling-Based Planners
6.4.2. RRT* Probabilistic Optimality Concept
6.4.3. Reconnection Step: Dynamic Constraints
6.4.4. CForest. Parallelizing Planning
6.5. Real Implementation of a Motion Planning System
6.5.1. Global Planning Problem. Dynamic Environments
6.5.2. Action and Sensing Cycle. Acquisition of Information from the Environment
6.5.3. Local and Global Planning
6.6. Coordination in Multi-Robot Systems I: Centralized System
6.6.1. Multirobot Coordination Problem
6.6.2. Collision Detection and Resolution: Trajectory Modification with Genetic Algorithms
6.6.3. Other Bio-Inspired Algorithms: Particle Swarm and Fireworks
6.6.4. Collision Avoidance by Choice of Maneuver Algorithm
6.7. Coordination in Multi-Robot Systems II: Distributed Approaches I
6.7.1. Use of Complex Objective Functions
6.7.2. Pareto Front
6.7.3. Multi-Objective Evolutionary Algorithms
6.8. Coordination in Multirobot Systems III: Distributed Approaches II
6.8.1. Order 1 Planning Systems
6.8.2. ORCA Algorithm
6.8.3. Addition of Kinematic and Dynamic Constraints in ORCA
6.9. Decision Planning Theory
6.9.1. Decision Theory
6.9.2. Sequential Decision Systems
6.9.3. Sensors and Information Spaces
6.9.4. Planning for Uncertainty in Sensing and Actuation
6.10. Reinforcement Learning Planning Systems
6.10.1. Obtaining the Expected Reward of a System
6.10.2. Mean Reward Learning Techniques
6.10.3. Inverse Reinforcement Learning
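As an illustration of the graph-search planners of section 6.2, the following sketch (Python standard library only; the occupancy grid is invented for the example) runs A* with a Manhattan-distance heuristic on a small grid and reconstructs the resulting path.

```python
# Minimal sketch: A* search on a small occupancy grid.
import heapq

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]          # 1 = obstacle cell
start, goal = (0, 0), (3, 3)

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
            yield (nr, nc)

def heuristic(cell):
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])   # Manhattan distance

open_set = [(heuristic(start), 0, start)]
came_from, g_cost = {}, {start: 0}
while open_set:
    _, g, current = heapq.heappop(open_set)
    if current == goal:
        break
    for nb in neighbors(current):
        new_g = g + 1
        if new_g < g_cost.get(nb, float("inf")):
            g_cost[nb] = new_g
            came_from[nb] = current
            heapq.heappush(open_set, (new_g + heuristic(nb), new_g, nb))

# reconstruct the path by walking the parent links backwards
path, node = [goal], goal
while node != start:
    node = came_from[node]
    path.append(node)
print(list(reversed(path)))
```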
Module 7. Computer Vision
7.1. Human Perception
7.1.1. Human Visual System
7.1.2. Color
7.1.3. Visible and Non-Visible Frequencies
7.2. Chronicle of Computer Vision
7.2.1. Principles
7.2.2. Evolution
7.2.3. The Importance of Computer Vision
7.3. Digital Image Composition
7.3.1. The Digital Image
7.3.2. Types of Images
7.3.3. Color Spaces (see the sketch at the end of this module)
7.3.4. RGB
7.3.5. HSV and HSL
7.3.6. CMY-CMYK
7.3.7. YCbCr
7.3.8. Indexed Image
7.4. Image Acquisition Systems
7.4.1. Operation of a Digital Camera
7.4.2. The Correct Exposure for Each Situation
7.4.3. Depth of Field
7.4.4. Resolution
7.4.5. Image Formats
7.4.6. HDR Mode
7.4.7. High Resolution Cameras
7.4.8. High-Speed Cameras
7.5. Optical Systems
7.5.1. Optical Principles
7.5.2. Conventional Lenses
7.5.3. Telecentric Lenses
7.5.4. Types of Autofocus Lenses
7.5.5. Focal Length
7.5.6. Depth of Field
7.5.7. Optical Distortion
7.5.8. Calibration of an Image
7.6. Illumination Systems
7.6.1. Importance of Illumination
7.6.2. Frequency Response
7.6.3. LED Illumination
7.6.4. Outdoor Lighting
7.6.5. Types of Lighting for Industrial Applications. Effects
7.7. 3D Capture Systems
7.7.1. Stereo Vision
7.7.2. Triangulation
7.7.3. Structured Light
7.7.4. Time of Flight
7.7.5. Lidar
7.8. Multispectrum
7.8.1. Multispectral Cameras
7.8.2. Hyperspectral Cameras
7.9. Non-Visible Near Spectrum
7.9.1. IR Cameras
7.9.2. UV Cameras
7.9.3. Converting From Non-Visible to Visible by Illumination
7.10. Other Band Spectrums
7.10.1. X-Ray
7.10.2. Terahertz
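As a small illustration of the color spaces covered in section 7.3, the sketch below (assuming OpenCV is installed and an image file named sample.jpg exists; both the file name and the threshold values are assumptions for the example) converts a BGR image to HSV and YCrCb and thresholds on the hue channel.

```python
# Minimal sketch: color-space conversions and a hue threshold with OpenCV.
import cv2

bgr = cv2.imread("sample.jpg")                  # OpenCV loads images as BGR
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)      # hue, saturation, value
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # luma / chroma representation

# Threshold on the hue channel, e.g. to isolate reddish pixels
mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
print(bgr.shape, hsv.shape, ycrcb.shape, mask.mean())
```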
Module 8. Applications and State-of-the-Art
8.1. Industrial Applications
8.1.1. Machine Vision Libraries
8.1.2. Compact Cameras
8.1.3. PC-Based Systems
8.1.4. Industrial Robotics
8.1.5. Pick and Place 2D
8.1.6. Bin Picking
8.1.7. Quality Control
8.1.8. Presence/Absence of Components
8.1.9. Dimensional Control
8.1.10. Labeling Control
8.1.11. Traceability
8.2. Autonomous Vehicles
8.2.1. Driver Assistance
8.2.2. Autonomous Driving
8.3. Computer Vision for Content Analysis
8.3.1. Filtering by Content
8.3.2. Visual Content Moderation
8.3.3. Tracking Systems
8.3.4. Brand and Logo Identification
8.3.5. Video Labeling and Classification
8.3.6. Scene Change Detection
8.3.7. Text or Credits Extraction
8.4. Medical Application
8.4.1. Disease Detection and Localization
8.4.2. Cancer and X-Ray Analysis
8.4.3. Advances in Computer Vision Driven by COVID-19
8.4.4. Assistance in the Operating Room
8.5. Spatial Applications
8.5.1. Satellite Image Analysis
8.5.2. Computer Vision for the Study of Space
8.5.3. Mission to Mars
8.6. Commercial Applications
8.6.1. Stock Control
8.6.2. Video Surveillance, Home Security
8.6.3. Parking Cameras
8.6.4. Population Control Cameras
8.6.5. Speed Cameras
8.7. Vision Applied to Robotics
8.7.1. Drones
8.7.2. AGV
8.7.3. Vision in Collaborative Robots
8.7.4. The Eyes of the Robots
8.8. Augmented Reality
8.8.1. Operation
8.8.2. Devices
8.8.3. Applications in the Industry
8.8.4. Commercial Applications
8.9. Cloud Computing
8.9.1. Cloud Computing Platforms
8.9.2. From Cloud Computing to Production
8.10. Research and State-of-the-Art
8.10.1. Commercial Applications
8.10.2. What’s Cooking
8.10.3. The Future of Computer Vision
Module 9. Computer Vision Techniques in Robotics: Image Processing and Analysis
9.1. Computer Vision
9.1.1. Computer Vision
9.1.2. Elements of a Computer Vision System
9.1.3. Mathematical Tools
9.2. Optical Sensors for Robotics
9.2.1. Passive Optical Sensors
9.2.2. Active Optical Sensors
9.2.3. Non-Optical Sensors
9.3. Image Acquisition
9.3.1. Image Representation
9.3.2. Color Space
9.3.3. Digitizing Process
9.4. Image Geometry
9.4.1. Lens Models
9.4.2. Camera Models
9.4.3. Camera Calibration
9.5. Mathematical Tools
9.5.1. Histogram of an Image
9.5.2. Convolution
9.5.3. Fourier Transform
9.6. Image Preprocessing
9.6.1. Noise Analysis
9.6.2. Image Smoothing
9.6.3. Image Enhancement
9.7. Image Segmentation
9.7.1. Contour-Based Techniques
9.7.3. Histogram-Based Techniques
9.7.4. Morphological Operations
9.8. Image Feature Detection
9.8.1. Point of Interest Detection
9.8.2. Feature Descriptors
9.8.3. Feature Matching
9.9. 3D Vision Systems
9.9.1. 3D Perception
9.9.2. Feature Matching between Images
9.9.3. Multiple View Geometry
9.10. Computer Vision based Localization
9.10.1. The Robot Localization Problem
9.10.2. Visual Odometry
9.10.3. Sensory Fusion
Module 10. Robot Visual Perception Systems with Machine Learning
10.1. Unsupervised Learning Methods applied to Computer Vision
10.1.1. Clustering
10.1.2. PCA
10.1.3. Nearest Neighbors
10.1.4. Similarity and Matrix Decomposition
10.2. Supervised Learning Methods Applied to Computer Vision
10.2.1. “Bag of Words” Concept
10.2.2. Support Vector Machine
10.2.3. Latent Dirichlet Allocation
10.2.4. Neural Networks
10.3. Deep Neural Networks: Structures, Backbones and Transfer Learning
10.3.1. Feature Generating Layers
10.3.1.1. VGG
10.3.1.2. DenseNet
10.3.1.3. ResNet
10.3.1.4. Inception
10.3.1.5. GoogLeNet
10.3.2. Transfer Learning (see the sketch at the end of this module)
10.3.3. The Data. Preparation for Training
10.4. Computer Vision with Deep Learning I: Detection and Segmentation
10.4.1. YOLO and SSD Differences and Similarities
10.4.2. U-Net
10.4.3. Other Structures
10.5. Computer Vision with Deep Learning II: Generative Adversarial Networks
10.5.1. Image Super-Resolution Using GAN
10.5.2. Creation of Realistic Images
10.5.3. Scene Understanding
10.6. Learning Techniques for Localization and Mapping in Mobile Robotics
10.6.1. Loop Closure Detection and Relocation
10.6.2. Magic Leap. SuperPoint and SuperGlue
10.6.3. Depth from Monocular
10.7. Bayesian Inference and 3D Modeling
10.7.1. Bayesian Models and “Classical” Learning
10.7.2. Implicit Surfaces with Gaussian Processes (GPIS)
10.7.3. 3D Segmentation Using GPIS
10.7.4. Neural Networks for 3D Surface Modeling
10.8. End-to-End Applications of Deep Neural Networks
10.8.1. End-to-End System. Example of Person Identification
10.8.2. Object Manipulation with Visual Sensors
10.8.3. Motion Generation and Planning with Visual Sensors
10.9. Cloud Technologies to Accelerate the Development of Deep Learning Algorithms
10.9.1. Use of GPUs for Deep Learning
10.9.2. Agile Development with Google Colab
10.9.3. Remote GPUs, Google Cloud and AWS
10.10. Deployment of Neural Networks in Real Applications
10.10.1. Embedded Systems
10.10.2. Deployment of Neural Networks. Usage
10.10.3. Network Optimizations in Deployment, Example with TensorRT
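As an illustration of the transfer learning idea in section 10.3.2, the sketch below (assuming PyTorch and torchvision are available; the five-class head and the random batch are placeholders) freezes a pretrained ResNet-18 backbone and trains only a new classification head.

```python
# Minimal sketch: transfer learning with a frozen pretrained backbone.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False              # freeze the feature-generating layers

backbone.fc = nn.Linear(backbone.fc.in_features, 5)   # new head for 5 classes

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one dummy training step on random data, standing in for a real dataloader
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```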
Module 11. Visual SLAM. Robot Localization and Simultaneous Mapping Using Computer Vision Techniques
11.1. Simultaneous Localization and Mapping (SLAM)
11.1.1. Simultaneous Localization and Mapping. SLAM
11.1.2. SLAM Applications
11.1.3. SLAM Operation
11.2. Projective Geometry
11.2.1. Pin-Hole Model
11.2.2. Estimation of the Intrinsic Parameters of a Camera
11.2.3. Homography, Basic Principles and Estimation
11.2.4. Fundamental Matrix, Principles and Estimation
11.3. Gaussian Filters
11.3.1. Kalman Filter (see the sketch at the end of this module)
11.3.2. Information Filter
11.3.3. Adjustment and Parameterization of Gaussian Filters
11.4. Stereo EKF-SLAM
11.4.1. Stereo Camera Geometry
11.4.2. Feature Extraction and Search
11.4.3. Kalman Filter for Stereo SLAM
11.4.4. Stereo EKF-SLAM Parameter Setting
11.5. Monocular EKF-SLAM
11.5.1. EKF-SLAM Landmark Parameterization
11.5.2. Kalman Filter for Monocular SLAM
11.5.3. Monocular EKF-SLAM Parameter Tuning
11.6. Loop Closure Detection
11.6.1. Brute Force Algorithm
11.6.2. FABMAP
11.6.3. Abstraction Using GIST and HOG
11.6.4. Deep Learning Detection
11.7. Graph-SLAM
11.7.1. Graph-SLAM
11.7.2. RGBD-SLAM
11.7.3. ORB-SLAM
11.8. Direct Visual SLAM
11.8.1. Analysis of the Direct Visual SLAM Algorithm
11.8.2. LSD-SLAM
11.8.3. SVO
11.9. Visual Inertial SLAM
11.9.1. Integration of Inertial Measurements
11.9.2. Loosely Coupled: SOFT-SLAM
11.9.3. Tightly Coupled: VINS-Mono
11.10. Other SLAM Technologies
11.10.1. Applications Beyond Visual SLAM
11.10.2. Lidar-SLAM
11.10.3. Range-Only SLAM
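As an illustration of the Gaussian filters in section 11.3, the following minimal sketch (Python with NumPy; the constant-velocity model and noise levels are invented for the example) runs a linear Kalman filter that estimates position and velocity from noisy position measurements.

```python
# Minimal sketch: a 1D constant-velocity Kalman filter (NumPy).
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])      # state transition (position, velocity)
H = np.array([[1.0, 0.0]])           # only position is measured
Q = 1e-3 * np.eye(2)                 # process noise covariance
R = np.array([[0.05]])               # measurement noise covariance

x = np.array([0.0, 0.0])             # initial state estimate
P = np.eye(2)                        # initial state covariance

rng = np.random.default_rng(0)
for k in range(100):
    true_pos = 0.5 * k * dt                       # target moving at 0.5 m/s
    z = true_pos + rng.normal(scale=0.2)          # noisy measurement
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                                 # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print(x)   # estimated position and velocity close to the true motion
```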
Module 12. Application of Virtual and Augmented Reality Technologies to Robotics
12.1. Immersive Technologies in Robotics
12.1.1. Virtual Reality in Robotics
12.1.2. Augmented Reality in Robotics
12.1.3. Mixed Reality in Robotics
12.1.4. Difference between Realities
12.2. Construction of Virtual Environments
12.2.1. Materials and Textures
12.2.2. Lighting
12.2.3. Virtual Sound and Smell
12.3. Robot Modeling in Virtual Environments
12.3.1. Geometric Modeling
12.3.2. Physical Modeling
12.3.3. Model Standardization
12.4. Modeling of Robot Dynamics and Kinematics: Virtual Physics Engines
12.4.1. Physics Engines. Typology
12.4.2. Configuration of a Physics Engine
12.4.3. Physics Engines in Industry
12.5. Platforms, Peripherals and Tools Most Commonly Used in Virtual Reality
12.5.1. Virtual Reality Headsets
12.5.2. Interaction Peripherals
12.5.3. Virtual Sensors
12.6. Augmented Reality Systems
12.6.1. Insertion of Virtual Elements into Reality
12.6.2. Types of Visual Markers
12.6.3. Augmented Reality Technologies
12.7. Metaverse: Virtual Environments of Intelligent Agents and People
12.7.1. Avatar Creation
12.7.2. Intelligent Agents in Virtual Environments
12.7.3. Construction of Multi-User Environments for VR/AR
12.8. Creation of Virtual Reality Projects for Robotics
12.8.1. Phases of Development of a Virtual Reality Project
12.8.2. Deployment of Virtual Reality Systems
12.8.3. Virtual Reality Resources
12.9. Creating Augmented Reality Projects for Robotics
12.9.1. Phases of Development of an Augmented Reality Project
12.9.2. Deployment of Augmented Reality Projects
12.9.3. Augmented Reality Resources
12.10. Robot Teleoperation with Mobile Devices
12.10.1. Mixed Reality on Mobile Devices
12.10.2. Immersive Systems using Mobile Device Sensors
12.10.3. Examples of Mobile Projects
Module 13. Robot Communication and Interaction Systems
13.1. Speech Recognition: Stochastic Systems
13.1.1. Acoustic Speech Modeling
13.1.2. Hidden Markov Models
13.1.3. Linguistic Speech Modeling: N-Grams, BNF Grammars
13.2. Speech Recognition: Deep Learning
13.2.1. Deep Neural Networks
13.2.2. Recurrent Neural Networks
13.2.3. LSTM Cells
13.3. Speech Recognition: Prosody and Environmental Effects
13.3.1. Ambient Noise
13.3.2. Multi-Speaker Recognition
13.3.3. Speech Pathologies
13.4. Natural Language Understanding: Heuristic and Probabilistic Systems
13.4.1. Syntactic-Semantic Analysis: Linguistic Rules
13.4.2. Comprehension Based on Heuristic Rules
13.4.3. Probabilistic Systems: Logistic Regression and SVM
13.4.4. Understanding Based on Neural Networks
13.5. Dialogue Management: Heuristic/Probabilistic Strategies
13.5.1. Interlocutor’s Intention
13.5.2. Template-Based Dialogue
13.5.3. Stochastic Dialogue Management: Bayesian Networks
13.6. Dialogue Management: Advanced Strategies
13.6.1. Reinforcement-Based Learning Systems
13.6.2. Neural Network-Based Systems
13.6.3. From Speech to Intention in a Single Network
13.7. Response Generation and Speech Synthesis
13.7.1. Response Generation: From Idea to Coherent Text
13.7.2. Speech Synthesis by Concatenation
13.7.3. Stochastic Speech Synthesis
13.8. Dialogue Adaptation and Contextualization
13.8.1. Dialogue Initiative
13.8.2. Adaptation to the Speaker
13.8.3. Adaptation to the Context of the Dialogue
13.9. Robots and Social Interactions: Emotion Recognition, Synthesis and Expression
13.9.1. Artificial Voice Paradigms: Robotic Voice and Natural Voice
13.9.2. Emotion Recognition and Sentiment Analysis
13.9.3. Emotional Voice Synthesis
13.10. Robots and Social Interactions: Advanced Multimodal Interfaces
13.10.1. Combination of Vocal and Tactile Interfaces
13.10.2. Sign Language Recognition and Translation
13.10.3. Visual Avatars: Voice to Sign Language Translation
Module 14. Digital Image Processing
14.1. Computer Vision Development Environment
14.1.1. Computer Vision Libraries
14.1.2. Programming Environment
14.1.3. Visualization Tools
14.2. Digital Image Processing
14.2.1. Pixel Relationships
14.2.2. Image Operations
14.2.3. Geometric Transformations
14.3. Pixel Operations
14.3.1. Histogram
14.3.2. Histogram Transformations
14.3.3. Operations on Color Images
14.4. Logical and Arithmetic Operations
14.4.1. Addition and Subtraction
14.4.2. Product and Division
14.4.3. AND/NAND
14.4.4. OR/NOR
14.4.5. XOR/XNOR
14.5. Filters
14.5.1. Masks and Convolution
14.5.2. Linear Filtering
14.5.3. Non-Linear Filtering
14.5.4. Fourier Analysis
14.6. Morphological Operations (see the sketch at the end of this module)
14.6.1. Erosion and Dilation
14.6.2. Closing and Opening
14.6.3. Top-Hat and Black-Hat
14.6.4. Contour Detection
14.6.5. Skeleton
14.6.6. Hole Filling
14.6.7. Convex Hull
14.7. Image Analysis Tools
14.7.1. Edge Detection
14.7.2. Detection of Blobs
14.7.3. Dimensional Control
14.7.4. Color Inspection
14.8. Object Segmentation
14.8.1. Image Segmentation
14.8.2. Classical Segmentation Techniques
14.8.3. Real Applications
14.9. Image Calibration
14.9.1. Image Calibration
14.9.2. Methods of Calibration
14.9.3. Calibration Process in a 2D Camera/Robot System
14.10. Image Processing in a Real Environment
14.10.1. Problem Analysis
14.10.2. Image Processing
14.10.3. Feature Extraction
14.10.4. Final Results
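As an illustration of the morphological operations in section 14.6, the sketch below (assuming OpenCV and a grayscale image file named parts.png; both are assumptions for the example) thresholds an image with Otsu's method, applies erosion, dilation, opening, closing, and top-hat, and finishes with contour detection.

```python
# Minimal sketch: morphological operations and contour detection with OpenCV.
import cv2

gray = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
eroded  = cv2.erode(binary, kernel)
dilated = cv2.dilate(binary, kernel)
opened  = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # erosion then dilation
closed  = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # dilation then erosion
tophat  = cv2.morphologyEx(binary, cv2.MORPH_TOPHAT, kernel)  # image minus its opening

# contour detection on the cleaned-up mask
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} blobs detected")
```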
Module 15. Advanced Digital Image Processing
15.1. Optical Character Recognition (OCR)
15.1.1. Image Pre-Processing
15.1.2. Text Detection
15.1.3. Text Recognition
15.2. Code Reading
15.2.1. 1D Codes
15.2.2. 2D Codes
15.2.3. Applications
15.3. Pattern Search
15.3.1. Pattern Search
15.3.2. Patterns Based on Gray Level
15.3.3. Patterns Based on Contours
15.3.4. Patterns Based on Geometric Shapes
15.3.5. Other Techniques
15.4. Object Tracking with Conventional Vision
15.4.1. Background Subtraction
15.4.2. Meanshift
15.4.3. Camshift
15.4.4. Optical Flow
15.5. Facial Recognition
15.5.1. Facial Landmark Detection
15.5.2. Applications
15.5.3. Facial Recognition
15.5.4. Emotion Recognition
15.6. Panoramas and Alignment
15.6.1. Stitching
15.6.2. Image Composition
15.6.3. Photomontage
15.7. High Dynamic Range (HDR) and Photometric Stereo
15.7.1. Increasing the Dynamic Range
15.7.2. Image Compositing for Contour Enhancement
15.7.3. Techniques for the Use of Dynamic Applications
15.8. Image Compression
15.8.1. Image Compression
15.8.2. Types of Compressors
15.8.3. Image Compression Techniques
15.9. Video Processing
15.9.1. Image Sequences
15.9.2. Video Formats and Codecs
15.9.3. Reading a Video
15.9.4. Frame Processing
15.10. Real Application of Image Processing
15.10.1. Problem Analysis
15.10.2. Image Processing
15.10.3. Feature Extraction
15.10.4. Final Results
Module 16. 3D Image Processing
16.1. 3D Imaging
16.1.1. 3D Imaging
16.1.2. 3D Image Processing Software and Visualizations
16.1.3. Metrology Software
16.2. Open3D
16.2.1. Library for 3D Data Processing
16.2.2. Characteristics
16.2.3. Installation and Use
16.3. The Data
16.3.1. Depth Maps as 2D Images
16.3.2. Pointclouds
16.3.3. Normals
16.3.4. Surfaces
16.4. Visualization
16.4.1. Data Visualization
16.4.2. Controls
16.4.3. Web Display
16.5. Filters (see the sketch at the end of this module)
16.5.1. Distance Between Points, Eliminate Outliers
16.5.2. High Pass Filter
16.5.3. Downsampling
16.6. Geometry and Feature Extraction
16.6.1. Extraction of a Profile
16.6.2. Depth Measurement
16.6.3. Volume
16.6.4. 3D Geometric Shapes
16.6.5. Shots
16.6.6. Projection of a Point
16.6.7. Geometric Distances
16.6.8. K-d Tree
16.6.9. 3D Features
16.7. Registration and Meshing
16.7.1. Concatenation
16.7.2. ICP
16.7.3. RANSAC 3D
16.8. 3D Object Recognition
16.8.1. Searching for an Object in the 3D Scene
16.8.2. Segmentation
16.8.3. Bin Picking
16.9. Surface Analysis
16.9.1. Smoothing
16.9.2. Orientable Surfaces
16.9.3. Octree
16.10. Triangulation
16.10.1. From Mesh to Point Cloud
16.10.2. Depth Map Triangulation
16.10.3. Triangulation of Unordered PointClouds
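As an illustration of the Open3D-based filtering steps in sections 16.2 and 16.5, the following sketch (assuming the open3d package is installed and a point cloud file named scan.ply exists; both are assumptions) downsamples a cloud with a voxel grid, removes statistical outliers, and estimates normals for later surface analysis.

```python
# Minimal sketch: point cloud filtering with Open3D.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

# downsampling with a voxel grid
down = pcd.voxel_down_sample(voxel_size=0.01)

# statistical outlier removal (distance-to-neighbours criterion)
filtered, kept_indices = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# estimate normals for later surface analysis
filtered.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

print(pcd, down, filtered)
o3d.visualization.draw_geometries([filtered])   # interactive viewer
```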
Module 17. Convolutional Neural Networks and Image Classification
17.1. Convolutional Neural Networks
17.1.1. Introduction
17.1.2. Convolution
17.1.3. CNN Building Blocks (see the sketch at the end of this module)
17.2. Types of CNN Layers
17.2.1. Convolutional
17.2.2. Activation
17.2.3. Batch Normalization
17.2.4. Pooling
17.2.5. Fully Connected
17.3. Metrics
17.3.1. Confusion Matrix
17.3.2. Accuracy
17.3.3. Precision
17.3.4. Recall
17.3.5. F1 Score
17.3.6. ROC Curve
17.3.7. AUC
17.4. Main Architectures
17.4.1. AlexNet
17.4.2. VGG
17.4.3. ResNet
17.4.4. GoogLeNet
17.5. Image Classification
17.5.1. Introduction
17.5.2. Analysis of Data
17.5.3. Data Preparation
17.5.4. Model Training
17.5.5. Model Validation
17.6. Practical Considerations for CNN Training
17.6.1. Optimizer Selection
17.6.2. Learning Rate Scheduler
17.6.3. Check Training Pipeline
17.6.4. Training with Regularization
17.7. Best Practices in Deep Learning
17.7.1. Transfer Learning
17.7.2. Fine Tuning
17.7.3. Data Augmentation
17.8. Statistical Data Evaluation
17.8.1. Number of Datasets
17.8.2. Number of Labels
17.8.3. Number of Images
17.8.4. Data Balancing
17.9. Deployment
17.9.1. Saving and Loading Models
17.9.2. ONNX
17.9.3. Inference
17.10. Practical Case: Image Classification
17.10.1. Data Analysis and Preparation
17.10.2. Testing the Training Pipeline
17.10.3. Model Training
17.10.4. Model Validation
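As an illustration of the CNN building blocks listed in sections 17.1 and 17.2, the sketch below (assuming PyTorch; the layer sizes are arbitrary) stacks convolution, batch normalization, activation, pooling, and a fully connected head into a small image classifier.

```python
# Minimal sketch: a small CNN classifier built from standard layers (PyTorch).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 32, 32))    # a batch of four 32x32 RGB images
print(logits.shape)                          # torch.Size([4, 10])
```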
Module 18. Object Detection
18.1. Object Detection and Tracking
18.1.1. Object Detection
18.1.2. Use Cases
18.1.3. Object Tracking
18.1.4. Use Cases
18.1.5. Occlusions, Rigid and Non-Rigid Poses
18.2. Assessment Metrics
18.2.1. IOU - Intersection Over Union (see the sketch at the end of this module)
18.2.2. Confidence Score
18.2.3. Recall
18.2.4. Precision
18.2.5. Precision-Recall Curve
18.2.6. Mean Average Precision (mAP)
18.3. Traditional Methods
18.3.1. Sliding Window
18.3.2. Viola-Jones Detector
18.3.3. HOG
18.3.4. Non-Maximum Suppression (NMS)
18.4. Datasets
18.4.1. Pascal VOC
18.4.2. MS COCO
18.4.3. ImageNet (2014)
18.4.4. MOT Challenge
18.5. Two Shot Object Detector
18.5.1. R-CNN
18.5.2. Fast R-CNN
18.5.3. Faster R-CNN
18.5.4. Mask R-CNN
18.6. Single Shot Object Detector
18.6.1. SSD
18.6.2. YOLO
18.6.3. RetinaNet
18.6.4. CenterNet
18.6.5. EfficientDet
18.7. Backbones
18.7.1. VGG
18.7.2. ResNet
18.7.3. MobileNet
18.7.4. ShuffleNet
18.7.5. Darknet
18.8. Object Tracking
18.8.1. Classical Approaches
18.8.2. Particle Filters
18.8.3. Kalman Filter
18.8.4. SORT Tracker
18.8.5. DeepSORT
18.9. Deployment
18.9.1. Computing Platform
18.9.2. Choice of Backbone
18.9.3. Choice of Framework
18.9.4. Model Optimization
18.9.5. Model Versioning
18.10. Study: People Detection and Tracking
18.10.1. Detection of People
18.10.2. Tracking of People
18.10.3. Re-Identification
18.10.4. Counting People in Crowds
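As an illustration of the evaluation metric in section 18.2.1, the following self-contained sketch computes the Intersection over Union of two axis-aligned bounding boxes (the box coordinates are invented for the example).

```python
# Minimal sketch: IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175, roughly 0.143
```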
Module 19. Image Segmentation with Deep Learning
19.1. Object Detection and Segmentation
19.1.1. Semantic Segmentation
19.1.1.1. Semantic Segmentation Use Cases
19.1.2. Instance Segmentation
19.1.2.1. Instance Segmentation Use Cases
19.2. Evaluation Metrics
19.2.1. Similarities with Other Methods
19.2.2. Pixel Accuracy
19.2.3. Dice Coefficient (F1 Score)
19.3. Cost Functions
19.3.1. Dice Loss (see the sketch at the end of this module)
19.3.2. Focal Loss
19.3.3. Tversky Loss
19.3.4. Other Functions
19.4. Traditional Segmentation Methods
19.4.1. Threshold Application with Otsu and Ridler
19.4.2. Self-Organizing Maps
19.4.3. GMM-EM Algorithm
19.5. Semantic Segmentation Applying Deep Learning: FCN
19.5.1. FCN
19.5.2. Architecture
19.5.3. FCN Applications
19.6. Semantic Segmentation Applying Deep Learning: U-NET
19.6.1. U-NET
19.6.2. Architecture
19.6.3. U-NET Application
19.7. Semantic Segmentation Applying Deep Learning: DeepLab
19.7.1. DeepLab
19.7.2. Architecture
19.7.3. DeepLab Application
19.8. Instance Segmentation Applying Deep Learning: Mask R-CNN
19.8.1. Mask R-CNN
19.8.2. Architecture
19.8.3. Application of Mask R-CNN
19.9. Video Segmentation
19.9.1. STFCN
19.9.2. Semantic Video CNNs
19.9.3. Clockwork Convnets
19.9.4. Low-Latency
19.10. Point Cloud Segmentation
19.10.1. The Point Cloud
19.10.2. PointNet
19.10.3. A-CNN
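As an illustration of the cost functions in section 19.3, the sketch below (assuming PyTorch; the random predictions and masks are placeholders for a real model and dataset) implements a soft Dice loss for binary segmentation and uses it as a differentiable training objective.

```python
# Minimal sketch: a soft Dice loss for binary segmentation masks (PyTorch).
import torch

def dice_loss(pred_logits, target, eps=1e-6):
    """pred_logits and target share the shape (batch, 1, H, W); target is 0/1."""
    probs = torch.sigmoid(pred_logits)
    intersection = (probs * target).sum(dim=(1, 2, 3))
    denominator = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * intersection + eps) / (denominator + eps)
    return 1 - dice.mean()

pred = torch.randn(2, 1, 64, 64, requires_grad=True)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = dice_loss(pred, mask)
loss.backward()                      # usable directly as a training objective
print(float(loss))
```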
Module 20. Advanced Image Segmentation and Advanced Computer Vision Techniques
20.1. Database for General Segmentation Problems
20.1.1. Pascal Context
20.1.2. CelebAMask-HQ
20.1.3. Cityscapes Dataset
20.1.4. CCP Dataset
20.2. Semantic Segmentation in Medicine
20.2.1. Semantic Segmentation in Medicine
20.2.2. Datasets for Medical Problems
20.2.3. Practical Application
20.3. Annotation Tools
20.3.1. Computer Vision Annotation Tool
20.3.2. LabelMe
20.3.3. Other Tools
20.4. Segmentation Tools Using Different Frameworks
20.4.1. Keras
20.4.2. TensorFlow v2
20.4.3. PyTorch
20.4.4. Other
20.5. Semantic Segmentation Project. The Data, Phase 1
20.5.1. Problem Analysis
20.5.2. Input Source for Data
20.5.3. Data Analysis
20.5.4. Data Preparation
20.6. Semantic Segmentation Project. Training, Phase 2
20.6.1. Algorithm Selection
20.6.2. Training
20.6.3. Evaluation
20.7. Semantic Segmentation Project. Results, Phase 3
20.7.1. Fine Tuning
20.7.2. Presentation of the Solution
20.7.3. Conclusions
20.8. Autoencoders
20.8.1. Autoencoders
20.8.2. Autoencoder Architecture
20.8.3. Denoising Autoencoders
20.8.4. Automatic Colorization Autoencoder
20.9. Generative Adversarial Networks (GANs)
20.9.1. Generative Adversarial Networks (GANs)
20.9.2. DCGAN Architecture
20.9.3. Conditional GAN Architecture
20.10. Enhanced Generative Adversarial Networks
20.10.1. Overview of the Problem
20.10.2. WGAN
20.10.3. LSGAN
20.10.4. ACGAN

The true magic of Artificial Intelligence lies in how it transforms data into knowledge and knowledge into action”