University certificate
The world's largest artificial intelligence faculty”
Introduction to the Program
You will delve into Generative Adversarial Networks to generate the most realistic data thanks to this 100% online university degree”

Computer Vision is a field of Artificial Intelligence of great importance to most technology companies. This technology allows computers and systems to extract meaningful information from digital images, videos and other visual inputs. Among its many benefits are higher precision in manufacturing processes and the reduction of human error. These systems therefore help guarantee product quality while making it easier to resolve problems during production.
In view of this reality, TECH has developed a Professional Master's Degree that addresses Computer Vision in detail. Designed by experts in the field, the curriculum delves into 3D image processing and offers students the most advanced processing software to visualize the data. The syllabus also focuses on Deep Learning, given its relevance in dealing with large and complex data sets. This will allow graduates to enrich their usual work procedures with state-of-the-art algorithms and models. In addition, the teaching materials will cover a wide range of Computer Vision techniques using different frameworks (among them Keras, TensorFlow v2 and PyTorch).
As for the format of this university degree, it is based on a 100% online methodology. All that graduates need is an electronic device with Internet access (such as a computer, cell phone or tablet) to enter the Virtual Campus. There they will find a library full of multimedia resources with which to strengthen their knowledge in a dynamic way. It should be noted that TECH employs its innovative Relearning methodology in all its programs, which allows students to assimilate knowledge naturally, reinforced with audiovisual resources to ensure it is retained over time.
You will specialize in a key area of future technology that will immediately advance your career”
This Professional master’s degree in Computer Vision contains the most complete and up-to-date program on the market. The most important features include:
- The development of case studies presented by experts in computer science and computer vision
- Its graphic, schematic and practical contents provide scientific and practical information on the disciplines that are essential for professional practice
- Practical exercises where students can carry out a self-assessment process to improve their learning
- Its special emphasis on innovative methodologies
- Theoretical lessons, questions to the expert, debate forums on controversial topics, and individual reflection assignments
- Content that is accessible from any fixed or portable device with an Internet connection
Looking to specialize in Evaluation Metrics? Achieve it with this program in just 12 months"
The program’s teaching staff includes professionals from the sector who contribute their work experience to this educational program, as well as renowned specialists from leading societies and prestigious universities.
The multimedia content, developed with the latest educational technology, will provide the professional with situated and contextual learning, i.e., a simulated environment that delivers immersive training designed around real situations.
This program is designed around Problem-Based Learning, whereby professionals must try to solve the different professional practice situations that arise throughout the academic year. For this purpose, students will be assisted by an innovative interactive video system created by renowned and experienced experts.
You will effectively handle Deep Learning to solve the most complex problems"

You will have access to a learning system based on repetition, with natural and progressive teaching throughout the entire syllabus"
Syllabus
This program will provide students with a comprehensive overview of the state of the art in Artificial Intelligence. Consisting of 10 complete modules, the academic itinerary will address conventional vision algorithms and offer the latest advances in Deep Learning. The didactic materials will present the most advanced Computer Vision techniques so that students can incorporate them immediately into their professional practice. In addition, the syllabus will analyze Convolutional Networks in detail so that graduates can correctly classify the objects in images.

The program has no fixed schedule and the curriculum is available from day one. Set your own learning pace!"
Module 1. Computer Vision
1.1. Human Perception
1.1.1. Human Visual System
1.1.2. Color
1.1.3. Visible and Non-Visible Frequencies
1.2. Chronicle of Computer Vision
1.2.1. Principles
1.2.2. Evolution
1.2.3. The Importance of Computer Vision
1.3. Digital Image Composition
1.3.1. The Digital Image
1.3.2. Types of Images
1.3.3. Color Spaces
1.3.4. RGB
1.3.5. HSV and HSL
1.3.6. CMY-CMYK
1.3.7. YCbCr
1.3.8. Indexed Image
1.4. Image Acquisition Systems
1.4.1. Operation of a Digital Camera
1.4.2. The Correct Exposure for Each Situation
1.4.3. Depth of Field
1.4.4. Resolution
1.4.5. Image Formats
1.4.6. HDR Mode
1.4.7. High Resolution Cameras
1.4.8. High-Speed Cameras
1.5. Optical Systems
1.5.1. Optical Principles
1.5.2. Conventional Lenses
1.5.3. Telecentric Lenses
1.5.4. Types of Autofocus Lenses
1.5.5. Focal Length
1.5.6. Depth of Field
1.5.7. Optical Distortion
1.5.8. Calibration of an Image
1.6. Illumination Systems
1.6.1. Importance of Illumination
1.6.2. Frequency Response
1.6.3. LED Illumination
1.6.4. Outdoor Lighting
1.6.5. Types of Lighting for Industrial Applications. Effects
1.7. 3D Capture Systems
1.7.1. Stereo Vision
1.7.2. Triangulation
1.7.3. Structured Light
1.7.4. Time of Flight
1.7.5. Lidar
1.8. Multispectrum
1.8.1. Multispectral Cameras
1.8.2. Hyperspectral Cameras
1.9. Non-Visible Near Spectrum
1.9.1. IR Cameras
1.9.2. UV Cameras
1.9.3. Converting From Non-Visible to Visible by Illumination
1.10. Other Band Spectrums
1.10.1. X-Ray
1.10.2. Terahertz
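As a brief illustration of the color spaces listed in section 1.3, the sketch below converts an image between a few of them. It uses the open-source OpenCV library, which is one common choice but is not prescribed by the syllabus; the file name is a placeholder.

```python
# Minimal color-space sketch (illustrative only; OpenCV is an assumed library choice).
import cv2

# "sample.jpg" is a placeholder path; OpenCV loads color images in BGR order.
img_bgr = cv2.imread("sample.jpg")

# Conversions between the color spaces discussed in section 1.3.
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
img_ycbcr = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV uses YCrCb channel order

print(img_bgr.shape, img_hsv[0, 0], img_ycbcr[0, 0])
```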
Module 2. Applications and State-of-the-Art
2.1. Industrial Applications
2.1.1. Machine Vision Libraries
2.1.2. Compact Cameras
2.1.3. PC-Based Systems
2.1.4. Industrial Robotics
2.1.5. Pick and Place 2D
2.1.6. Bin Picking
2.1.7. Quality Control
2.1.8. Presence/Absence of Components
2.1.9. Dimensional Control
2.1.10. Labeling Control
2.1.11. Traceability
2.2. Autonomous Vehicles
2.2.1. Driver Assistance
2.2.2. Autonomous Driving
2.3. Computer Vision for Content Analysis
2.3.1. Filtering by Content
2.3.2. Visual Content Moderation
2.3.3. Tracking Systems
2.3.4. Brand and Logo Identification
2.3.5. Video Labeling and Classification
2.3.6. Scene Change Detection
2.3.7. Text or Credits Extraction
2.4. Medical Application
2.4.1. Disease Detection and Localization
2.4.2. Cancer and X-Ray Analysis
2.4.3. Advances in Computer Vision in Response to COVID-19
2.4.4. Assistance in the Operating Room
2.5. Spatial Applications
2.5.1. Satellite Image Analysis
2.5.2. Computer Vision for the Study of Space
2.5.3. Mission to Mars
2.6. Commercial Applications
2.6.1. Stock Control
2.6.2. Video Surveillance, Home Security
2.6.3. Parking Cameras
2.6.4. Population Control Cameras
2.6.5. Speed Cameras
2.7. Vision Applied to Robotics
2.7.1. Drones
2.7.2. AGV
2.7.3. Vision in Collaborative Robots
2.7.4. The Eyes of the Robots
2.8. Augmented Reality
2.8.1. Operation
2.8.2. Devices
2.8.3. Applications in the Industry
2.8.4. Commercial Applications
2.9. Cloud Computing
2.9.1. Cloud Computing Platforms
2.9.2. From Cloud Computing to Production
2.10. Research and State-of-the-Art
2.10.1. Commercial Applications
2.10.2. What's Cooking
2.10.3. The Future of Computer Vision
Module 3. Digital Image Processing
3.1. Computer Vision Development Environment
3.1.1. Computer Vision Libraries
3.1.2. Programming Environment
3.1.3. Visualization Tools
3.2. Digital Image Processing
3.2.1. Pixel Relationships
3.2.2. Image Operations
3.2.3. Geometric Transformations
3.3. Pixel Operations
3.3.1. Histogram
3.3.2. Histogram Transformations
3.3.3. Operations on Color Images
3.4. Logical and Arithmetic Operations
3.4.1. Addition and Subtraction
3.4.2. Product and Division
3.4.3. AND/NAND
3.4.4. OR/NOR
3.4.5. XOR/XNOR
3.5. Filters
3.5.1. Masks and Convolution
3.5.2. Linear Filtering
3.5.3. Non-Linear Filtering
3.5.4. Fourier Analysis
3.6. Morphological Operations
3.6.1. Erosion and Dilation
3.6.2. Closing and Opening
3.6.3. Top-Hat and Black-Hat
3.6.4. Contour Detection
3.6.5. Skeleton
3.6.6. Hole Filling
3.6.7. Convex Hull
3.7. Image Analysis Tools
3.7.1. Edge Detection
3.7.2. Detection of Blobs
3.7.3. Dimensional Control
3.7.4. Color Inspection
3.8. Object Segmentation
3.8.1. Image Segmentation
3.8.2. Classical Segmentation Techniques
3.8.3. Real Applications
3.9. Image Calibration
3.9.1. Image Calibration
3.9.2. Methods of Calibration
3.9.3. Calibration Process in a 2D Camera/Robot System
3.10. Image Processing in a Real Environment
3.10.1. Problem Analysis
3.10.2. Image Processing
3.10.3. Feature Extraction
3.10.4. Final Results
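To give a flavor of the classical processing chain outlined in this module (histograms, filtering, morphology and edge detection), here is a minimal sketch. OpenCV is used only as an illustrative library choice, and the file name and parameter values are placeholders.

```python
# Minimal sketch of the classical operations covered in Module 3 (illustrative).
import cv2
import numpy as np

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Pixel and filtering operations (3.3-3.5): histogram equalization and linear smoothing.
equalized = cv2.equalizeHist(gray)
smoothed = cv2.GaussianBlur(equalized, (5, 5), 0)

# Morphological opening (3.6) followed by edge detection (3.7).
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, kernel)
edges = cv2.Canny(opened, 50, 150)
```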
Module 4. Advanced Digital Image Processing
4.1. Optical Character Recognition (OCR)
4.1.1. Image Pre-Processing
4.1.2. Text Detection
4.1.3. Text Recognition
4.2. Code Reading
4.2.1. 1D Codes
4.2.2. 2D Codes
4.2.3. Applications
4.3. Pattern Search
4.3.1. Pattern Search
4.3.2. Patterns Based on Gray Level
4.3.3. Patterns Based on Contours
4.3.4. Patterns Based on Geometric Shapes
4.3.5. Other Techniques
4.4. Object Tracking with Conventional Vision
4.4.1. Background Extraction
4.4.2. Meanshift
4.4.3. Camshift
4.4.4. Optical Flow
4.5. Facial Recognition
4.5.1. Facial Landmark Detection
4.5.2. Applications
4.5.3. Facial Recognition
4.5.4. Emotion Recognition
4.6. Panoramic and Alignment
4.6.1. Stitching
4.6.2. Image Composition
4.6.3. Photomontage
4.7. High Dynamic Range (HDR) and Photometric Stereo
4.7.1. Increasing the Dynamic Range
4.7.2. Image Compositing for Contour Enhancement
4.7.3. Techniques for the Use of Dynamic Applications
4.8. Image Compression
4.8.1. Image Compression
4.8.2. Types of Compressors
4.8.3. Image Compression Techniques
4.9. Video Processing
4.9.1. Image Sequences
4.9.2. Video Formats and Codecs
4.9.3. Reading a Video
4.9.4. Frame Processing
4.10. Real Application of Image Processing
4.10.1. Problem Analysis
4.10.2. Image Processing
4.10.3. Feature Extraction
4.10.4. Final Results
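As an illustrative companion to the pattern-search topic (section 4.3), the following minimal sketch locates a template inside a larger image by normalized cross-correlation. The library choice (OpenCV) and the file names are assumptions, not part of the official program.

```python
# Illustrative pattern-search sketch using template matching.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # placeholder paths
pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)

# Slide the pattern over the scene and keep the best normalized correlation score.
result = cv2.matchTemplate(scene, pattern, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print(f"Best match at {max_loc} with score {max_val:.2f}")
```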
Module 5. 3D Image Processing
5.1. 3D Imaging
5.1.1. 3D Imaging
5.1.2. 3D Image Processing Software and Visualizations
5.1.3. Metrology Software
5.2. Open3D
5.2.1. Library for 3D Data Processing
5.2.2. Features
5.2.3. Installation and Use
5.3. The Data
5.3.1. Depth Maps in 2D Image
5.3.2. Pointclouds
5.3.3. Normals
5.3.4. Surfaces
5.4. Visualization
5.4.1. Data Visualization
5.4.2. Controls
5.4.3. Web Display
5.5. Filters
5.5.1. Distance Between Points, Eliminate Outliers
5.5.2. High Pass Filter
5.5.3. Downsampling
5.6. Geometry and Feature Extraction
5.6.1. Extraction of a Profile
5.6.2. Depth Measurement
5.6.3. Volume
5.6.4. 3D Geometric Shapes
5.6.5. Shots
5.6.6. Projection of a Point
5.6.7. Geometric Distances
5.6.8. Kd Tree
5.6.9. 3D Features
5.7. Registration and Meshing
5.7.1. Concatenation
5.7.2. ICP
5.7.3. RANSAC 3D
5.8. 3D Object Recognition
5.8.1. Searching for an Object in the 3D Scene
5.8.2. Segmentation
5.8.3. Bin Picking
5.9. Surface Analysis
5.9.1. Smoothing
5.9.2. Orientable Surfaces
5.9.3. Octree
5.10. Triangulation
5.10.1. From Mesh to Point Cloud
5.10.2. Depth Map Triangulation
5.10.3. Triangulation of Unordered Point Clouds
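Since the module explicitly introduces Open3D (section 5.2), a minimal sketch of a typical pipeline may help set expectations: loading a point cloud, downsampling it, removing outliers, estimating normals and visualizing the result. The file name and numeric parameters are placeholders.

```python
# Minimal Open3D sketch: load, filter and visualize a point cloud (illustrative values).
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # placeholder file name

# Downsampling and outlier removal (5.5).
pcd = pcd.voxel_down_sample(voxel_size=0.005)
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Normal estimation (5.3.3) and visualization (5.4).
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
o3d.visualization.draw_geometries([pcd])
```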
Module 6. Deep Learning
6.1. Artificial Intelligence
6.1.1. Machine Learning
6.1.2. Deep Learning
6.1.3. The Explosion of Deep Learning: Why Now?
6.2. Neural Networks
6.2.1. The Neural Network
6.2.2. Uses of Neural Networks
6.2.3. Linear Regression and Perceptron
6.2.4. Forward Propagation
6.2.5. Backpropagation
6.2.6. Feature Vectors
6.3. Loss Functions
6.3.1. Loss Functions
6.3.2. Types of Loss Functions
6.3.3. Choice of Loss Functions
6.4. Activation Functions
6.4.1. Activation Function
6.4.2. Linear Functions
6.4.3. Non-Linear Functions
6.4.4. Output vs. Hidden Layer Activation Functions
6.5. Regularization and Normalization
6.5.1. Regularization and Normalization
6.5.2. Overfitting and Data Augmentation
6.5.3. Regularization Methods: L1, L2 and Dropout
6.5.4. Normalization Methods: Batch, Weight, Layer
6.6. Optimization
6.6.1. Gradient Descent
6.6.2. Stochastic Gradient Descent
6.6.3. Mini Batch Gradient Descent
6.6.4. Momentum
6.6.5. Adam
6.7. Hyperparameter Tuning and Weights
6.7.1. Hyperparameters
6.7.2. Batch Size vs Learning Rate vs Step Decay
6.7.3. Weights
6.8. Evaluation Metrics of a Neural Network
6.8.1. Accuracy
6.8.2. Dice Coefficient
6.8.3. Sensitivity vs. Specificity / Recall vs. Precision
6.8.4. ROC Curve (AUC)
6.8.5. F1-Score
6.8.6. Confusion Matrix
6.8.7. Cross-Validation
6.9. Frameworks and Hardware
6.9.1. TensorFlow
6.9.2. PyTorch
6.9.3. Caffe
6.9.4. Keras
6.9.5. Hardware for the Training Phase
6.10. Creation of a Neural Network: Training and Validation
6.10.1. Dataset
6.10.2. Network Construction
6.10.3. Training
6.10.4. Visualization of Results
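The pieces listed in this module fit together in a few lines of code. Below is a minimal, non-authoritative Keras sketch that combines an activation function (6.4), Dropout regularization (6.5), the Adam optimizer (6.6), a loss function (6.3) and an accuracy metric (6.8); the random data and layer sizes are purely illustrative.

```python
# Minimal training-and-validation sketch with Keras (illustrative data and sizes).
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 20).astype("float32")                  # dummy feature vectors
y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")   # dummy binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),   # non-linear activation (6.4)
    tf.keras.layers.Dropout(0.3),                   # regularization (6.5)
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# Adam optimizer (6.6), binary cross-entropy loss (6.3), accuracy metric (6.8).
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2)
```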
Module 7. Convolutional Neural Networks and Image Classification
7.1. Convolutional Neural Networks
7.1.1. Introduction
7.1.2. Convolution
7.1.3. CNN Building Blocks
7.2. Types of CNN Layers
7.2.1. Convolutional
7.2.2. Activation
7.2.3. Batch Normalization
7.2.4. Pooling
7.2.5. Fully Connected
7.3. Metrics
7.3.1. Confusion Matrix
7.3.2. Accuracy
7.3.3. Precision
7.3.4. Recall
7.3.5. F1 Score
7.3.6. ROC Curve
7.3.7. AUC
7.4. Main Architectures
7.4.1. AlexNet
7.4.2. VGG
7.4.3. ResNet
7.4.4. GoogLeNet
7.5. Image Classification
7.5.1. Introduction
7.5.2. Analysis of Data
7.5.3. Data Preparation
7.5.4. Model Training
7.5.5. Model Validation
7.6. Practical Considerations for CNN Training
7.6.1. Optimizer Selection
7.6.2. Learning Rate Scheduler
7.6.3. Check Training Pipeline
7.6.4. Training with Regularization
7.7. Best Practices in Deep Learning
7.7.1. Transfer Learning
7.7.2. Fine Tuning
7.7.3. Data Augmentation
7.8. Statistical Data Evaluation
7.8.1. Number of Datasets
7.8.2. Number of Labels
7.8.3. Number of Images
7.8.4. Data Balancing
7.9. Deployment
7.9.1. Saving and Loading Models
7.9.2. Onnx
7.9.3. Inference
7.10. Case Study: Image Classification
7.10.1. Data Analysis and Preparation
7.10.2. Testing the Training Pipeline
7.10.3. Model Training
7.10.4. Model Validation
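As a rough sketch of the layer types listed in section 7.2 (convolution, batch normalization, activation, pooling and fully connected layers), the snippet below assembles a small classification CNN in Keras. The input resolution and the number of classes are illustrative assumptions.

```python
# Small classification CNN built from the layer types in section 7.2 (illustrative).
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 3))               # placeholder input size
x = tf.keras.layers.Conv2D(32, 3, padding="same")(inputs)  # convolution
x = tf.keras.layers.BatchNormalization()(x)                # batch normalization
x = tf.keras.layers.Activation("relu")(x)                  # activation
x = tf.keras.layers.MaxPooling2D()(x)                      # pooling
x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)  # 10 classes, assumed

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```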
Module 8. Object Detection
8.1. Object Detection and Tracking
8.1.1. Object Detection
8.1.2. Use Cases
8.1.3. Object Tracking
8.1.4. Use Cases
8.1.5. Occlusions, Rigid and Non-Rigid Poses
8.2. Assessment Metrics
8.2.1. IOU - Intersection Over Union
8.2.2. Confidence Score
8.2.3. Recall
8.2.4. Precision
8.2.5. Recall–Precision Curve
8.2.6. Mean Average Precision (mAP)
8.3. Traditional Methods
8.3.1. Sliding Window
8.3.2. Viola Detector
8.3.3. HOG
8.3.4. Non-Maximum Suppression (NMS)
8.4. Datasets
8.4.1. Pascal VOC
8.4.2. MS COCO
8.4.3. ImageNet (2014)
8.4.4. MOTA Challenge
8.5. Two Shot Object Detector
8.5.1. R-CNN
8.5.2. Fast R-CNN
8.5.3. Faster R-CNN
8.5.4. Mask R-CNN
8.6. Single Shot Object Detector
8.6.1. SSD
8.6.2. YOLO
8.6.3. RetinaNet
8.6.4. CenterNet
8.6.5. EfficientDet
8.7. Backbones
8.7.1. VGG
8.7.2. ResNet
8.7.3. MobileNet
8.7.4. ShuffleNet
8.7.5. Darknet
8.8. Object Tracking
8.8.1. Classical Approaches
8.8.2. Particle Filters
8.8.3. Kalman Filter
8.8.4. SORT Tracker
8.8.5. DeepSORT
8.9. Deployment
8.9.1. Computing Platform
8.9.2. Choice of Backbone
8.9.3. Choice of Framework
8.9.4. Model Optimization
8.9.5. Model Versioning
8.10. Case Study: People Detection and Tracking
8.10.1. Detection of People
8.10.2. Tracking of People
8.10.3. Re-Identification
8.10.4. Counting People in Crowds
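The evaluation metrics in section 8.2 can be made concrete with a few lines of code. The following sketch computes the Intersection over Union (8.2.1) between two axis-aligned boxes; the box coordinates are illustrative.

```python
# Intersection over Union (IoU) between two boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    # Intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    # Union = sum of the two areas minus the intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.1428..., illustrative boxes
```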
Module 9. Image Segmentation with Deep Learning
9.1. Object Detection and Segmentation
9.1.1. Semantic Segmentation
9.1.1.1. Semantic Segmentation Use Cases
9.1.2. Instance Segmentation
9.1.2.1. Instance Segmentation Use Cases
9.2. Evaluation Metrics
9.2.1. Similarities with Other Methods
9.2.2. Pixel Accuracy
9.2.3. Dice Coefficient (F1 Score)
9.3. Cost Functions
9.3.1. Dice Loss
9.3.2. Focal Loss
9.3.3. Tversky Loss
9.3.4. Other Functions
9.4. Traditional Segmentation Methods
9.4.1. Thresholding with Otsu and Ridler-Calvard
9.4.2. Self-Organizing Maps
9.4.3. GMM-EM Algorithm
9.5. Semantic Segmentation Applying Deep Learning: FCN
9.5.1. FCN
9.5.2. Architecture
9.5.3. FCN Applications
9.6. Semantic Segmentation Applying Deep Learning: U-Net
9.6.1. U-Net
9.6.2. Architecture
9.6.3. U-Net Application
9.7. Semantic Segmentation Applying Deep Learning: DeepLab
9.7.1. DeepLab
9.7.2. Architecture
9.7.3. DeepLab Application
9.8. Instance Segmentation Applying Deep Learning: Mask R-CNN
9.8.1. Mask R-CNN
9.8.2. Architecture
9.8.3. Application of Mask R-CNN
9.9. Video Segmentation
9.9.1. STFCN
9.9.2. Semantic Video CNNs
9.9.3. Clockwork Convnets
9.9.4. Low-Latency
9.10. Point Cloud Segmentation
9.10.1. The Point Cloud
9.10.2. PointNet
9.10.3. A-CNN
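Likewise, the Dice coefficient (9.2.3) and the Dice loss (9.3.1) are straightforward to express. The sketch below implements both for binary masks in NumPy; the random masks and the smoothing term are illustrative choices.

```python
# Dice coefficient and Dice loss for binary segmentation masks (illustrative data).
import numpy as np

def dice_coefficient(pred, target, smooth=1e-6):
    pred = pred.astype(np.float32).ravel()
    target = target.astype(np.float32).ravel()
    intersection = np.sum(pred * target)
    # Smoothing term avoids division by zero when both masks are empty.
    return (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)

def dice_loss(pred, target):
    return 1.0 - dice_coefficient(pred, target)

pred = np.random.randint(0, 2, size=(128, 128))    # dummy predicted mask
target = np.random.randint(0, 2, size=(128, 128))  # dummy ground-truth mask
print(dice_coefficient(pred, target), dice_loss(pred, target))
```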
Module 10. Advanced Image Segmentation and Advanced Computer Vision Techniques
10.1. Database for General Segmentation Problems
10.1.1. Pascal Context
10.1.2. CelebAMask-HQ
10.1.3. Cityscapes Dataset
10.1.4. CCP Dataset
10.2. Semantic Segmentation in Medicine
10.2.1. Semantic Segmentation in Medicine
10.2.2. Datasets for Medical Problems
10.2.3. Practical Applications
10.3. Annotation Tools
10.3.1. Computer Vision Annotation Tool
10.3.2. LabelMe
10.3.3. Other Tools
10.4. Segmentation Tools Using Different Frameworks
10.4.1. Keras
10.4.2. TensorFlow v2
10.4.3. PyTorch
10.4.4. Others
10.5. Semantic Segmentation Project. The Data, Phase 1
10.5.1. Problem Analysis
10.5.2. Input Source for Data
10.5.3. Data Analysis
10.5.4. Data Preparation
10.6. Semantic Segmentation Project. Training, Phase 2
10.6.1. Algorithm Selection
10.6.2. Training
10.6.3. Assessment
10.7. Semantic Segmentation Project. Results, Phase 3
10.7.1. Fine Tuning
10.7.2. Presentation of the Solution
10.7.3. Conclusions
10.8. Autoencoders
10.8.1. Autoencoders
10.8.2. Autoencoder Architecture
10.8.3. Denoising Autoencoders
10.8.4. Automatic Coloring Autoencoder
10.9. Generative Adversarial Networks (GANs)
10.9.1. Generative Adversarial Networks (GANs)
10.9.2. DCGAN Architecture
10.9.3. Conditional GAN Architecture
10.10. Enhanced Generative Adversarial Networks
10.10.1. Overview of the Problem
10.10.2. WGAN
10.10.3. LSGAN
10.10.4. ACGAN
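To close, a minimal sketch of the autoencoder architecture covered in section 10.8: an encoder compresses the input and a decoder reconstructs it. This is a simplified Keras example, not the program's reference implementation; the 784-dimensional input and layer sizes are assumptions.

```python
# Minimal autoencoder skeleton: encoder -> bottleneck -> decoder (illustrative sizes).
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))                             # e.g., a flattened 28x28 image
encoded = tf.keras.layers.Dense(64, activation="relu")(inputs)    # compressed representation
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
# For a denoising autoencoder (10.8.3), train on noisy inputs with clean targets, e.g.:
# autoencoder.fit(x_noisy, x_clean, epochs=10, batch_size=128)
```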

Make the most of this opportunity to surround yourself with expert professionals and learn from their work methodology”
Professional Master's Degree in Computer Vision
Welcome to TECH Global University's Professional Master's Degree in Computer Vision, an exceptional postgraduate program designed for professionals looking to delve deeper into the fundamentals and practical applications of artificial intelligence and emerging technology. Our institution prides itself on offering a cutting-edge educational approach, with online classes taught by experts in the field of Computer Vision. This program is carefully designed to provide students with a thorough understanding of the theoretical concepts as well as the practical skills needed to excel in an increasingly technological work environment.
Computer vision, as a discipline, drives innovation in a variety of sectors, from healthcare to manufacturing to automation. This Professional Master's Degree will immerse you in the key aspects of the discipline, addressing topics such as image processing, pattern recognition and computer vision algorithm development. Through applied projects and real-world case studies, students will have the opportunity to apply their knowledge in practical situations, preparing them for the challenges of the professional world.
Get qualified with the best in computer vision
At TECH Global University, we recognize the importance of flexibility in higher education. That's why our virtual campus allows students to access classes and study materials from anywhere, at any time. This flexibility ensures that working professionals can effectively balance their professional and educational responsibilities. Our distinguished faculty are experts in computer vision and technology, committed to guiding students on their educational journey. In addition, we encourage interaction and collaboration among students through virtual platforms, creating an online community that enriches the learning experience.
Upon successful completion of the Professional Master's Degree in Computer Vision, TECH Global University graduates will be prepared to lead the practical application of artificial intelligence in a variety of sectors. Join us and take your career to new heights by enrolling. Get ready to explore the infinite possibilities that artificial intelligence and technology have to offer.