Introduction to the Program

Master Virtual Reality and Computer Vision to stand out as a leader in this innovative field”

Virtual Reality and Computer Vision are revolutionizing key sectors such as entertainment, healthcare, industry, and research. Their impact on society, with innovative applications such as medical diagnostics, immersive video game creation, and industrial process automation, reinforces the need for highly trained experts. This Advanced Master's Degree, designed by TECH Global University, responds to this demand by developing professionals in the use of advanced technologies such as Digital Image Processing, Deep Learning, and Convolutional Networks, which are essential for leading innovation projects.

The program comprehensively covers the technical fundamentals, from the creation of 3D environments and character design to the implementation of advanced algorithms. In addition, students acquire skills in leading tools such as Unity 3D, ZBrush, and 3ds Max, mastering the design and programming of immersive solutions applicable in various sectors.

One of the main advantages of this program is its 100% online format, which allows students to balance their studies with work and personal responsibilities. Thanks to access to the Virtual Campus, participants have the flexibility to manage their learning independently and access updated content from any device connected to the Internet.

Drive success by applying Virtual Reality and Computer Vision in key sectors of the economy”

This Advanced Master's Degree in Virtual Reality and Computer Vision contains the most complete and up-to-date program on the market. Its most important features include:

  • Development of practical cases presented by experts in Virtual Reality and Computer Vision 
  • Graphic, schematic, and practical content that provides scientific and practical information on the disciplines essential for professional practice
  • Practical exercises where self-assessment can be used to improve learning
  • Special emphasis on innovative methodologies in the management of Virtual Reality and Computer Vision industries
  • Theoretical lessons, questions to the expert, debate forums on controversial topics, and individual reflection assignments
  • Content that is accessible from any fixed or portable device with an Internet connection

Consolidate your skills through practical case studies and interactive resources designed to apply advanced concepts in Virtual Reality and Computer Vision”

The teaching staff includes professionals from the field of Virtual Reality and Computer Vision, who bring their work experience to this program, as well as renowned specialists from leading companies and prestigious universities.

The multimedia content, developed with the latest educational technology, will provide the professional with situated and contextual learning, i.e., a simulated environment that will provide an immersive learning experience designed to prepare for real-life situations.

This program is designed around Problem-Based Learning, whereby the student must try to solve the different professional practice situations that arise throughout the program. For this purpose, the professional will be assisted by an innovative interactive video system created by renowned and experienced experts. 

Access the most innovative techniques thanks to a practical and up-to-date approach that integrates tools such as Deep Learning, image processing, and 3D modeling”

Study without restrictions with a 100% online program that allows you to learn from anywhere and adapt your pace to your daily needs”

Syllabus

The materials for this university program have been developed by experts in Artificial Intelligence and advanced technologies. The syllabus delves into areas such as Digital Image Processing, Deep Learning, and Convolutional Networks, offering a comprehensive approach to tackling the most complex challenges in these fields. In addition, the program includes specific modules on 3D design and modeling, video game creation, and immersive environment development.

You will learn to implement innovative solutions and develop high-impact interactive experiences, preparing you to lead technology projects in emerging industries”

Module 1. Computer Vision

1.1. Human Perception

1.1.1. Human Visual System
1.1.2. Color
1.1.3. Visible and Non-Visible Frequencies

1.2. Chronicle of Computer Vision

1.2.1. Principles
1.2.2. Evolution
1.2.3. The Importance of Computer Vision

1.3. Digital Image Composition

1.3.1. The Digital Image
1.3.2. Types of Images
1.3.3. Color Spaces
1.3.4. RGB
1.3.5. HSV and HSL
1.3.6. CMY-CMYK
1.3.7. YCbCr
1.3.8. Indexed Image
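
The color-space topics above can be illustrated with a short Python sketch (illustrative only, not part of the course materials; the formulas used are the standard Rec. 601 luma weights for grayscale and the usual RGB-to-HSV conversion):

```python
def rgb_to_gray(r, g, b):
    # Rec. 601 luma weights, a common grayscale conversion
    return 0.299 * r + 0.587 * g + 0.114 * b

def rgb_to_hsv(r, g, b):
    # r, g, b in [0, 1]; returns (hue in degrees, saturation, value)
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn
    if delta == 0:
        h = 0.0
    elif mx == r:
        h = 60 * (((g - b) / delta) % 6)
    elif mx == g:
        h = 60 * (((b - r) / delta) + 2)
    else:
        h = 60 * (((r - g) / delta) + 4)
    s = 0.0 if mx == 0 else delta / mx
    return h, s, mx

print(rgb_to_gray(255, 0, 0))     # pure red → 76.245
print(rgb_to_hsv(1.0, 0.0, 0.0))  # → (0.0, 1.0, 1.0)
```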

1.4. Image Acquisition Systems

1.4.1. Operation of a Digital Camera
1.4.2. The Correct Exposure for Each Situation
1.4.3. Depth of Field
1.4.4. Resolution
1.4.5. Image Formats
1.4.6. HDR Mode
1.4.7. High Resolution Cameras
1.4.8. High-Speed Cameras

1.5. Optical Systems

1.5.1. Optical Principles
1.5.2. Conventional Lenses
1.5.3. Telecentric Lenses
1.5.4. Types of Autofocus Lenses
1.5.5. Focal Length
1.5.6. Depth of Field
1.5.7. Optical Distortion
1.5.8. Calibration of an Image

1.6. Illumination Systems

1.6.1. Importance of Illumination
1.6.2. Frequency Response
1.6.3. LED Illumination
1.6.4. Outdoor Lighting
1.6.5. Types of Lighting for Industrial Applications. Effects

1.7. 3D Capture Systems

1.7.1. Stereo Vision
1.7.2. Triangulation
1.7.3. Structured Light
1.7.4. Time of Flight
1.7.5. LiDAR

1.8. Multispectrum

1.8.1. Multispectral Cameras
1.8.2. Hyperspectral Cameras

1.9. Non-Visible Near Spectrum

1.9.1. IR Cameras
1.9.2. UV Cameras
1.9.3. Converting From Non-Visible to Visible by Illumination

1.10. Other Band Spectrums

1.10.1. X-Ray
1.10.2. Terahertz

Module 2. Applications and State-of-the-Art

2.1. Industrial Applications

2.1.1. Machine Vision Libraries
2.1.2. Compact Cameras
2.1.3. PC-Based Systems
2.1.4. Industrial Robotics
2.1.5. Pick and Place 2D
2.1.6. Bin Picking
2.1.7. Quality Control
2.1.8. Presence/Absence of Components
2.1.9. Dimensional Control
2.1.10. Labeling Control
2.1.11. Traceability

2.2. Autonomous Vehicles

2.2.1. Driver Assistance
2.2.2. Autonomous Driving

2.3. Computer Vision for Content Analysis

2.3.1. Filtering by Content
2.3.2. Visual Content Moderation
2.3.3. Tracking Systems
2.3.4. Brand and Logo Identification
2.3.5. Video Labeling and Classification
2.3.6. Scene Change Detection
2.3.7. Text or Credits Extraction

2.4. Medical Application

2.4.1. Disease Detection and Localization
2.4.2. Cancer and X-Ray Analysis
2.4.3. Advances in Computer Vision given COVID-19
2.4.4. Assistance in the Operating Room

2.5. Spatial Applications

2.5.1. Satellite Image Analysis
2.5.2. Computer Vision for the Study of Space
2.5.3. Mission to Mars

2.6. Commercial Applications

2.6.1. Stock Control
2.6.2. Video Surveillance, Home Security
2.6.3. Parking Cameras
2.6.4. Population Control Cameras
2.6.5. Speed Cameras

2.7. Vision Applied to Robotics

2.7.1. Drones
2.7.2. AGV
2.7.3. Vision in Collaborative Robots
2.7.4. The Eyes of the Robots

2.8. Augmented Reality

2.8.1. How It Works
2.8.2. Devices
2.8.3. Applications in the Industry
2.8.4. Commercial Applications

2.9. Cloud Computing

2.9.1. Cloud Computing Platforms
2.9.2. From Cloud Computing to Production

2.10. Research and State-of-the-Art

2.10.1. Commercial Applications
2.10.2. What’s Cooking
2.10.3. The Future of Computer Vision

Module 3. Digital Image Processing

3.1. Computer Vision Development Environment

3.1.1. Computer Vision Libraries
3.1.2. Programming Environment
3.1.3. Visualization Tools

3.2. Digital Image Processing

3.2.1. Pixel Relationships
3.2.2. Image Operations
3.2.3. Geometric Transformations

3.3. Pixel Operations

3.3.1. Histogram
3.3.2. Histogram Transformations
3.3.3. Operations on Color Images
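
The histogram topics in this section can be sketched in plain Python. This is a simplified equalization that maps each gray level through the normalized cumulative histogram; real implementations usually also offset by the minimum CDF value:

```python
def histogram(pixels, levels=256):
    # Count occurrences of each gray level in a flattened image
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    return hist

def equalize(pixels, levels=256):
    # Histogram equalization: map each level through the normalized CDF
    hist = histogram(pixels, levels)
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(pixels)
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

image = [50, 50, 100, 100, 100, 200]
print(histogram(image)[100])  # → 3
print(equalize(image))        # spreads the levels across the full range
```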

3.4. Logical and Arithmetic Operations

3.4.1. Addition and Subtraction
3.4.2. Product and Division
3.4.3. AND/NAND
3.4.4. OR/NOR
3.4.5. XOR/XNOR
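
The arithmetic and logical operations listed above are applied pixel by pixel; a plain-Python sketch on 8-bit values (illustrative; note the saturation to the 0–255 range, which image arithmetic normally requires):

```python
def saturate(v, lo=0, hi=255):
    # Clamp the result to the valid 8-bit range
    return max(lo, min(hi, v))

def add(a, b):
    return [saturate(x + y) for x, y in zip(a, b)]

def subtract(a, b):
    return [saturate(x - y) for x, y in zip(a, b)]

def xor(a, b):
    # Bitwise XOR, often used to highlight differences between binary masks
    return [x ^ y for x, y in zip(a, b)]

img1 = [200, 10, 255]
img2 = [100, 30, 255]
print(add(img1, img2))       # → [255, 40, 255]  (saturated, not 300)
print(subtract(img1, img2))  # → [100, 0, 0]
print(xor([255, 0], [255, 255]))  # → [0, 255]
```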

3.5. Filters

3.5.1. Masks and Convolution
3.5.2. Linear Filtering
3.5.3. Non-Linear Filtering
3.5.4. Fourier Analysis
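
Mask-based filtering can be sketched as a sliding-window sum of products. This simplified plain-Python version computes a correlation, which is how most vision libraries apply masks; true convolution additionally flips the kernel by 180 degrees, and real implementations add padding and border handling:

```python
def filter2d(image, kernel):
    # Sliding-window correlation of a grayscale image with a mask
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for m in range(kh):
                for n in range(kw):
                    acc += image[i + m][j + n] * kernel[m][n]
            row.append(acc)
        out.append(row)
    return out

# Sobel mask for vertical edges, applied to an image with a left-to-right step
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
step = [[0, 0, 10, 10]] * 4
print(filter2d(step, sobel_x))  # → [[40, 40], [40, 40]]
```

The strong uniform response marks the vertical edge running through the image.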

3.6. Morphological Operations

3.6.1. Erosion and Dilation
3.6.2. Closing and Opening
3.6.3. Top Hat and Black Hat
3.6.4. Contour Detection
3.6.5. Skeleton
3.6.6. Hole Filling
3.6.7. Convex Hull
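
Erosion and dilation, the two building blocks of the morphological operations above, can be sketched in plain Python on binary images (a simplified version that ignores border handling, so the output shrinks by the mask radius):

```python
def erode(img, size=3):
    # Binary erosion: a pixel stays 1 only if its whole neighborhood is 1
    r = size // 2
    h, w = len(img), len(img[0])
    return [[1 if all(img[i + di][j + dj]
                      for di in range(-r, r + 1) for dj in range(-r, r + 1))
             else 0
             for j in range(r, w - r)]
            for i in range(r, h - r)]

def dilate(img, size=3):
    # Binary dilation: a pixel becomes 1 if any neighbor is 1
    r = size // 2
    h, w = len(img), len(img[0])
    return [[1 if any(img[i + di][j + dj]
                      for di in range(-r, r + 1) for dj in range(-r, r + 1))
             else 0
             for j in range(r, w - r)]
            for i in range(r, h - r)]

blob = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(erode(blob))   # → [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
print(dilate(blob))  # → [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

Opening and closing are simply these two operations chained in opposite orders.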

3.7. Image Analysis Tools

3.7.1. Edge Detection
3.7.2. Detection of Blobs
3.7.3. Dimensional Control
3.7.4. Color Inspection

3.8. Object Segmentation

3.8.1. Image Segmentation
3.8.2. Classical Segmentation Techniques
3.8.3. Real Applications

3.9. Image Calibration

3.9.1. Image Calibration
3.9.2. Methods of Calibration
3.9.3. Calibration Process in a 2D Camera/Robot System

3.10. Image Processing in a Real Environment

3.10.1. Problem Analysis
3.10.2. Image Processing
3.10.3. Feature Extraction
3.10.4. Final Results

Module 4. Advanced Digital Image Processing

4.1. Optical Character Recognition (OCR)

4.1.1. Image Pre-Processing
4.1.2. Text Detection
4.1.3. Text Recognition

4.2. Code Reading

4.2.1. 1D Codes
4.2.2. 2D Codes
4.2.3. Applications

4.3. Pattern Search

4.3.1. Pattern Search
4.3.2. Patterns Based on Gray Level
4.3.3. Patterns Based on Contours
4.3.4. Patterns Based on Geometric Shapes
4.3.5. Other Techniques

4.4. Object Tracking with Conventional Vision

4.4.1. Background Extraction
4.4.2. Meanshift
4.4.3. Camshift
4.4.4. Optical Flow

4.5. Facial Recognition

4.5.1. Facial Landmark Detection
4.5.2. Applications
4.5.3. Facial Recognition
4.5.4. Emotion Recognition

4.6. Panoramic and Alignment

4.6.1. Stitching
4.6.2. Image Composition
4.6.3. Photomontage

4.7. High Dynamic Range (HDR) and Photometric Stereo

4.7.1. Increasing the Dynamic Range
4.7.2. Image Compositing for Contour Enhancement
4.7.3. Techniques for the Use of Dynamic Applications

4.8. Image Compression

4.8.1. Image Compression
4.8.2. Types of Compressors
4.8.3. Image Compression Techniques

4.9. Video Processing

4.9.1. Image Sequences
4.9.2. Video Formats and Codecs
4.9.3. Reading a Video
4.9.4. Frame Processing

4.10. Real Application of Image Processing

4.10.1. Problem Analysis
4.10.2. Image Processing
4.10.3. Feature Extraction
4.10.4. Final Results

Module 5. 3D Image Processing

5.1. 3D Imaging

5.1.1. 3D Imaging
5.1.2. 3D Image Processing Software and Visualizations
5.1.3. Metrology Software

5.2. Open3D

5.2.1. Library for 3D Data Processing
5.2.2. Characteristics
5.2.3. Installation and Use

5.3. The Data

5.3.1. Depth Maps in 2D Image
5.3.2. Point Clouds
5.3.3. Normals
5.3.4. Surfaces

5.4. Visualization

5.4.1. Data Visualization
5.4.2. Controls
5.4.3. Web Display

5.5. Filters

5.5.1. Distance Between Points, Eliminate Outliers
5.5.2. High Pass Filter
5.5.3. Downsampling

5.6. Geometry and Feature Extraction

5.6.1. Extraction of a Profile
5.6.2. Depth Measurement
5.6.3. Volume
5.6.4. 3D Geometric Shapes
5.6.5. Shots
5.6.6. Projection of a Point
5.6.7. Geometric Distances
5.6.8. k-d Tree
5.6.9. 3D Features

5.7. Registration and Meshing

5.7.1. Concatenation
5.7.2. ICP
5.7.3. RANSAC 3D

5.8. 3D Object Recognition

5.8.1. Searching for an Object in the 3D Scene
5.8.2. Segmentation
5.8.3. Bin Picking

5.9. Surface Analysis

5.9.1. Smoothing
5.9.2. Orientable Surfaces
5.9.3. Octree

5.10. Triangulation

5.10.1. From Mesh to Point Cloud
5.10.2. Depth Map Triangulation
5.10.3. Triangulation of Unordered Point Clouds

Module 6. Deep Learning

6.1. Artificial Intelligence

6.1.1. Machine Learning
6.1.2. Deep Learning
6.1.3. The Explosion of Deep Learning: Why Now?

6.2. Neural Networks

6.2.1. The Neural Network
6.2.2. Uses of Neural Networks
6.2.3. Linear Regression and the Perceptron
6.2.4. Forward Propagation
6.2.5. Backpropagation
6.2.6. Feature Vectors

6.3. Loss Functions

6.3.1. Loss Functions
6.3.2. Types of Loss Functions
6.3.3. Choice of Loss Functions
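
Two of the most common loss functions can be sketched in a few lines of plain Python (illustrative; deep learning frameworks provide numerically stabler versions of both):

```python
import math

def mse(y_true, y_pred):
    # Mean squared error, the usual regression loss
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Log loss for binary classification; eps avoids log(0)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0], [1.0, 4.0]))  # → 2.0
print(round(binary_cross_entropy([1, 0], [0.9, 0.1]), 4))  # low loss for confident, correct predictions
```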

6.4. Activation Functions

6.4.1. Activation Function
6.4.2. Linear Functions
6.4.3. Non-Linear Functions
6.4.4. Output vs. Hidden Layer Activation Functions

6.5. Regularization and Normalization

6.5.1. Regularization and Normalization
6.5.2. Overfitting and Data Augmentation
6.5.3. Regularization Methods: L1, L2 and Dropout
6.5.4. Normalization Methods: Batch, Weight, Layer

6.6. Optimization

6.6.1. Gradient Descent
6.6.2. Stochastic Gradient Descent
6.6.3. Mini Batch Gradient Descent
6.6.4. Momentum
6.6.5. Adam
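
The update rules above share a common core. A plain-Python sketch of gradient descent with optional momentum (illustrative; Adam additionally keeps per-parameter second-moment estimates):

```python
def gd(grad, x0, lr=0.1, momentum=0.0, steps=100):
    # Gradient descent on a 1-D function; with momentum > 0 the update
    # accumulates a velocity term, damping oscillations
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(x)
        x += v
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is x = 3
grad = lambda x: 2 * (x - 3)
print(round(gd(grad, 0.0), 6))  # → 3.0
print(round(gd(grad, 0.0, momentum=0.9, steps=300), 6))  # also converges to 3
```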

6.7. Hyperparameter Tuning and Weights

6.7.1. Hyperparameters
6.7.2. Batch Size vs. Learning Rate vs. Step Decay
6.7.3. Weights

6.8. Evaluation Metrics of a Neural Network

6.8.1. Accuracy
6.8.2. Dice Coefficient
6.8.3. Sensitivity vs. Specificity / Recall vs. Precision
6.8.4. ROC Curve (AUC)
6.8.5. F1-Score
6.8.6. Confusion Matrix
6.8.7. Cross-Validation
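
The metrics above all derive from the four confusion-matrix counts. A plain-Python sketch (illustrative; it omits the zero-division guards a robust implementation needs):

```python
def classification_metrics(tp, fp, fn, tn):
    # Basic evaluation metrics derived from the confusion matrix counts
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# 20 samples: 8 true positives, 2 false positives, 2 false negatives, 8 true negatives
metrics = classification_metrics(tp=8, fp=2, fn=2, tn=8)
print([round(m, 3) for m in metrics])  # → [0.8, 0.8, 0.8, 0.8, 0.8]
```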

6.9. Frameworks and Hardware

6.9.1. TensorFlow
6.9.2. PyTorch
6.9.3. Caffe
6.9.4. Keras
6.9.5. Hardware for the Training Phase

6.10. Creation of a Neural Network – Training and Validation

6.10.1. Dataset
6.10.2. Network Construction
6.10.3. Training
6.10.4. Visualization of Results

Module 7. Convolutional Neural Networks and Image Classification

7.1. Convolutional Neural Networks

7.1.1. Introduction
7.1.2. Convolution
7.1.3. CNN Building Blocks

7.2. Types of CNN Layers

7.2.1. Convolutional
7.2.2. Activation
7.2.3. Batch Normalization
7.2.4. Pooling
7.2.5. Fully Connected
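
Of the layer types above, pooling is the simplest to show in code. A plain-Python sketch of 2×2 max pooling on a small feature map (illustrative only; frameworks implement this as a vectorized tensor operation):

```python
def max_pool2d(image, size=2, stride=2):
    # Max pooling: downsample by keeping the strongest activation per window
    out = []
    for i in range(0, len(image) - size + 1, stride):
        row = []
        for j in range(0, len(image[0]) - size + 1, stride):
            row.append(max(image[i + m][j + n]
                           for m in range(size) for n in range(size)))
        out.append(row)
    return out

fmap = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 6],
    [2, 2, 7, 8],
]
print(max_pool2d(fmap))  # → [[4, 2], [2, 8]]
```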

7.3. Metrics

7.3.1. Confusion Matrix
7.3.2. Accuracy
7.3.3. Precision
7.3.4. Recall
7.3.5. F1 Score
7.3.6. ROC Curve
7.3.7. AUC

7.4. Main Architectures

7.4.1. AlexNet
7.4.2. VGG
7.4.3. ResNet
7.4.4. GoogLeNet

7.5. Image Classification

7.5.1. Introduction
7.5.2. Analysis of Data
7.5.3. Data Preparation
7.5.4. Model Training
7.5.5. Model Validation

7.6. Practical Considerations for CNN Training

7.6.1. Optimizer Selection
7.6.2. Learning Rate Scheduler
7.6.3. Check Training Pipeline
7.6.4. Training with Regularization

7.7. Best Practices in Deep Learning

7.7.1. Transfer Learning
7.7.2. Fine Tuning
7.7.3. Data Augmentation

7.8. Statistical Data Evaluation

7.8.1. Number of Datasets
7.8.2. Number of Labels
7.8.3. Number of Images
7.8.4. Data Balancing

7.9. Deployment

7.9.1. Saving and Loading Models
7.9.2. ONNX
7.9.3. Inference

7.10. Case Study: Image Classification

7.10.1. Data Analysis and Preparation
7.10.2. Testing the Training Pipeline
7.10.3. Model Training
7.10.4. Model Validation

Module 8. Object Detection

8.1. Object Detection and Tracking

8.1.1. Object Detection
8.1.2. Case Studies
8.1.3. Object Tracking
8.1.4. Case Studies
8.1.5. Occlusions, Rigid and Non-Rigid Poses

8.2. Assessment Metrics

8.2.1. IoU (Intersection over Union)
8.2.2. Confidence Score
8.2.3. Recall
8.2.4. Precision
8.2.5. Recall-Precision Curve
8.2.6. Mean Average Precision (mAP)
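
IoU, the quantity underlying most of the metrics above, compares a predicted box against the ground truth. A plain-Python sketch using (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    # Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes → 1.0
```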

8.3. Traditional Methods

8.3.1. Sliding Window
8.3.2. Viola Detector
8.3.3. HOG
8.3.4. Non-Maximum Suppression (NMS)
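
Non-maximum suppression, the last technique listed, can be sketched greedily in plain Python (illustrative; production detectors use vectorized, class-aware variants):

```python
def nms(boxes, scores, iou_threshold=0.5):
    # Greedy NMS: keep the best-scoring box, then drop any remaining box
    # that overlaps it by more than the threshold; repeat
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: box 1 overlaps box 0 too much and is suppressed
```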

8.4. Datasets

8.4.1. Pascal VOC
8.4.2. MS COCO
8.4.3. ImageNet (2014)
8.4.4. MOT Challenge

8.5. Two Shot Object Detector

8.5.1. R-CNN
8.5.2. Fast R-CNN
8.5.3. Faster R-CNN
8.5.4. Mask R-CNN

8.6. Single Shot Object Detector

8.6.1. SSD
8.6.2. YOLO
8.6.3. RetinaNet
8.6.4. CenterNet
8.6.5. EfficientDet

8.7. Backbones

8.7.1. VGG
8.7.2. ResNet
8.7.3. MobileNet
8.7.4. ShuffleNet
8.7.5. Darknet

8.8. Object Tracking

8.8.1. Classical Approaches
8.8.2. Particle Filters
8.8.3. Kalman Filter
8.8.4. SORT Tracker
8.8.5. DeepSORT

8.9. Deployment

8.9.1. Computing Platform
8.9.2. Choice of Backbone
8.9.3. Choice of Framework
8.9.4. Model Optimization
8.9.5. Model Versioning

8.10. Study: People Detection and Tracking

8.10.1. Detection of People
8.10.2. Monitoring of People
8.10.3. Re-Identification
8.10.4. Counting People in Crowds

Module 9. Image Segmentation with Deep Learning

9.1. Object Detection and Segmentation

9.1.1. Semantic Segmentation

9.1.1.1. Semantic Segmentation Use Cases

9.1.2. Instance Segmentation

9.1.2.1. Instance Segmentation Use Cases

9.2. Evaluation Metrics

9.2.1. Similarities with Other Methods
9.2.2. Pixel Accuracy
9.2.3. Dice Coefficient (F1 Score)
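
The Dice coefficient named above can be sketched in plain Python over flattened binary masks (illustrative; as the heading notes, it is equivalent to the per-pixel F1 score):

```python
def dice(mask_a, mask_b):
    # Dice coefficient between two binary masks given as flat lists of 0/1
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

pred   = [1, 1, 0, 0]
target = [1, 0, 0, 0]
print(dice(pred, target))  # 2 * 1 / (2 + 1) ≈ 0.667
print(dice(target, target))  # a mask against itself → 1.0
```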

9.3. Cost Functions

9.3.1. Dice Loss
9.3.2. Focal Loss
9.3.3. Tversky Loss
9.3.4. Other Functions

9.4. Traditional Segmentation Methods

9.4.1. Threshold Application with Otsu and Ridler
9.4.2. Self-Organizing Maps
9.4.3. GMM-EM Algorithm

9.5. Semantic Segmentation Applying Deep Learning: FCN

9.5.1. FCN
9.5.2. Architecture
9.5.3. FCN Applications

9.6. Semantic Segmentation Applying Deep Learning: U-NET

9.6.1. U-NET
9.6.2. Architecture
9.6.3. U-NET Application

9.7. Semantic Segmentation Applying Deep Learning: DeepLab

9.7.1. DeepLab
9.7.2. Architecture
9.7.3. DeepLab Application

9.8. Instance Segmentation Applying Deep Learning: Mask R-CNN

9.8.1. Mask R-CNN
9.8.2. Architecture
9.8.3. Application of Mask R-CNN

9.9. Video Segmentation

9.9.1. STFCN
9.9.2. Semantic Video CNNs
9.9.3. Clockwork Convnets
9.9.4. Low-Latency

9.10. Point Cloud Segmentation

9.10.1. The Point Cloud
9.10.2. PointNet
9.10.3. A-CNN

Module 10. Advanced Image Segmentation and Advanced Computer Vision Techniques

10.1. Database for General Segmentation Problems

10.1.1. Pascal Context
10.1.2. CelebAMask-HQ
10.1.3. Cityscapes Dataset
10.1.4. CCP Dataset

10.2. Semantic Segmentation in Medicine

10.2.1. Semantic Segmentation in Medicine
10.2.2. Datasets for Medical Problems
10.2.3. Practical Applications

10.3. Annotation Tools

10.3.1. Computer Vision Annotation Tool
10.3.2. LabelMe
10.3.3. Other Tools

10.4. Segmentation Tools Using Different Frameworks

10.4.1. Keras
10.4.2. TensorFlow v2
10.4.3. PyTorch
10.4.4. Others

10.5. Semantic Segmentation Project. The Data, Phase 1

10.5.1. Problem Analysis
10.5.2. Input Source for Data
10.5.3. Data Analysis
10.5.4. Data Preparation

10.6. Semantic Segmentation Project. Training, Phase 2

10.6.1. Algorithm Selection
10.6.2. Training
10.6.3. Assessment

10.7. Semantic Segmentation Project. Results, Phase 3

10.7.1. Fine Tuning
10.7.2. Presentation of the Solution
10.7.3. Conclusions

10.8. Autoencoders

10.8.1. Autoencoders
10.8.2. Autoencoder Architecture
10.8.3. Noise Elimination Autoencoders
10.8.4. Automatic Coloring Autoencoder

10.9. Generative Adversarial Networks (GANs)

10.9.1. Generative Adversarial Networks (GANs)
10.9.2. DCGAN Architecture
10.9.3. Conditional GAN Architecture

10.10. Enhanced Generative Adversarial Networks

10.10.1. Overview of the Problem
10.10.2. WGAN
10.10.3. LSGAN
10.10.4. ACGAN

Module 11. The 3D Industry

11.1. 3D Industry in Animation and Video Games

11.1.1. 3D Animation
11.1.2. 3D Industry in Animation and Video Games
11.1.3. 3D Animation Future

11.2. 3D in Video Games

11.2.1. Video Games. Limitations
11.2.2. 3D Video Game Development. Difficulties
11.2.3. Solutions to Video Game Development Difficulties

11.3. 3D Software for Video Games

11.3.1. Maya. Pros and Cons
11.3.2. 3ds Max. Pros and Cons
11.3.3. Blender. Pros and Cons

11.4. Pipeline in 3D Asset Generation for Video Games

11.4.1. Idea and Assembly from a Modelsheet
11.4.2. Modeling with Low Geometry and High Detailing
11.4.3. Projection of Textured Details

11.5. Key Artistic 3D Styles for Video Games

11.5.1. Cartoon Style
11.5.2. Realistic Style
11.5.3. Cel Shading
11.5.4. Motion Capture

11.6. 3D Integration

11.6.1. 2D Digital World Integration
11.6.2. 3D Digital World Integration
11.6.3. Real-World Integration (AR, MR/XR)

11.7. Key 3D Factors for Different Industries

11.7.1. 3D in Film and Series
11.7.2. 3D in Video Games
11.7.3. 3D in Marketing

11.8. Rendering: Real-Time Rendering and Pre-Rendering

11.8.1. Lighting
11.8.2. Shadow Definition
11.8.3. Quality vs. Speed

11.9. 3D Asset Generation in 3ds Max

11.9.1. 3ds Max Software
11.9.2. Interface, Menus, Toolbars
11.9.3. Controls
11.9.4. Scene
11.9.5. Viewports
11.9.6. Basic Shapes
11.9.7. Object Generation, Modification and Transformation
11.9.8. 3D Scene Creation
11.9.9. 3D Professional Asset Modeling for Video Games
11.9.10. Material Editors

11.9.10.1. Creating and Editing Materials
11.9.10.2. Applying Light to Materials
11.9.10.3. UVW Map Modifier. Mapping Coordinates
11.9.10.4. Texture Creation

11.10. Workspace Organization and Best Practices

11.10.1. Creation of a Project
11.10.2. Folder Structure
11.10.3. Custom Functionality

Module 12. Art and 3D in the Video Game Industry

12.1. 3D VR Projects

12.1.1. 3D Mesh Creation Software
12.1.2. Image Editing Software
12.1.3. Virtual Reality

12.2. Typical Problems, Solutions and Project Needs

12.2.1. Project Needs
12.2.2. Possible Problems
12.2.3. Solutions

12.3. Aesthetic Line Study for the Artistic Style Generation in Video Games: From Game Design to 3D Art Generation

12.3.1. Choice of Video Game Recipient. Who We Want to Reach
12.3.2. Developer’s Artistic Possibilities
12.3.3. Final Definition of the Aesthetic Line

12.4. Aesthetic Benchmarking and Competitor Analysis

12.4.1. Pinterest and Similar Sites
12.4.2. Modelsheet Creation
12.4.3. Competitor Search

12.5. Bible Creation and Briefing

12.5.1. Bible Creation
12.5.2. Bible Development
12.5.3. Briefing Development

12.6. Scenarios and Assets

12.6.1. Production Asset Planning at Production Levels
12.6.2. Scenario Design
12.6.3. Asset Design

12.7. Asset Integration in Levels and Tests

12.7.1. Integration Process at All Levels
12.7.2. Texture
12.7.3. Final Touches

12.8. Characters

12.8.1. Character Production Planning
12.8.2. Character Design
12.8.3. Character Asset Design

12.9. Character Integration in Scenarios and Tests

12.9.1. Character Integration Process in Levels
12.9.2. Project Needs
12.9.3. Animations

12.10. 3D Video Game Audio

12.10.1. Project Dossier Interpretation for Sound Identity Generation of Video Games
12.10.2. Composition and Production Processes
12.10.3. Soundtrack Design
12.10.4. Sound Effect Design
12.10.5. Voice Design

Module 13. Advanced 3D

13.1. Advanced 3D Modeling Techniques

13.1.1. Interface Configuration
13.1.2. Modeling Observation
13.1.3. High-Poly Modeling
13.1.4. Organic Modeling for Video Games
13.1.5. Advanced 3D Object Mapping

13.2. Advanced 3D Texturing

13.2.1. Substance Painter Interfaces
13.2.2. Materials, Alphas and Brush Use
13.2.3. Particle Use

13.3. 3D Software and Unreal Engine Export

13.3.1. Unreal Engine Integration in Designs
13.3.2. 3D Model Integration
13.3.3. Unreal Engine Texture Application

13.4. Digital Sculpting

13.4.1. Digital Sculpting with ZBrush
13.4.2. First Steps in ZBrush
13.4.3. Interface, Menus and Navigation
13.4.4. Reference Images
13.4.5. Full 3D Modeling of Objects in ZBrush
13.4.6. Base Mesh Use
13.4.7. Part Modeling
13.4.8. 3D Model Export in ZBrush

13.5. Polypaint Use

13.5.1. Advanced Brushes
13.5.2. Texture
13.5.3. Default Materials

13.6. Retopology

13.6.1. Retopology. Use in the Video Game Industry
13.6.2. Low-Poly Mesh Creation
13.6.3. Software Use for Retopology

13.7. 3D Model Positions

13.7.1. Reference Image Viewers
13.7.2. Transpose Use
13.7.3. Transpose Use for Models Composed of Different Pieces

13.8. 3D Model Export

13.8.1. 3D Model Export
13.8.2. Texture Generation for Exportation
13.8.3. 3D Model Configuration with the Different Materials and Textures
13.8.4. Preview of the 3D Model

13.9. Advanced Working Techniques

13.9.1. 3D Modeling Workflow
13.9.2. 3D Modeling Work Process Organization
13.9.3. Production Effort Estimates

13.10. Model Finalization and Export for Other Programs

13.10.1. Workflow for Model Finalization
13.10.2. Export with ZPlugins
13.10.3. Possible Files. Advantages and Disadvantages

Module 14. 3D Animation

14.1. Software Operation

14.1.1. Information Management and Work Methodology
14.1.2. Animation
14.1.3. Timing and Weight
14.1.4. Animation with Basic Objects
14.1.5. Forward and Inverse Kinematics
14.1.6. Inverse Kinematics
14.1.7. Kinematic Chain

14.2. Anatomy. Biped vs. Quadruped

14.2.1. Biped
14.2.2. Quadruped
14.2.3. Walking Cycle
14.2.4. Running Cycle

14.3. Facial Rig and Morpher

14.3.1. Facial Language. Lip-Sync, Eyes and Focal Points
14.3.2. Sequence Editing
14.3.3. Phonetics. Importance

14.4. Applied Animation

14.4.1. 3D Animation for Film and Television
14.4.2. Animation for Video Games
14.4.3. Animation for Other Applications

14.5. Motion Capture with Kinect

14.5.1. Motion Capture for Animation
14.5.2. Sequence of Movements
14.5.3. Blender Integration

14.6. Skeleton, Skinning and Setup

14.6.1. Interaction Between Skeleton and Geometry
14.6.2. Mesh Interpolation
14.6.3. Animation Weights

14.7. Acting

14.7.1. Body Language
14.7.2. Poses
14.7.3. Sequence Editing

14.8. Cameras and Plans

14.8.1. The Camera and the Environment
14.8.2. Composition of the Shot and the Characters
14.8.3. Finishes

14.9. Visual Special Effects

14.9.1. Visual Effects and Animation
14.9.2. Types of Optical Effects
14.9.3. 3D VFX

14.10. The Animator as an Actor

14.10.1. Expressions
14.10.2. Actors’ References
14.10.3. From Camera to Program

Module 15. Unity 3D and Artificial Intelligence Proficiency

15.1. Video Games. Unity 3D

15.1.1. Video Games
15.1.2. Video Games. Errors and Hits
15.1.3. Video Game Applications in Other Areas and Industries

15.2. Video Game Development. Unity 3D

15.2.1. Production Plan and Development Phases
15.2.2. Development Methodology
15.2.3. Patches and Additional Content

15.3. Unity 3D

15.3.1. Unity 3D. Applications
15.3.2. Scripting in Unity 3D
15.3.3. Asset Store and Third-Party Plugins

15.4. Physics, Inputs

15.4.1. InputSystem
15.4.2. Physics in Unity 3D
15.4.3. Animation and Animator

15.5. Unity Prototyping

15.5.1. Blocking and Colliders
15.5.2. Pre-Fabs
15.5.3. Scriptable Objects

15.6. Specific Programming Techniques

15.6.1. Singleton Model
15.6.2. Loading of Resources in the Execution of Windows Games
15.6.3. Performance and Profiler

15.7. Video Games for Mobile Devices

15.7.1. Games for Android Devices
15.7.2. Games for iOS Devices
15.7.3. Multiplatform Developments

15.8. Augmented Reality

15.8.1. Types of Augmented Reality Games
15.8.2. ARKit and ARCore
15.8.3. Vuforia Development

15.9. Artificial Intelligence Programming

15.9.1. Artificial Intelligence Algorithms
15.9.2. Finite State Machines
15.9.3. Neural Networks

15.10. Distribution and Marketing

15.10.1. The Art of Publishing and Promoting a Video Game
15.10.2. Those Responsible for Success
15.10.3. Strategies

Module 16. 2D and 3D Video Game Development

16.1. Raster Graphic Resources

16.1.1. Sprites
16.1.2. Atlas
16.1.3. Texture

16.2. Interface and Menu Development

16.2.1. Unity GUI
16.2.2. Unity UI
16.2.3. UI Toolkit

16.3. Animation System

16.3.1. Animation Curves and Keys
16.3.2. Applied Animation Events
16.3.3. Modifiers

16.4. Materials and Shaders

16.4.1. Material Components
16.4.2. RenderPass Types
16.4.3. Shaders

16.5. Particles

16.5.1. Particle Systems
16.5.2. Transmitters and Sub-Transmitters
16.5.3. Scripting

16.6. Lighting

16.6.1. Lighting Modes
16.6.2. Light Baking
16.6.3. Light Probes

16.7. Mecanim

16.7.1. State Machines, SubState Machines and Transitions between Animations
16.7.2. Blend Trees
16.7.3. Animation Layers and IK

16.8. Cinematic Finish

16.8.1. Timeline
16.8.2. Post-Processing Effects
16.8.3. Universal Render and High-Definition Render Pipeline

16.9. Advanced VFX

16.9.1. VFX Graph
16.9.2. Shader Graph
16.9.3. Pipeline Tools

16.10. Audio Components

16.10.1. Audio Source and Audio Listener
16.10.2. Audio Mixer
16.10.3. Audio Spatializer

Module 17. Programming, Mechanics Generation and Video Game Prototyping Techniques

17.1. Technical Process

17.1.1. Low-Poly and High-Poly Unity Models
17.1.2. Material Settings
17.1.3. High-Definition Render Pipeline

17.2. Character Design

17.2.1. Movement
17.2.2. Collider Design
17.2.3. Creation and Behavior

17.3. Importing Skeletal Meshes into Unity

17.3.1. Exporting Skeletal Meshes from the 3D Software
17.3.2. Skeletal Meshes in Unity
17.3.3. Anchor Points for Accessories

17.4. Importing Animations

17.4.1. Animation Preparation
17.4.2. Importing Animations
17.4.3. Animator and Transitions

17.5. Animation Editor

17.5.1. Creating Blend Spaces
17.5.2. Creating Animation Montage
17.5.3. Editing Read-Only Animations

17.6. Ragdoll Creation and Simulation

17.6.1. Configuration of a Ragdoll
17.6.2. Ragdoll to Animation Graphics
17.6.3. Simulation of a Ragdoll

17.7. Resources for Character Creation

17.7.1. Libraries
17.7.2. Importing and Exporting Library Materials
17.7.3. Handling of Materials

17.8. Work Teams

17.8.1. Hierarchy and Work Roles
17.8.2. Version Control Systems
17.8.3. Conflict Resolution

17.9. Requirements for Successful Development

17.9.1. Production for Success
17.9.2. Optimal Development
17.9.3. Essential Requirements

17.10. Publication Packaging

17.10.1. Player Settings
17.10.2. Build
17.10.3. Installer Creation

Module 18. VR Immersive Game Development

18.1. Uniqueness of VR

18.1.1. Traditional Video Games and VR. Differences
18.1.2. Motion Sickness: Smoothness vs. Effects
18.1.3. Unique VR Interactions

18.2. Interaction

18.2.1. Events
18.2.2. Physical Triggers
18.2.3. Virtual vs. Real World

18.3. Immersive Locomotion

18.3.1. Teleportation
18.3.2. Arm Swinging
18.3.3. Forward Movement with and without Facing

18.4. VR Physics

18.4.1. Grippable and Throwable Objects
18.4.2. Weight and Mass in VR
18.4.3. Gravity in VR

18.5. UI in VR

18.5.1. Positioning and Curvature of UI Elements
18.5.2. VR Menu Interaction Modes
18.5.3. Best Practices for Comfortable Experiences

18.6. VR Animation

18.6.1. Animated Model Integration in VR
18.6.2. Animated Objects and Characters vs. Physical Objects
18.6.3. Animated vs. Procedural Transitions

18.7. Avatars

18.7.1. Avatar Representation from Your Own Eyes
18.7.2. External Representation of Avatars
18.7.3. Inverse Kinematics and Procedural Avatar Animation

18.8. Audio

18.8.1. Configuring Audio Sources and Audio Listeners for VR
18.8.2. Effects Available for More Immersive Experiences
18.8.3. VR Audio Spatializer

18.9. VR and AR Project Optimization

18.9.1. Occlusion Culling
18.9.2. Static Batching
18.9.3. Quality Settings and Render Pass Types

18.10. Practice: VR Escape Room

18.10.1. Experience Design
18.10.2. Scenario Layout
18.10.3. Mechanic Development

Module 19. Professional Audio for 3D VR Video Games

19.1. Professional 3D Video Games Audio

19.1.1. Video Game Audio
19.1.2. Audio Style Types in Current Video Games
19.1.3. Spatial Audio Models

19.2. Preliminary Material Study

19.2.1. Game Design Documentation Study
19.2.2. Level Design Documentation Study
19.2.3. Complexity and Typology Evaluation to Create Audio Projects

19.3. Sound Reference Study

19.3.1. Main References List by Similarity with the Project
19.3.2. Auditory References from Other Media to Give Video Games Identity
19.3.3. Reference Study and Drawing of Conclusions

19.4. Sound Identity Design for Video Games

19.4.1. Main Factors Influencing the Project
19.4.2. Relevant Aspects in Audio Composition: Instrumentation, Tempo, etc.
19.4.3. Voice Definition

19.5. Soundtrack Creation

19.5.1. Environment and Audio Lists
19.5.2. Definition of Motif, Themes and Instrumentation
19.5.3. Composition and Audio Testing of Functional Prototypes

19.6. Sound Effect Creation (FX)

19.6.1. Sound Effects: FX Types and Complete Lists According to Project Needs
19.6.2. Definition of Motif, Themes and Creation
19.6.3. Sound FX Evaluation and Functional Prototype Testing

19.7. Voice Creation

19.7.1. Voice Types and Phrase Listing
19.7.2. Search and Evaluation of Voice Actors and Actresses
19.7.3. Recording Evaluation and Testing of Voices on Functional Prototypes

19.8. Audio Quality Evaluation

19.8.1. Elaboration of Listening Sessions with the Development Team
19.8.2. All Audio Integration into Working Prototypes
19.8.3. Testing and Evaluation of the Results Obtained

19.9. Project Exporting, Formatting and Importing Audio

19.9.1. Video Game Audio Formats and Compression
19.9.2. Exporting Audio
19.9.3. Importing Project Audio

19.10. Preparing Audio Libraries for Marketing

19.10.1. Versatile Sound Library Design for Video Game Professionals
19.10.2. Audio Selection by Type: Soundtrack, FX and Voices
19.10.3. Commercialization of Audio Asset Libraries

Module 20. Video Game Production and Financing

20.1. Video Game Production

20.1.1. Waterfall Methodologies
20.1.2. Case Studies on Lack of Project Management and Work Plan
20.1.3. Consequences of the Lack of a Production Department in the Video Game Industry

20.2. Development Teams

20.2.1. Key Departments in Project Development
20.2.2. Key Profiles in Micromanagement: LEAD and SENIOR
20.2.3. Problems of Lack of Experience in JUNIOR Profiles
20.2.4. Establishment of a Training Plan for Low-Experience Profiles

20.3. Agile Methodologies in Video Game Development

20.3.1. SCRUM
20.3.2. AGILE
20.3.3. Hybrid Methodologies

20.4. Effort, Time and Cost Estimates

20.4.1. Video Game Development Costs: Main Concepts and Expenses
20.4.2. Task Scheduling: Critical Points, Keys and Aspects to Consider
20.4.3. Effort-Point-Based Estimates vs. Hour-Based Estimates

20.5. Prototype Planning Prioritization

20.5.1. Establishment of General Project Objectives
20.5.2. Prioritization of Key Functionalities and Contents: Order and Needs by Department
20.5.3. Grouping of Functionalities and Contents in Production to Constitute Deliverables (Functional Prototypes)

20.6. Best Practices in Video Game Production

20.6.1. Meetings, Dailies, Weekly Meetings, End of Sprint Meetings, and ALPHA, BETA and RELEASE Milestone Review Meetings
20.6.2. Sprint Speed Measurement
20.6.3. Detection of Lack of Motivation and Low Productivity, and Anticipation of Potential Production Problems

20.7. Production Analysis

20.7.1. Preliminary Analysis I: Market Status Review
20.7.2. Preliminary Analysis II: Establishment of Main Project References (Direct Competitors)
20.7.3. Conclusions from the Preliminary Analyses

20.8. Development Cost Calculation

20.8.1. Human Resources
20.8.2. Technology and Licensing
20.8.3. External Development Expenses

20.9. Investment Search

20.9.1. Types of Investors
20.9.2. Executive Summary
20.9.3. Pitch Deck
20.9.4. Publishers
20.9.5. Self-Financing

20.10. Project Post-Mortem Elaboration

20.10.1. Post-Mortem Elaboration Process in the Company
20.10.2. Positive Aspect Analysis of the Project
20.10.3. Negative Aspect Analysis of the Project
20.10.4. Improvement Proposal on the Project’s Negative Points and Conclusions

You will increase your knowledge through real cases and resolution of complex situations in simulated learning environments”

Advanced Master’s Degree in Virtual Reality and Computer Vision

In a world where technology is advancing by leaps and bounds, Virtual Reality and Computer Vision have become fundamental pillars for various industries, from entertainment to medicine. Understanding and mastering these technologies not only broadens job opportunities but also positions professionals at the forefront of innovation. In this context, the Advanced Master's Degree in Virtual Reality and Computer Vision is an exceptional option for those looking to delve into these disciplines. This program, offered by TECH Global University, is structured to provide comprehensive learning through online classes that allow the flexibility to study from anywhere in the world. The combination of theory and practice ensures that students acquire relevant skills applicable to real-world situations. Throughout the program, students will study the technical and conceptual foundations of Virtual Reality and Computer Vision, as well as their applications in sectors such as healthcare, education, marketing, and the entertainment industry.

Lead the world of Virtual Reality with this Advanced Master's Degree

Participants will also have the opportunity to work on innovative projects, allowing them to experience first-hand the impact of these technologies on digital transformation. In addition, they will have the support of a highly qualified teaching team, made up of experts in the field, who will guide them throughout the learning process. This interaction will not only enrich the learning experience but will also foster the development of a strong professional network. Upon completing the Advanced Master's Degree in Virtual Reality and Computer Vision, graduates will be equipped with the knowledge and skills needed to face the challenges of today's job market and to contribute to innovative projects that transform the way we interact with the digital environment. Seize the opportunity to be part of this technological revolution with TECH Global University. Enroll now.