Using real-time 3D labeled cinematics to generate computer vision datasets
I'm looking for students to create cinematics using Unreal Engine, Unity 3D, or Houdini to generate computer vision datasets.
A critical issue in deep learning for computer vision is access to high-quality data. Real-time 3D has the potential to create an unlimited amount of photorealistic data, and Amazon has agreed to support the storage of any data created from this project (even if that is petabytes of data).
The idea is to create a computer vision dataset using this approach, publish it at the 53rd International Simulation and Gaming Association (ISAGA) Conference at Northeastern, and establish this approach as a proof of concept for creating high-quality computer vision datasets.
Create cinematics with photorealistic shading
Create the same cinematics with flat shading
Validate that the image and video data are useful for computer vision tasks such as occlusion handling and semantic segmentation
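The pairing of the two render passes is what makes the data labeled: the photorealistic frame is the input image, and the flat-shaded frame, in which each object class is rendered in a single solid color, can be converted into a per-pixel class mask. A minimal sketch of that conversion, assuming a hypothetical color-to-class palette that would in practice come from the engine's material setup:

```python
# Sketch: turning a flat-shaded render into a semantic-segmentation label mask.
# The palette below is a hypothetical example; real class colors would be
# defined by the flat-shading materials assigned in the engine.
import numpy as np

PALETTE = {
    (255, 0, 0): 1,   # e.g. "vehicle"
    (0, 255, 0): 2,   # e.g. "vegetation"
    (0, 0, 0): 0,     # background
}

def flat_render_to_mask(img: np.ndarray) -> np.ndarray:
    """Map an H x W x 3 flat-shaded frame to an H x W class-index mask."""
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    for color, cls in PALETTE.items():
        # Select pixels whose RGB exactly matches this class color.
        mask[np.all(img == np.array(color, dtype=img.dtype), axis=-1)] = cls
    return mask

# Tiny synthetic frame: left half "vehicle" red, right half background black.
frame = np.zeros((2, 4, 3), dtype=np.uint8)
frame[:, :2] = (255, 0, 0)
mask = flat_render_to_mask(frame)
print(mask.tolist())  # [[1, 1, 0, 0], [1, 1, 0, 0]]
```

Exact color matching works here because flat shading, unlike photorealistic shading, produces no lighting variation within a class; if the engine applies any anti-aliasing or compression to the label pass, a nearest-color match would be needed instead.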
To see the current level of photorealism in game engines, see
Rebirth: Introducing photorealism in UE4 https://www.youtube.com/watch?v=9fC20NWhx4s