MPEG aims at standardizing coding solutions for the digital representation of light fields, which capture color and directional light information from any point in space. The objective is to support immersive applications - virtual and augmented reality - with the highest level of visual comfort, providing the parallax cues required in natural vision.
Since any digital capture subsamples the light field in which we are immersed, the various capture technologies (omnidirectional and plenoptic cameras, camera arrays, etc.) and display technologies (head-mounted devices, light field displays, integral photography displays, etc.) call for a wide range of coding technologies to be explored. Some will be selected for further standardization.
MPEG has already standardized 360° video - also called omnidirectional video - in OMAF version 1, and is in the process of standardizing 3DoF+ extensions in OMAF version 2 by mid-2020, bringing natural-vision parallax cues, albeit within a limited range of viewer motion. Full parallax of dynamic objects is supported in the first version of Point Cloud Coding, ready by early 2020, while coding of 6DoF virtual reality over large navigation volumes is being further studied for standardization by 2023.
This workshop will cover the MPEG-I immersive activities - past, present, and future - and calls on participants to present demos and future requirements to the MPEG community.
Title: Standard coding technologies for immersive audio-visual experiences
Date: 10 July, 2019
Address: MPEG meeting venue
Clarion Post Hotel
Drottningtorget 10, 411 03 Gothenburg, Sweden
1300-1315 | (Lu Yu, Zhejiang University)
1315-1345 | Use cases and challenges about user immersive experiences (Valerie Allie, InterDigital)
1345-1415 | (Marek Domanski, Poznan University of Technology)
1415-1445 | (Schuyler Quackenbush, Audio Research Labs)
1445-1455 | Brief introduction to the demos
1455-1530 | Demos & coffee break
1530-1600 | (Bart Kroon, Philips)
1600-1630 | (Marius Preda, Telecom SudParis, CNRS Samovar)
1630-1700 | How can we achieve 6DoF video compression? (Joel Jung, Orange)
1700-1730 | How can we achieve lenslet video compression? (Xin Jin, Tsinghua University; Mehrdad Teratani, Nagoya University)
1730-1800 | Discussion