Holograms, air vortexes and the future of VR
Why wear goggles when you can have a hologram? Developments in the capture, display and workflow of light fields to create VR experiences you can walk around in are gathering pace. Inavate rounds up developments in this cutting-edge field.
Just as we refer to old movies as being ‘silent’ or ‘black-and-white’, will we soon be calling today’s images ‘flat’? There are those who believe that true 3D or holographic video is on the threshold of a major breakthrough that will transform the future of media and communication.
“Our goal is to achieve what science fiction has promised us, and science has yet to deliver,” says Jon Karafin, CEO, Light Field Lab. “We are on the verge of making this happen and not just for entertainment – it will be ubiquitous.”
After all, why should digital creations remain trapped on 2D screens when everything we see, do and touch in the real world is three-dimensional?
The race is on to develop the hologram, and for many the next stepping stone is light fields.
As more and more people pile into the field, it’s important to nail down what we mean by ‘light field’ and ‘hologram’, since the terms are being abused by marketers.
A real hologram projects an accurate visual representation of an object in a 3D space. Pepper’s Ghost, autostereoscopic glasses-free 3D, LEDs mounted to spinning fan blades, AR or VR headsets all fall short of that definition.
We see objects in our environment thanks to all the rays of light that are bouncing all over the place within our field of view — something also known as a light field.
If you can capture a light field it should be possible to display it as a hologram. Karafin defines a hologram “as a projection of the encoding of the light field”.
The key is photorealistic 3D object rendering from every possible angle, including things like accurate light reflections. Combine that with positional tracking and you start to build the fabled holodeck of Star Trek lore.
As Google’s senior VR researcher Paul Debevec explains, “With light fields, nearby objects seem near to you—as you move your head, they appear to shift a lot. Far-away objects shift less and light reflects off objects differently, so you get a strong cue that you’re in a 3D space.”
Diving a little deeper: in real environments, head movement not only creates different perspectives of the scene, but light also reflects off surfaces in different ways as you move. Textures can change, and diffuse and specular reflections are altered too. These provide very subtle but extremely effective cues telling the brain that ‘this is real’.
Light field data is designed to capture or model images that simulate the real world.
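The standard way of modelling this data is the two-plane (4D) light field parameterisation familiar from the research literature: every ray is indexed by where it crosses a viewpoint plane and an image plane. A minimal sketch, with entirely illustrative dimensions and random data standing in for a real capture:

```python
import numpy as np

# A discretised 4D light field L(u, v, s, t): each (u, v) is a viewpoint
# on one plane, each (s, t) a pixel on a second (image) plane.
# All dimensions here are illustrative, not from any real capture rig.
U, V = 9, 9          # 9x9 grid of viewpoints
S, T = 64, 64        # 64x64 pixels per view
rng = np.random.default_rng(0)
light_field = rng.random((U, V, S, T, 3))  # RGB radiance per ray

# Reading off a ray is just indexing the 4D structure: the radiance
# leaving image-plane point (s, t) toward viewpoint (u, v).
ray_rgb = light_field[4, 4, 32, 32]
print(ray_rgb.shape)  # one RGB triple per ray
```

Because nearby viewpoints see the scene from slightly shifted positions, sliding the (u, v) index reproduces exactly the parallax and reflection cues Debevec describes.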
The likely first use cases for the technology will be for location-based entertainment and education, in theatres, museums or theme park attractions. Chris Chinnock, president of analyst Insight Media, also identifies use of light fields to fuel VR, AR and Mixed or Extended Reality applications viewable on smartphones or smart glasses like HoloLens or Magic Leap.
Ryan Damm, co-founder at holographic imaging software developer Visby, believes that light fields will follow VR's overall path, applied first to entertainment like gaming and cinematic applications before moving into corporate verticals from enterprise to education and design.
Other use cases include scientific and medical data 3D visualisation (image guided surgery and diagnostics), air control with 3D visualisation of air traffic; command and control systems of complex 3D environments as well as heads up display (HUD), gaming and augmented reality headsets.
Consumer electronics makers like Samsung or Sony may even be eyeing holographic displays as the next big development. Having reached 8K, beyond which it’s agreed that any further pixel increase is pointless since it can’t be seen by the human eye, vendors are already concentrating on paper-thin and foldable OLEDs. Holograms are a natural extension. But let’s not get ahead of ourselves.
Capturing light fields means ideally recording all the light traveling in all directions in a given volume of space.
This is variously termed plenoptic, in reference to the intensity or chromaticity of the light observed from every position, or volumetric in relation to the ability to capture depth information.
Either way, what is important for light fields is not that the resulting images are stitched into a 360-degree scene, but that they can be post-processed to achieve effects not possible with conventional recording.
A light field master represents an enormous amount of 3D volumetric data, describing almost every aspect of the scene. This allows parameters traditionally fixed by camera hardware, such as frame rate, shutter angle and depth of field, to be decided and rendered computationally in post.
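The classic example of deciding depth of field in post is shift-and-sum synthetic refocusing, the technique behind Lytro-style refocusable photos: each sub-view is shifted in proportion to its offset from the centre viewpoint and the views are averaged, placing the virtual focal plane wherever the shift factor dictates. A minimal sketch (integer-pixel shifts via `np.roll`; real pipelines use sub-pixel interpolation):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum synthetic refocus over a (U, V, S, T) light field.

    alpha sets the virtual focal plane: each sub-view is shifted in
    proportion to its offset from the centre viewpoint, then all the
    views are averaged. Scene points at the chosen depth align and stay
    sharp; everything else smears into defocus blur.
    """
    U, V, S, T = light_field.shape
    cu, cv = U // 2, V // 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Toy light field: 5x5 greyscale views of a 32x32 scene.
lf = np.random.default_rng(1).random((5, 5, 32, 32))
refocused = refocus(lf, alpha=1.5)
print(refocused.shape)  # (32, 32)
```

Re-running `refocus` with a different `alpha` pulls focus to a different depth from the same master, with no re-shoot needed.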
An early leader in this space was Lytro, which developed a consumer stills camera using a microlens array (dozens of individual lenses channelling light from the scene onto a single sensor), a VR camera (Immerge) and a Cinema Camera for video.
The company couldn’t however make the technology stick as a commercial proposition and it folded last March.
“Lytro’s collapse has people wondering about the potential of light field capture,” warns Chinnock.
The heavy lifting for LF capture development has fallen to Google, which acquired some of the assets and engineers from Lytro. It has been testing a system consisting of a modified GoPro Odyssey Jump VR camera, bending it into a vertical arc of 16 cameras mounted on a rotating platform.
This arc is then swept around a central axis, taking about a minute, to acquire roughly 1,000 images. This creates a volume of light field data about 2ft (60cm) in diameter. Google is also experimenting with a pair of DSLR cameras that spin in a spiral pattern.
“The latter is good for capturing static scenes and creates a navigable volume that is less than a meter all around,” according to Chinnock.
“Nevertheless, Google has used the rigs to capture some very compelling scenes of the space shuttle flight deck. It is clearly some of the best light field content available and it is now on the Steam VR site along with the SDK.”
Google also has a number of tricks to help improve the images inside the modest-resolution Vive headset.
The LF data set is rendered in realtime to create the stereo images needed for the headset at 90 fps. In development now are methods to extend the concept to capture video.
Development is not confined to the large pockets at Alphabet.
German start-up K-Lens has developed a lens that it claims can give any standard DSLR camera the attributes of light field capture.
Its patented technology is a tunnel of internal mirrors that, like a kaleidoscope, produces different perspectives of the same scene, which are then simultaneously projected onto a single camera sensor.
It’s still only a prototype, and for still images rather than video, allowing a photographer to adjust things like focus or blur in post-processing, but a commercial launch is pencilled in for this year.
The company, which emerged out of the Max Planck Institute for Informatics and Saarland University, is also researching a commercial light field camera targeting the professional film industry.
Volumetric capture is gaining traction in Hollywood as a means to record huge amounts of data for use in performance capture films like the Avatar sequels, or visual effects with the bullet time sequence from The Matrix still the most iconic example.
The technique videos a scene from arrays of simultaneously shooting cameras, effectively scanning actors and objects from all sides into data. Microsoft’s Holographic Capture studio has over 150 cameras directed at an 8ft-square platform surrounded by a green screen.
London’s Hammerhead VR has a similar set-up, licensed by Microsoft, using 106 cameras; New Zealand’s 8i has created holograms of ex-astronaut Buzz Aldrin and actor Jon Hamm.
For actors and directors this is more like theatre in the round than conventional filming. There’s no framing or any single camera to look at. Much of the current content is being produced for VR, AR and the cinematic sequences of AAA games.
Intel Studios has the world’s largest volumetric capture stage, a 10,000 sq ft dome with which Paramount is experimenting, while OTOY, which creates market-leading photorealistic rendering software, operates its own LightStage for digital capture of the human face.
From this captured imagery, proprietary algorithms create realistic virtual renditions of the subject “faithfully reproducing the colour, texture, shine, shading, and translucency of the subject’s skin” in any location or setting.
Provided sufficient information about an object or scene is captured as data, it can be encoded and decoded as a hologram.
The maths is not considered the hardest part of the equation; how to display it is.
The stated reason for ex-Lytro executive Jon Karafin’s decision to leave the company (a year before it folded) was that, in his discussions with Hollywood studios, the question time and again was not how to capture holograms but what they could be seen on. Solve the display and the capture takes care of itself.
Unlike current head-mounted VR, a true holographic display is one where the viewer won’t need to wear any diverting headgear, cabling or accessories. Ideally, they will have complete freedom of movement and be able to see and focus on an object no matter the angle at which it is viewed.
From a content perspective, there should no longer be a distinction between the real and the synthetic.
The sheer number of pixels needed for high fidelity is staggering, as are the GPU requirements. But several companies are working on these issues and there is progress on a number of fronts.
“The reality with true light field displays today is that they have limited depth, limited field of view (FOV) and limited resolution,” says Chinnock.
“Extending current technologies to get a compelling display will take a while, unless a totally new approach can be developed.”
A case in point is the smartphone display from camera maker RED. Its Hydrogen phone is touted as the “world's first holographic media machine” but its display, made by Menlo Park company Leia Inc, is considered more autostereoscopic than holographic.
It uses ‘Diffractive Lightfield Backlighting’ that displays different images and textures in different spatial directions to give the appearance of depth.
What is of more significance, perhaps, than this £1000 device is RED’s plan to develop a whole suite of cameras and monitors around the creation of 3D and mixed reality content. Since RED cameras are already used to shoot a wide variety of high-end film and TV content the company is well placed to take Netflix or Disney into the future.
Another start-up, Holochip Corp, is working on single-user and multi-user displays.
With smaller screens, smaller FOVs and a smaller radiance image resolution, single-user displays are less challenging. They can run on currently available GPU cards; examples include the Leia LF display in the Hydrogen phone and an LF HMD prototype from Nvidia.
Samuel Robinson, VP of engineering at Holochip, told Insight Media’s Display Summit that the company is working on a helicopter flight simulator where 3D depth perception is critical for landing.
Multi-user applications include themed entertainment, cinema and command tables.
Here, the FOV, screen size and radiance image all need to be much bigger. This requires new processing architectures to deliver data to the LF display.
Latvia’s LightSpace Technologies is developing bench-top display workstations based around what it calls ‘multi-plane time-multiplexed volumetric 3D image technology’, and a fast rear image projection.
The display offers omnidirectional views with all the major 3D depth cues (physical and psychological), including parallax.
These displays are aimed at visualisation of scientific and medical 3D data sets, security-related, tactical and traffic information (civil and military), architectural and environmental 3D designs.
Each workstation integrates a multi-GPU computer with the necessary input devices and means of 3D interactivity, and connects to external data sources over 10, 40 or 100 Gigabit Ethernet.
The Looking Glass from Looking Glass Factory is a patent-pending combination of light field and volumetric display technologies within a single three-dimensional display.
Forty-five unique simultaneous views of a virtual scene are captured at 60 frames per second and encoded into a video signal sent via HDMI to the display. The Looking Glass optics decode the signal into a colour ‘superstereoscopic’ scene. Its ‘holograms’ are a 5x9 grid of views of a 3D scene, where each view is from a slightly different angle.
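Packing the 45 views into one frame is straightforward tiling, a scheme often called a ‘quilt’. A minimal sketch, with an assumed per-view resolution that is purely illustrative:

```python
import numpy as np

# Sketch of quilt-style packing: 45 rendered views of a scene tiled
# into a 5-column x 9-row grid inside a single video frame, which can
# then travel over an ordinary HDMI link. View resolution is an
# illustrative assumption, not the device's actual spec.
views = np.random.default_rng(2).random((45, 90, 160, 3))  # 45 RGB views

cols, rows = 5, 9
h, w = views.shape[1], views.shape[2]
quilt = np.zeros((rows * h, cols * w, 3))
for i, view in enumerate(views):
    r, c = divmod(i, cols)
    quilt[r * h:(r + 1) * h, c * w:(c + 1) * w] = view

print(quilt.shape)  # one frame carrying all 45 views
```

The display’s optics then do the inverse: steering each tile of the grid out at its own angle so each eye position sees a different view.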
You can order an 8.9-in desktop version of the Looking Glass or a 15.6-inch unit for simulation, design and retail display built out of the firm’s Hong Kong lab.
There’s even a Vimeo channel with content created in game engine Unity for viewing back on the display.
“Conventional holograms are more analogous to photographs (static — they don’t move), [while our] technology is more analogous to movies (dynamic and alive),” claims the developer.
“While Looking Glass is technically a light field display with volumetric characteristics, it's the closest we've ever come to putting the holograms we know and love from Star Wars on our desks.”
That claim is hotly disputed at Light Field Lab where Karafin defines a true holographic display as one which “projects and converges rays of light such that a viewer’s eyes can freely focus on generated virtual images as if they were real objects.”
That’s what it is working on, with commercial products launching from 2020. Its prototype display is 4x6in and has 16K x 19K pixels, which are used to create light rays in many directions (exactly how it does this is not disclosed).
Its modular design means the blocks can be joined to create larger display walls or eventually entire rooms. A series of 18in displays combined into a video wall is said to be capable of projecting holograms tens of feet out. Its target customers are casinos and LBE vendors, followed by theatres, sports venues and experiential retail.
The company recently partnered with OTOY to develop a content and technology pipeline to turn the Holodeck into a reality.
“Light Field Lab claims this vision of the Holodeck is just a few years off,” says Chinnock. “I think that is way too optimistic. It’s more like 10 years. Their prototype display currently only produces a few inches of depth.”
There’s one fundamental problem to working with light fields. It’s the massive data payload.
Streaming a light field would require broadband speeds of 500Gbps up to 1Tbps – something not likely in the next 50 years.
Being able to work with so much data, let alone transmit it, requires serious compression.
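A back-of-envelope calculation shows why the numbers balloon so quickly. Even a modest multi-view feed runs to hundreds of gigabits per second before compression, and denser ray sampling pushes toward the 500Gbps–1Tbps figures quoted above. All the parameters below are illustrative assumptions:

```python
# Back-of-envelope bitrate for an uncompressed multi-view light field
# stream. Every figure here is an illustrative assumption, not a
# vendor specification.
views = 45                  # e.g. one view per display angle
width, height = 1920, 1080  # per-view resolution
bits_per_pixel = 24         # 8-bit RGB
fps = 60

raw_bps = views * width * height * bits_per_pixel * fps
print(f"{raw_bps / 1e9:.0f} Gbps uncompressed")
```

And that is only 45 discrete views of HD video; a display emitting thousands of distinct ray directions per pixel multiplies the total accordingly.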
Arguably this is the biggest problem facing LF development with many approaches to solving it being worked on in parallel.
A group at standards body MPEG is drafting a means of enabling the “interchange of content for authoring and rendering rich immersive experiences”.
It goes under the snappy title of ‘Hybrid Natural/Synthetic Scene’ (HNSS).
According to MPEG, HNSS should provide a means to support “scenes that obey the natural flows of light, energy propagation and physical kinematic operations”.
CableLabs, a think tank funded by the cable industry, along with OTOY and Light Field Lab, is contributing to MPEG’s work in this area.
The basis appears to be a file format called ORBX, originally developed by OTOY as a large ‘container’ to carry all kinds of graphics and special effects data, making it easy to interchange files between facilities.
“Previous raster-based solutions used in motion imaging formats are impractical for representing complex volumetric scenes,” explains Arianne Hinds, principal architect, CableLabs.
“Work is now underway to create a media format specification, optimised for interchanging light field images. This is based on scene graphs which contain information related to the logical, spatial, or temporal representation of visual and audio information.” It’s due to provide an update in early 2019.
JPEG Pleno is another standardisation initiative that addresses interoperability issues among light fields, point clouds and holography.
Zahir Alpaslan, director of display systems at Ostendo, who is working on this, told the Display Summit in October that point clouds remain immature and that terapixels of data will be needed to move to true holographic solutions.
Light Field Lab has its own vector-based video format that it says will make it possible to stream holographic content at 300Mbps over “next-generation broadband connections”, by which it means 5G connectivity.
MPEG is also developing Video-based Point Cloud Compression (V-PCC) with the goal of enabling avatars or holograms to exist as part of an immersive extended reality.
“One application of point clouds is to use it for representing humans, animals or other real-world objects or even complete scenes,” explains Ralf Schaefer, director of standards at Technicolor Corporate Research.
V-PCC is all about six degrees of freedom (6DoF) – fully immersive movement in three-dimensional space – the capability Hollywood studios believe will finally make virtual and blended reality take off.
Apple has the most extensive augmented reality ecosystem with an AR kit for developers to create experiences on iOS devices. Apple’s technology is reportedly the chief driver behind V-PCC.
The V-PCC specification is planned for publication by ISO late next year, so the first products could be in the market by 2020.
The current V-PCC test encoder can compress at 125:1, meaning that a point cloud of 1 million points could be encoded at 8 Mbit/s “with good perceptual quality” according to MPEG.
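Those two figures hang together arithmetically: a 125:1 ratio yielding 8 Mbit/s implies a raw stream of roughly 1 Gbit/s, which at an assumed 30 frames per second works out to a few tens of bits per point, consistent with simple per-point geometry plus colour. A quick sanity check (the frame rate is an assumption, not stated by MPEG here):

```python
# Sanity check on the quoted V-PCC numbers: what raw rate and
# per-point budget do 125:1 and 8 Mbit/s imply?
compressed_mbps = 8          # quoted compressed bitrate
ratio = 125                  # quoted compression ratio
raw_mbps = compressed_mbps * ratio   # implied raw bitrate in Mbit/s
points = 1_000_000           # quoted point count per frame
fps = 30                     # assumed frame rate (illustrative)

bits_per_point = raw_mbps * 1e6 / (points * fps)
print(f"raw ~{raw_mbps} Mbit/s, ~{bits_per_point:.0f} bits per point")
```

Roughly 33 bits per point leaves room for, say, 10-bit x/y/z geometry with a few bits over for attributes, so the headline ratio is plausible rather than marketing gloss.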
“Light field workflows are immature to say the least,” says Chinnock.
“There is a lot of work ongoing to try to develop some standards for an interchange format so each application and display is not a unique solution. We need new codecs and better distribution options with higher bandwidth.”
He adds, “There is a clear division in opinions as to how to represent next generation images. One camp sees the evolutionary path of better encoding of the traditional video signal paradigm. The other is more analogous to a gaming pipeline where everything is a 3D model with various levels of fidelity and realism. This debate will continue for some time.”
Volumetric haptics and holographic streaming
While this generation of practical holography has barely scratched the surface, research is already turning to touching virtual objects. Think Minority Report. The problem is how you interface with thin air.
One solution is to actually use air. MIT students have designed an ‘air-vortex’ to provide mid-air haptic feedback when a user touches virtual objects. The system tracks where a person’s hand is and pulses air around their fingers up to a metre’s distance.
Similar technology has been tested by the US military as a means of non-lethal crowd control. It could also be adapted for interface concepts where air pressure would feed back that a ‘button’ has been pressed.
Light Field Lab and others are also hinting at a future in which interacting with holograms becomes part of our communication.
For holographic displays to become holographic TV – viewed outside purpose-built environments, in the living room – one heck of a lot of processing is necessary.
For this, one might look to quantum computing, which Google, Alibaba, IBM and Intel are investigating. Instead of the binary 1s and 0s of classical computers, quantum machines use qubits (quantum bits), which can represent multiple states and are therefore able to store many times more information.
Combine that with banks of quantum systems running in the cloud, with transport over 5G, and realtime holographic video might, just might, happen.
Intel Labs reckons it will be five to seven years before the industry gets close to tackling the first engineering-scale problems, and it will likely require 1 million or more qubits to achieve commercial relevance.
With current quantum computers such as Alibaba’s capable of a mere 20 qubits, you begin to realise the magnitude of the task. We won’t be using a quantum PC on the desktop any time soon.