Remember the legendary Star Wars scene featuring a hologram of Princess Leia desperately calling for the help of Luke Skywalker and his comrades? That was holography at its best, even though the film was released in the 1970s! Though we still have a long way to go before we get to these types of applications, important steps are being taken today to make this vision a reality. We talked to Professor Peter Schelkens of ETRO, an imec research group at the VUB, who has contributed significantly to the search for the holy grail of holography as part of his recently completed ERC Consolidator Grant.
Holography: a brief state of affairs
The development of (Star Wars-like) holographic tabletop displays is undoubtedly the holy grail for anyone who is investigating the development of holographic solutions.
For one, they would offset some of the inherent limitations of the stereoscopic displays in use today. One issue that holographic displays would help overcome is the so-called vergence-accommodation conflict: the eyes rotate to converge on the depicted point (vergence), while the eye lens has to keep focusing on the fixed screen plane (accommodation), so the two depth cues disagree.
Because of these physical limitations, stereoscopic displays make for an uncomfortable viewing experience. Holographic displays solve this issue: they support a high depth resolution by reproducing a large number and range of viewing angles for each point in the scene, which creates a vivid and lifelike viewing experience.
However, a major obstacle to realizing such screens is the pixel size required of the spatial light modulators (SLMs) that make up the displays: the larger the maximum viewing angle that must be supported, the smaller the SLM pixels must be.
In other words: to develop a holographic tabletop display with a 180° viewing angle, very small pixels of 200 nm (half the shortest wavelength of visible light) are required. Building such a screen using traditional methods would result in a huge number of pixels (on the order of 10 terapixels). This, for now, remains a major stumbling block.
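To get a feel for these numbers, here is a minimal back-of-the-envelope sketch in Python. The 30 cm × 20 cm panel size is purely an assumption for illustration; the wavelength and pitch follow the reasoning above.

```python
import math

# Assumptions for illustration only: design for the shortest visible wavelength
# and an arbitrarily chosen 30 cm x 20 cm tabletop panel.
wavelength = 400e-9            # m, shortest visible (violet) light
pixel_pitch = wavelength / 2   # 200 nm, as quoted above

# Grating equation: the maximum diffraction half-angle obeys
# sin(theta) = wavelength / (2 * pitch).
theta = math.degrees(math.asin(wavelength / (2 * pixel_pitch)))
print(f"max diffraction half-angle: {theta:.0f} degrees")  # 90 -> a 180 degree viewing cone

width, height = 0.30, 0.20     # m, assumed panel dimensions
pixels = (width / pixel_pitch) * (height / pixel_pitch)
print(f"pixel count: {pixels:.1e}")                        # ~1.5e12, i.e. terapixel territory
```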
Hence, today we still project holograms using head-mounted displays, which only need to support a limited viewing angle and thus avoid having to radiate the holographic projection over 180°. Since the pupil of the eye can only move a few millimeters behind the glasses, a narrow range of viewing angles suffices. For such applications, existing SLMs with a pixel size of 4 to 10 µm are more than enough. Several labs have started developing such head-mounted display prototypes, built around 4K spatial light modulators; prototypes that will primarily find their way into new generations of augmented reality applications – with hyper-realistic overlays and perfect depth reproduction.
But there are additional challenges beyond the development of holographic displays. Generating and transmitting holograms with naive methods requires enormous amounts of computing power and energy, which poses a major challenge in itself.
Hence, one of the primary questions we should answer before holography can really take off relates to the efficient generation, coding and transmission of holographic data. To do so, new signal processing technology is required; technology that has been explored as part of a five-year European ERC Consolidator Grant.
Focus of the INTERFERE project: generating, compressing and objectively assessing holograms
It is important to understand that a hologram is the outcome of a wave-based light propagation process: coherent light scattered by the scene forms an interference pattern, which can be represented as a complex-valued image matrix.
The value of every pixel is the result of summing the light emitted by every point of the recorded 3D scene within the pixel's line of sight. Hence, the statistics and signal characteristics of holograms are completely different from those of regular images, where every pixel is the result of a ray of light arriving at the pixel (as opposed to a complex light wavefront). As a result, image processing algorithms designed for classical image content fail when operating on this data. Moreover, since small pixel pitches are required to support large diffraction angles (and, hence, large viewing angles), the data sets become huge – which puts additional pressure on computation, storage and transmission resources.
The focus of the ERC 'INTERFERE' project was mainly on:
- Efficiently calculating and representing holograms. Since a hologram is influenced by all elements of a 3D scene, support for highly realistic rendering and extremely high resolution is needed, along with efficient generation techniques and suitable representation models.
- Compressing holograms. Classical compression techniques remove visually less important information to reduce the data size, for instance by filtering out higher spatial frequencies, to which the human eye is less sensitive. In holographic images, however, the higher spatial frequencies also carry the information for the larger viewing angles. Hence, compression techniques need to be tuned to the specific signal properties of holograms.
- Assessing the quality of a hologram. To judge the quality of generated and decoded holograms, quality assessment procedures need to be defined. So far, little attention has been paid to this problem.
Calculating holograms over 2,500 times faster
One approach to generating holograms uses point clouds, which can be created by scanning a natural scene with a LiDAR scanner, for example. The simplest way to calculate a hologram from such a point cloud is to compute the light propagation of each point separately, all the way up to the hologram plane. However, this is a very computation-intensive exercise.
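To make the naive approach concrete, here is a small Python/NumPy sketch of the point-source summation it describes: every hologram pixel accumulates the spherical-wave contribution of every scene point. The scene, wavelength and pixel pitch are illustrative values, not the project's own code or data.

```python
import numpy as np

wavelength = 633e-9          # m, red laser light as an example
k = 2 * np.pi / wavelength
pitch = 8e-6                 # m, a typical SLM pixel pitch
N = 512                      # hologram of N x N pixels

# Hologram plane coordinates (the hologram sits at z = 0)
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)

# A toy point cloud: rows of (x, y, z, amplitude), z measured from the hologram plane
points = np.array([
    [0.0,     0.0,    0.05, 1.0],
    [0.3e-3,  0.2e-3, 0.06, 0.8],
    [-0.4e-3, 0.1e-3, 0.07, 0.6],
])

hologram = np.zeros((N, N), dtype=np.complex128)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)  # distance from point to each pixel
    hologram += amp * np.exp(1j * k * r) / r              # spherical wave contribution

# The cost grows as O(number of points x number of pixels), which is exactly
# why this brute-force approach becomes prohibitive for realistic scenes.
```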
INTERFERE’s method is much faster. It divides the point cloud into layers and calculates, for each sub-plane, a local hologram based on the points located within a certain distance of that plane. The resulting local hologram is then propagated to the next plane, where the points in that plane's vicinity are added, and so on. Since the sub-planes are very close to each other, light only propagates over very small distances, which leads to diffraction patches that are much smaller than the full hologram plane. Moreover, the method accounts for occlusions: points that are hidden behind other points are automatically removed.
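The sketch below illustrates the layered principle in simplified form: points are grouped into thin depth slabs, each slab's contribution is added locally, and the running wavefield is propagated over one short step to the next plane with the standard angular spectrum method. It deliberately omits the occlusion handling and the other refinements of the actual INTERFERE algorithm.

```python
import numpy as np

def angular_spectrum_propagate(field, pitch, wavelength, dz):
    """Propagate a complex wavefield over a short distance dz (standard textbook method)."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    kz_sq = k ** 2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))   # evanescent components are simply dropped here
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def layered_hologram(points, N, pitch, wavelength, layer_dz):
    """points: rows of (x, y, z, amplitude); the hologram plane sits at z = 0."""
    x = (np.arange(N) - N / 2) * pitch
    X, Y = np.meshgrid(x, x)
    k = 2 * np.pi / wavelength

    field = np.zeros((N, N), dtype=np.complex128)
    # Walk from the farthest layer towards the hologram plane.
    for z in np.arange(points[:, 2].max(), 0.0, -layer_dz):
        in_slab = np.abs(points[:, 2] - z) < layer_dz / 2
        for px, py, _, amp in points[in_slab]:
            # Add the point's contribution locally, as if it sits half a slab away.
            r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + (layer_dz / 2) ** 2)
            field += amp * np.exp(1j * k * r) / r
        # Propagate only one small step, so the diffraction patch stays small.
        field = angular_spectrum_propagate(field, pitch, wavelength, layer_dz)
    return field
```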
The result: quality-wise, the holograms are equivalent to those of the original approach, but the INTERFERE method is 2,500 times faster. In the meantime, the technique has been further improved and refined by also working in the frequency domain (using only the frequency components that dominate a given hologram), making it another 30 times faster!
Developing a new generation of image compression techniques
Image compression is a complex mechanism typically performed in a number of steps. This ERC project focused on the so-called 'transform', which converts the holographic data into a transform domain that decomposes the hologram into its principal building blocks according to the chosen representation.
A first technique that was refined uses a modified JPEG 2000 wavelet transform with a finer splitting of the higher frequency bands, which steers the transform more flexibly toward the dominant frequency orientations (instead of only horizontally and vertically). This leads to much greater gains (either better compression or higher quality, depending on what you want). For off-axis microscopy images, for instance, this technique achieves quality improvements of 11 to 12 dB – which is a lot.
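As a point of reference, the snippet below shows a plain, non-directional JPEG 2000-style wavelet decomposition (the CDF 9/7 filter, 'bior4.4' in the PyWavelets package) applied to the real and imaginary parts of a hologram. It is only a baseline to indicate where the transform sits in the codec; the finer subband splitting and directional adaptivity described above are not reproduced here.

```python
import numpy as np
import pywt

def wavelet_analysis(hologram, levels=3, wavelet="bior4.4"):
    """Decompose a complex-valued hologram into wavelet subbands, part by part."""
    coeffs_re = pywt.wavedec2(hologram.real, wavelet, level=levels)
    coeffs_im = pywt.wavedec2(hologram.imag, wavelet, level=levels)
    return coeffs_re, coeffs_im

def wavelet_synthesis(coeffs_re, coeffs_im, wavelet="bior4.4"):
    """Reconstruct the complex hologram from its subbands."""
    return pywt.waverec2(coeffs_re, wavelet) + 1j * pywt.waverec2(coeffs_im, wavelet)

# A codec would quantize and entropy-code the subbands next. For holograms the
# high-frequency subbands cannot simply be thrown away, since they carry the
# content that becomes visible from the larger viewing angles.
```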
Another transform developed specifically within INTERFERE targets the phase information of a hologram. Most SLMs only modulate the phase of the light wave, not its amplitude. The problem is that calculations on the phase of a light wave (which ranges between −π and +π) introduce a lot of errors. INTERFERE’s novel transformation method addresses this. For 8-bit images, for example, it demonstrates impressive compression gains of up to 1.3 bits per pixel. On top of that, the method is faster than classical approaches.
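A toy illustration of the underlying difficulty: phase values live on a circle, so naive arithmetic across the ±π boundary produces large artificial errors. The snippet only demonstrates the wrap-around problem and the standard fix of re-wrapping differences; it is not the transform developed in the project.

```python
import numpy as np

phase_a = np.array([ 3.0, -3.1, 0.5])   # phases of one hologram, in radians
phase_b = np.array([-3.0,  3.1, 0.4])   # phases of a nearly identical hologram

naive_diff = phase_a - phase_b          # 6.0 and -6.2 rad, although the waves barely differ

def wrap_to_pi(x):
    """Map any angle to the interval [-pi, pi)."""
    return (x + np.pi) % (2 * np.pi) - np.pi

print(naive_diff)               # [ 6.  -6.2  0.1]
print(wrap_to_pi(naive_diff))   # [-0.28  0.08  0.1], the physically meaningful differences
```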
Finally, dynamic holograms were investigated as well – holograms for which one should be able to compensate for movement. A novel technique developed by the INTERFERE researchers can describe all possible translations (forward, backward, left, right, up and down) and all possible rotations. The result: once the motion is known, a hologram can be manipulated in such a way that the next frame can be predicted precisely. The results achieved in this area represent a spectacular leap forward compared to traditional video codecs and have already been integrated in a full holographic video codec system.
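The wave nature of the signal is what makes such prediction possible. As a simplified illustration (not the project's codec), a rigid translation of the scene can be applied directly to the hologram in the Fourier domain: a lateral shift becomes a linear phase ramp (the Fourier shift theorem) and a shift in depth becomes the standard angular spectrum propagation kernel.

```python
import numpy as np

def translate_field(field, pitch, wavelength, dx=0.0, dy=0.0, dz=0.0):
    """Predict the hologram of the same scene after a rigid translation (dx, dy, dz)."""
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    spectrum = np.fft.fft2(field)

    # Lateral shift: a linear phase ramp in the frequency domain.
    spectrum *= np.exp(-2j * np.pi * (FX * dx + FY * dy))

    # Axial shift: angular spectrum propagation over a distance dz.
    k = 2 * np.pi / wavelength
    kz = np.sqrt(np.maximum(k ** 2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2, 0.0))
    spectrum *= np.exp(1j * kz * dz)

    return np.fft.ifft2(spectrum)

# In a predictive codec, the predicted frame would be subtracted from the actual
# next frame, so that only the (much smaller) residual needs to be encoded.
```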
Assessing the quality of holograms
Comparing holograms and assessing their quality is far from obvious. The lack of adequate metrics has been an important gap – and an inhibiting factor – in holographic research. That is why this aspect has been addressed as part of the INTERFERE project as well.
Quality assessments are either performed using objective metrics (i.e. measuring various properties of the hologram) or using subjective quality procedures that involve human test subjects scoring the perceived quality after rendering the holograms.
The latter is labor-intensive, since it requires a sufficiently large number of test subjects to be statistically relevant. To date, it remains the most trustworthy method to obtain an unbiased quality score. The INTERFERE team designed new test procedures and compared quality scores obtained on a holographic display with those of numerically reconstructed holograms rendered on light field and regular displays.
Since subjective methods are labor-intensive, fast automated measurements using objective metrics are often preferred. The performance of these metrics has been verified and validated via the subjective experiments described above. In addition, new metrics have been designed.
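For illustration, two generic objective scores that can be computed on holographic data are sketched below: a signal-to-noise ratio on the complex wavefield itself, and a PSNR on the reconstructed intensity image. These are standard baseline metrics, not the new metrics developed within INTERFERE.

```python
import numpy as np

def complex_snr_db(reference, test):
    """Signal-to-noise ratio computed directly on the complex wavefield, in dB."""
    err = reference - test
    return 10 * np.log10(np.sum(np.abs(reference) ** 2) / np.sum(np.abs(err) ** 2))

def intensity_psnr_db(reference, test):
    """PSNR on the reconstructed intensity |field|^2, in dB."""
    ref_i = np.abs(reference) ** 2
    test_i = np.abs(test) ** 2
    mse = np.mean((ref_i - test_i) ** 2)
    return 10 * np.log10(ref_i.max() ** 2 / mse)
```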
Currently, the JPEG committee is planning the launch of a holographic coding standard within the JPEG Pleno standard framework.
Want to know more?
- The ‘INTERFERE’ website
- The ‘JPEG Pleno’ website
- imec.magazine article ‘On Martians and telepresence’
- imec.magazine article ‘JPEG - from mediaeval paintings to holography’
- D. Blinder, A. Ahar, S. Bettens, T. Birnbaum, A. Symeonidou, H. Ottevaere, C. Schretter, and P. Schelkens, “Signal processing challenges for digital holographic video display systems,” Signal Processing: Image Communication, vol. 70, pp. 114-130, 2019, https://www.sciencedirect.com/science/article/pii/S0923596518304855 (impact factor 2.814, 27 citations; the most downloaded article of this journal since February 2019)
Peter Schelkens is a professor at the Department of Electronics and Informatics (ETRO), an imec research group at Vrije Universiteit Brussel (VUB, Belgium). His research interests are in the field of multidimensional signal processing with a strong focus on cross-disciplinary research. In 2014, Peter received an EU ERC Consolidator Grant for research in the domain of digital holography. Peter Schelkens is currently chairing the Coding, Test & Quality subgroup of the ISO/IEC JTC1/SC29/WG1 (JPEG) standardization committee. He is also involved in the coordination of the JPEG Pleno standardization activity targeted at light field, holography and point cloud technologies. Peter Schelkens holds an electrical engineering degree (MSc) in applied physics, a biomedical engineering degree (medical physics), and a PhD in applied sciences from the VUB.
Published on:
6 May 2020