https://doi.org/10.48742/fau.v4z9-0q44
Description
Hyperspectral imaging aims to sample the light spectrum for each pixel. To record these hyperspectral data cubes at high resolution in every dimension, snapshot hyperspectral cameras are necessary. Various sensing systems have been developed for this, e.g., camera arrays and color filter arrays. These snapshot hyperspectral cameras always require a reconstruction process to recover high-resolution images, since it is impossible to record a 3D data cube with a 2D grayscale sensor without exploiting the temporal dimension, which would sacrifice the snapshot capability. Therefore, ground-truth hyperspectral video is necessary to evaluate the performance of these snapshot cameras. For this purpose, a synthetic hyperspectral array video database was developed, which can be processed to simulate diverse hyperspectral snapshot cameras. Each of the seven scenes is rendered from a 3×3 camera array for 30 frames. The hyperspectral images are rendered by transforming a classical RGB renderer to render each wavelength independently as a grayscale image. This database can also be used for many other tasks that require high resolution in every dimension, for example spectral reconstruction, cross-spectral depth estimation, hyperspectral denoising, and hyperspectral video coding. For details, please refer to the HyViD.md file.
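As a rough illustration of how a ground-truth cube can be processed to simulate a snapshot camera, the sketch below integrates each pixel's spectrum through a 2×2 mosaic of spectral filters, producing the single 2D grayscale image such a sensor would record. The cube shape, band count, and Gaussian filter responses are illustrative assumptions, not the actual HyViD data format or any specific camera model.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, B = 64, 64, 31            # height, width, number of spectral bands (assumed)
cube = rng.random((H, W, B))    # stand-in for one ground-truth hyperspectral frame

# Four assumed Gaussian filter responses over the band index, one per mosaic cell.
bands = np.arange(B)
centers = [4, 12, 20, 28]       # assumed filter center bands
responses = np.stack(
    [np.exp(-0.5 * ((bands - c) / 3.0) ** 2) for c in centers]
)
responses /= responses.sum(axis=1, keepdims=True)   # normalize each filter

# Per-pixel measurement: integrate the spectrum through the local mosaic filter.
mosaic = np.empty((H, W))
for i in range(2):
    for j in range(2):
        f = responses[2 * i + j]
        mosaic[i::2, j::2] = cube[i::2, j::2] @ f   # spectral integration per pixel

# 'mosaic' is the 2D sensor image; a reconstruction algorithm would then
# recover the full (H, W, B) cube from it, which the ground truth can evaluate.
```

Reconstruction quality can then be measured by comparing the recovered cube against the original ground-truth cube, e.g., with PSNR per band.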
Authors
Dataset artifacts
- city.zip (7.7G)
- family_house.zip (8.8G)
- forest.zip (11G)
- framework.zip (258M)
- HyViD.md (4.5K)
- indoor.zip (6.7G)
- lab.zip (5.5G)
- medieval_seaport.zip (7.1G)
- outdoor.zip (12G)