Spectral Ray Tracing for Generation of Spatial Color Constancy Training Data
Yilmaz, Osman (2022)
Master's Programme in Computing Sciences
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
Approval date
2022-10-27
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202210197695
Abstract
Computational color constancy is a fundamental step in digital cameras that estimates the chromaticity of the illumination. Most automatic white balance (AWB) algorithms that perform computational color constancy assume that there is a single illuminant in the scene. This well-known assumption is frequently violated in the real world. It could be argued that the main reason for the single-illuminant assumption is the limited number of available mixed-illuminant datasets and the laborious annotation process. Annotating mixed-illuminant images is orders of magnitude more laborious than the single-illuminant case, due to the spatial complexity of providing pixel-wise ground truth illumination chromaticity across the varying ratios of the illuminants present.
Spectral ray tracing is a 3D rendering method that creates physically realistic images and animations using spectral representations of materials and light sources rather than a trichromatic representation such as red-green-blue (RGB). In this thesis, this physically correct image signal generation method is used to create a spatially varying mixed-illuminant image dataset with pixel-wise ground truth illumination chromaticity. In complex 3D scenes, materials are defined based on a database of real-world spectral reflectance measurements, and light sources are defined based on the spectral power distribution definitions released by the International Commission on Illumination (CIE). Rendering is done with the Blender Cycles rendering engine over the visible-spectrum wavelengths from 395 nm to 705 nm in equal 5 nm bins, resulting in a 63-channel full-spectrum image. The resulting full-spectrum images can be converted into the raw response of any camera as long as the spectral sensitivity of the camera module is known. This is a major advantage of spectral ray tracing, since color constancy is largely camera-module dependent. The pixel-wise white balance gain is calculated as the linear average of the illuminant chromaticities, weighted by each illuminant's contribution to the mixed-illuminant raw image. The raw image signal and the pixel-wise white balance gain are the fundamental components of a spatial color constancy dataset. This study implements an image generation pipeline that starts from the spectral definitions of illuminants and materials and ends with an sRGB image created from a 3D scene.
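The two conversions described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis implementation: the function names, the demo arrays, and the green-normalized gain convention are assumptions; the wavelength grid (395 to 705 nm in 5 nm bins, 63 channels), the integration of the spectral image against camera sensitivities, and the contribution-weighted linear average of illuminant chromaticities follow the description above.

```python
import numpy as np

# Wavelength grid as in the thesis: 395-705 nm in equal 5 nm bins -> 63 channels.
wavelengths = np.arange(395, 706, 5)
assert wavelengths.size == 63

def spectral_to_raw(spectral_img, camera_sensitivity):
    """Convert a full-spectrum image to a camera raw response.

    spectral_img: (H, W, 63) radiance per wavelength bin
    camera_sensitivity: (63, 3) R/G/B sensitivity per wavelength bin
    Returns the (H, W, 3) raw image, integrating (Riemann sum) over bins.
    """
    return spectral_img @ camera_sensitivity

def pixelwise_wb_gain(weights, illuminant_chroma):
    """Pixel-wise white balance gain from a mix of K illuminants.

    weights: (H, W, K) per-pixel contribution of each illuminant (sums to 1)
    illuminant_chroma: (K, 3) raw-space chromaticity of each illuminant
    The effective per-pixel illuminant is the contribution-weighted linear
    average; gains are normalized to the green channel (an assumed convention).
    """
    mixed = weights @ illuminant_chroma       # (H, W, 3) effective illuminant
    return mixed[..., 1:2] / mixed            # (H, W, 3) gains, G gain == 1

# Tiny demo with hypothetical values.
H, W = 2, 2
spectral = np.full((H, W, 63), 0.5)
sens = np.random.default_rng(0).uniform(0.0, 1.0, (63, 3))
raw = spectral_to_raw(spectral, sens)
print(raw.shape)  # (2, 2, 3)
```

Applying the gain image element-wise to the raw image (`raw * gains`) would then white-balance each pixel according to its local illuminant mixture, which is exactly the pixel-wise ground truth the dataset provides.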
Six different 3D Blender scenes are created, each with seven virtual cameras placed throughout the scene. In total, 406 single-illuminant and 1015 spatially varying mixed-illuminant images are created, together with their pixel-wise ground truth illumination chromaticity. The resulting dataset can be used to improve mixed-illumination color constancy algorithms and paves the way for further research and testing in the field.