PCA based Computation of Illumination-Invariant Space for Road Detection
by
Taeyoung Kim,
Yu-Wing Tai, and
Sung-Eui Yoon
IEEE Winter Conference on Applications of Computer Vision (WACV), 2017
This figure shows an ideal log color-ratio plot from the Macbeth color checker. Patches of the same chromaticity are mapped onto a dotted line. We compute an illumination-invariant space by identifying the chromaticity projection line, l (shown as the solid line), and projecting patches onto it.
Original RGB images (first row) and the illumination-invariant images computed by our PCA-based variance minimization (second row). Our method removes shadows reasonably well under visual inspection. Furthermore, our PCA approach improves road detection accuracy over the prior entropy-based method.
Visual comparison of recent CNN approaches with and without the proposed illumination-invariant (II) images. Detected road pixels are visualized in green. Incorporating II images improves the accuracy of the tested CNN methods.
Abstract
Illumination changes such as shadows significantly affect the accuracy of various road detection methods, especially vision-based approaches using an on-board monocular camera. To efficiently handle such illumination changes, we propose a PCA-based technique, PCA-II, that finds the minimum-variance projection space of an input RGB image and then uses that space as an illumination-invariant space for road detection. Our PCA-based method runs 20 times faster on average than the prior entropy-based method, while achieving higher detection accuracy.
To demonstrate its wide applicability to the road detection problem, we test the invariant space with both bottom-up and top-down approaches. For the bottom-up approach, we suggest a simple patch propagation method that exploits the properties of the invariant space, and show its higher accuracy over other state-of-the-art road detection methods running in a bottom-up manner. For the top-down approach, we use the space as an additional feature alongside the original RGB channels to train convolutional neural networks. We also observe robust performance improvements over the original CNN-based methods that do not use the space, with only a minor runtime overhead, e.g., 50 ms per image. These results demonstrate the benefits of our PCA-based illumination-invariant space computation.
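The core idea of finding a minimum-variance projection in a log color-ratio space can be sketched with plain NumPy. The following is a simplified illustration, not the released PCA-II implementation: the choice of green-channel normalization for the two log ratios and all function names are assumptions made for this sketch.

```python
import numpy as np

def pca_invariant_image(rgb):
    """Sketch of a PCA-based illumination-invariant projection.

    rgb: H x W x 3 float array with values in (0, 1].
    Returns a single-channel (H x W) invariant image.
    """
    eps = 1e-6  # avoid log(0)
    r = rgb[..., 0] + eps
    g = rgb[..., 1] + eps
    b = rgb[..., 2] + eps

    # 2-D log color-ratio space (normalizing by the green channel is
    # one common choice; the paper's exact ratio definition may differ)
    X = np.stack([np.log(r / g).ravel(), np.log(b / g).ravel()], axis=1)
    X -= X.mean(axis=0)

    # PCA on the 2x2 covariance: the eigenvector with the smallest
    # eigenvalue gives the minimum-variance projection direction
    cov = X.T @ X / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    d = eigvecs[:, 0]                       # minimum-variance direction

    return (X @ d).reshape(rgb.shape[:2])
```

Projecting every pixel onto the minimum-variance direction collapses brightness variation (e.g., shadow vs. lit pixels of the same surface) while preserving chromaticity differences, which is what makes the resulting 1-D image useful as a road detection feature.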
Contents
Paper: PDF (9.69MB)
PCA-II Source Code: ZIP file (4.04MB) / Github Page
Dept. of Computer Science
KAIST
373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701
South Korea
sglabkaist dot gmail dot com