Shading Annotations in the Wild
Balazs Kovacs, Sean Bell, Noah Snavely, Kavita Bala
Cornell University
goo.gl/EFebP9

1. Motivation
• Large-scale datasets fuel research progress: ImageNet, Places, SUN, NYUv2, MINC, ...
• Missing: a large-scale dataset of shading annotations
• Missing: a large-scale benchmark for the shading component of intrinsic images
• Shading Annotations in the Wild (SAW):
  • A new large-scale dataset of shading annotations in real-world images
  • A new deep-learning-based shading prediction method
  • A benchmark for the shading decomposition performance of intrinsic image algorithms

3. Data Collection
We identify three shading annotation types:
• Smooth/constant shading
• Shadow boundaries
• Depth/normal discontinuities

Shading comparisons
How do we collect shading annotations?
• Pilot study: ask people to compare shading at predetermined point pairs, similarly to [1]. The expected output is a judgment that the shading at one point is darker (<), equal (=), or lighter (>) than at the other. People are not good at this task.
• Idea 1: let people pick the point pairs themselves. We can collect < judgments this way, but people still fail on =. Instead, we generate and filter shadow boundaries between these point pairs.
• Idea 2: collect regions of constant shading, which yield smooth/non-smooth shading labels.
• We also automatically find depth/normal discontinuities from the depth maps of NYUv2 [2] (a sketch of this step appears after the Regions tasks below).

4. Annotations

Shadow boundaries
We use a two-step pipeline to obtain shading point comparisons. We then ask people to filter shadow boundary candidates that we automatically generate from these comparisons, as in the sketch below.
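The candidate-generation step can be pictured with a short sketch. This is a minimal illustration under assumptions, not the paper's exact pipeline: it takes a grayscale luminance array and one annotated point pair, and proposes the strongest intensity step along the segment between the two points as a shadow boundary candidate. The function name and sampling density are hypothetical.

import numpy as np

def shadow_boundary_candidate(luminance, p1, p2, num_samples=64):
    """Propose a shadow boundary candidate between an annotated point pair.

    Samples luminance along the segment p1 -> p2 (row, col) and returns
    the location of the largest intensity step. Illustrative sketch only;
    workers then filter the resulting candidates.
    """
    rows = np.linspace(p1[0], p2[0], num_samples)
    cols = np.linspace(p1[1], p2[1], num_samples)
    profile = luminance[rows.round().astype(int), cols.round().astype(int)]

    step = np.abs(np.diff(profile))      # intensity change between neighbors
    k = int(np.argmax(step))             # strongest step along the segment
    return (rows[k], cols[k]), float(step[k])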
Regions
We asked workers to draw polygons around regions of constant shading. Then we pass these regions through three filtering tasks to ensure high-quality annotations:
(a) Draw regions with constant shading
(b) Select flat/smooth regions that have one material
(c) Select glossy/transparent regions
(d) Select regions with shading variation
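Below is a minimal sketch of the automatic depth/normal discontinuity step referenced in Data Collection above. It assumes a metric depth map and unit surface normals (e.g., derived from NYUv2 [2]); the thresholds and the simple forward-difference comparison are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def depth_normal_discontinuities(depth, normals,
                                 rel_depth_thresh=0.05, angle_thresh_deg=30.0):
    """Flag pixels where depth or surface orientation changes sharply.

    depth:   (H, W) metric depth map
    normals: (H, W, 3) unit surface normals
    Threshold values here are illustrative, not the paper's.
    """
    # Relative depth jumps: gradient magnitude large compared to local depth.
    dr, dc = np.gradient(depth)
    depth_edges = np.hypot(dr, dc) > rel_depth_thresh * depth

    # Normal discontinuities: angle between horizontally adjacent normals.
    dot = np.clip((normals[:, :-1] * normals[:, 1:]).sum(axis=-1), -1.0, 1.0)
    angle = np.degrees(np.arccos(dot))
    normal_edges = np.zeros(depth.shape, dtype=bool)
    normal_edges[:, :-1] = angle > angle_thresh_deg

    return depth_edges | normal_edges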
6. Pixel Labels
We obtain final pixel labels from the MTurk annotations and the automatic depth/normal discontinuities:
• Green: smooth shading (MTurk)
• Cyan: shadow boundary (semi-automatic)
• Red: depth/normal discontinuity (automatic)
We collapse these into two classes for training (see the sketch below):
• Smooth shading: green
• Non-smooth shading: cyan + red
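A minimal sketch of this two-class label construction, assuming the three annotation types arrive as boolean masks; the label ids and function name are assumptions for illustration.

import numpy as np

# Assumed label ids: 0 = unlabeled/ignore, 1 = smooth, 2 = non-smooth.
def make_training_labels(smooth, shadow, discontinuity):
    labels = np.zeros(smooth.shape, dtype=np.uint8)  # 0: unlabeled pixels
    labels[smooth] = 1                               # green: smooth shading
    labels[shadow | discontinuity] = 2               # cyan + red: non-smooth
    return labels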
8. Shading Prior
• Fine-tune PixelNet [3] to predict smooth vs. non-smooth shading for each pixel
• Balance the classes with a 2:1:1 ratio
• Use the smooth-shading predictions as a prior in Retinex (see the sketch below)
• Promising initial results, but more research is needed to seamlessly incorporate the prior
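One way to picture how a smoothness prediction can act as a Retinex prior is to modulate the gradient-classification threshold: where the network is confident the shading is smooth, even small image gradients get attributed to reflectance rather than shading. This is a hedged sketch of that idea, not the paper's formulation; the modulation rule and base threshold are assumptions.

import numpy as np

def classify_gradients(log_luminance, p_smooth, base_thresh=0.10):
    """Retinex-style gradient classification with a smooth-shading prior.

    log_luminance: (H, W) log-intensity image
    p_smooth:      (H, W) predicted probability of smooth shading
    """
    gr, gc = np.gradient(log_luminance)
    grad_mag = np.hypot(gr, gc)

    # Lower the shading/reflectance threshold where shading is predicted
    # smooth, so those gradients are assigned to reflectance instead.
    thresh = base_thresh * (1.0 - p_smooth)
    is_reflectance = grad_mag > thresh
    return ~is_reflectance, is_reflectance   # (shading, reflectance) edge masks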
[Figure credit: [3].]
[Figure: original image; shading layer (Retinex); shading layer (Retinex with prior).]

9. Evaluation
To compare our smooth/non-smooth predictions to existing methods (which predict a full shading layer), we:
• Threshold the gradient of the predicted shading layer
• Compare the resulting 2-class labels against our pixel labels
A sketch of this protocol follows.
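This is a minimal sketch of the comparison protocol, assuming a predicted shading layer and our two-class ground truth; the gradient-magnitude threshold and the precision/recall summary are illustrative assumptions about the exact metric.

import numpy as np

def shading_layer_to_labels(shading, grad_thresh=0.05):
    """Convert a full shading layer into smooth/non-smooth pixel labels
    by thresholding its gradient magnitude (threshold is an assumption)."""
    gr, gc = np.gradient(shading)
    return np.hypot(gr, gc) <= grad_thresh   # True = smooth shading

def precision_recall(pred_smooth, gt_smooth, valid):
    """Precision/recall of the smooth class over annotated pixels only."""
    tp = np.sum(pred_smooth & gt_smooth & valid)
    precision = tp / max(np.sum(pred_smooth & valid), 1)
    recall = tp / max(np.sum(gt_smooth & valid), 1)
    return precision, recall

Sweeping grad_thresh over a range traces out a precision-recall curve for each method.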
[Figure: smooth shading heatmaps.]
• Our method achieves competitive results
• Future work:
  • A new shading benchmark for intrinsic images that combines reflectance and shading
  • Improved fully convolutional training

References
[1] S. Bell, K. Bala, and N. Snavely. Intrinsic images in the wild. SIGGRAPH 2014.
[2] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. ECCV 2012.
[3] A. Bansal, B. Russell, and A. Gupta. Marr Revisited: 2D-3D model alignment via surface normal prediction. CVPR 2016.

This work was supported by the National Science Foundation (grants IIS-1617861, IIS-1011919, IIS-1161645, IIS-1149393) and by a Google Faculty Research Award.