The 2nd Workshop on 3D Reconstruction in the Wild (3DRW2019)
in conjunction with ICCV2019
(October 28, 2019)

What's New

  • Aug. 18: Notification released.
  • Aug. 09: Invited speakers page updated.
  • Jul. 29: Invited speakers page updated.
  • Jul. 29: Submission deadline extended to Aug. 03 (23:59 Pacific Time).
  • Jun. 12: The paper submission site is open.
  • May 17: This site opened.

Call for Papers

Research on 3D reconstruction has long focused on recovering 3D information from multi-view images captured under ideal conditions. However, the assumption of ideal acquisition conditions severely limits where reconstruction systems can be deployed: typically, several external factors must be controlled, intrusive capture devices must be used, or complex hardware setups must be operated to acquire image data suitable for 3D reconstruction. In contrast, 3D reconstruction in unconstrained settings (referred to as 3D reconstruction in the wild) imposes few or no restrictions on the data acquisition procedure or the capture environment, and therefore represents a far more challenging task.

The goal of this workshop is to foster the development of 3D reconstruction techniques that operate in unconstrained conditions, are robust and/or real-time, and perform well across environments with widely different characteristics. Towards this goal, we are interested in all stages of the 3D reconstruction pipeline, from multi-camera calibration, feature extraction, matching, data fusion, depth estimation, and meshing to 3D modeling approaches capable of operating on image data captured in the wild. Topics of interest include, but are not limited to:
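
As a concrete illustration of the multi-view setting described above (for orientation only, not part of the call itself), the sketch below triangulates a single 3D point from two calibrated views using the standard linear (DLT) method. The camera intrinsics, poses, and the point are synthetic values chosen for the example, not taken from any workshop dataset.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image points (pixels) in each view.
    Returns the 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic two-camera setup: identical intrinsics, second camera
# shifted by 1 unit along x (all values are illustrative).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.5, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)  # recovers X_true up to numerical precision
```

In the noise-free case the estimate matches the true point exactly; with real image data, the same linear system is typically solved for many points and then refined by bundle adjustment.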

  • multi-view geometry
  • underwater camera calibration, refraction, and lighting/camera configuration
  • feature extraction from images captured in bad weather
  • feature extraction from backlit images
  • tracking in snow
  • distorted image matching
  • structure from super-wide-baseline images
  • structure from remote sensing images
  • structure-from-motion and visual odometry
  • depth from incomplete data
  • 3D from hand-held cameras
  • 3D from images captured by underwater cameras
  • 3D from images captured using drones
  • 3D from unordered image sequences/collections
  • 3D from depth image sequences
  • 3D from data (deep learning approach)
  • fusion for heterogeneous images
  • fusion for unreliable depth maps/sequences
  • mesh generation
  • mesh interpolation for deforming objects
  • reconstruction of thin objects
  • reconstruction in sports
  • reconstruction of planets
  • mapping, localization and SLAM
  • autonomous navigation
  • 3D for agriculture, bio-imaging, and physics
  • phenotyping
  • benchmark datasets for challenging scenarios