What you are about to see is an interactive method for modeling 3D man-made objects by extracting them from a single flat photograph. This method could be a major breakthrough in graphic design, combining the cognitive abilities of humans with the computational accuracy of machines. The technique, developed by Tao Chen, Zhe Zhu, Ariel Shamir, Shi-Min Hu, and Daniel Cohen-Or, was demoed at the 2013 SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) Asia conference and offers a glimpse into the future of photo manipulation.
Throughout the demonstration video above, users quickly extract 3D objects from ordinary 2D photos using a "3-sweep" method: two strokes define the object's profile, and a third stroke traces its main axis. Depending on the complexity of the image, some parts may need to be outlined individually, but beyond that the software consistently extracts editable, portable 3D models straight from 2D images with the help of a "patch-match" algorithm.
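To get an intuition for what those three strokes produce, here is a minimal sketch (not the authors' implementation) of the underlying geometry: two strokes fix a circular cross-section, a third traces the main axis, and sweeping the profile along that axis yields a generalized cylinder. The function name, sampling scheme, and per-slice radii below are illustrative assumptions; a real system would also re-fit each ring to edges detected in the photo.

```python
import numpy as np

def sweep_profile(axis_points, radii, n_sides=16):
    """Build generalized-cylinder vertices: one ring of `n_sides` points
    around each axis sample, scaled by that sample's radius."""
    axis_points = np.asarray(axis_points, dtype=float)  # (m, 3) axis samples
    radii = np.asarray(radii, dtype=float)              # (m,) ring radii
    angles = np.linspace(0.0, 2.0 * np.pi, n_sides, endpoint=False)
    rings = []
    for center, r in zip(axis_points, radii):
        # Each ring lies in the XY plane of its axis sample; a full system
        # would orient rings perpendicular to the local axis tangent.
        ring = np.stack([center[0] + r * np.cos(angles),
                         center[1] + r * np.sin(angles),
                         np.full(n_sides, center[2])], axis=1)
        rings.append(ring)
    return np.concatenate(rings)  # (m * n_sides, 3) mesh vertices

# Example: a straight vertical axis with a tapering radius (a vase-like shape).
axis = [(0.0, 0.0, z) for z in np.linspace(0.0, 1.0, 5)]
verts = sweep_profile(axis, radii=[1.0, 0.8, 0.6, 0.8, 1.0])
print(verts.shape)  # (80, 3)
```

Varying the radius per slice is what lets a single swept primitive capture tapered shapes like bottles or lamp bases, which is why the method handles so many man-made objects with so little user input.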
Some of the objects modeled in the video are considerably complex and impressive. Once an object has been extracted, however, you can see its color and texture shift slightly from their original state. Though the software is still in its infancy and far from perfect, its potential is limitless, and it is sure to be greatly improved upon in the coming months.