No need for deep learning
Current reconstruction methods rely on finding visual similarities between images to build 3D models. However, because the image sets involved are so large, issues such as occlusion and repetition can adversely affect a model’s accuracy.
Traditional 3D modelling techniques rely on identifying key points in one image, matching them in another image and then propagating those matches across a specific area. With HybridFlow, the images are clustered into segments that are perceptually similar, and matching is performed first between clusters and then at the pixel level. For instance, an image segment showing blue sky will be matched with another segment showing the same, just as a cluster showing a densely built-up area will be matched with a cluster showing a similar pattern, based on pixel-level analysis. This makes the model more robust: points are easier to track across images, and the time needed to triangulate those points is reduced, resulting in an accurate reproduction.
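The coarse-to-fine idea described above can be sketched in a few lines. This is not HybridFlow itself; it is a minimal illustration of the first, cluster-level stage, assuming a toy setup where segments are found by k-means on pixel colour and each segment is described only by its mean colour. A real pipeline would use proper perceptual segmentation and richer descriptors, then run dense pixel-level matching only inside the paired segments.

```python
import numpy as np

def segment(img, k=2, iters=10):
    """Cluster an image's pixels into k colour-similar segments (toy k-means).

    Returns per-pixel labels and the k segment descriptors (mean colours).
    Centres are initialised deterministically from evenly spaced pixels.
    """
    h, w, c = img.shape
    pix = img.reshape(-1, c).astype(float)
    centers = pix[np.linspace(0, len(pix) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then recompute the centres.
        dists = np.linalg.norm(pix[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pix[labels == j].mean(axis=0)
    return labels.reshape(h, w), centers

def match_segments(centers_a, centers_b):
    """Coarse stage: pair each segment of image A with its most similar
    segment in image B by descriptor distance. Pixel-level matching would
    then be restricted to these paired segments."""
    dists = np.linalg.norm(centers_a[:, None] - centers_b[None], axis=2)
    return dists.argmin(axis=1)

# Two synthetic 8x8 images: a "sky" (blue) region and a "built-up" (grey)
# region, arranged differently in each image.
BLUE, GREY = [0, 0, 255], [128, 128, 128]
img_a = np.array([[BLUE] * 8] * 4 + [[GREY] * 8] * 4)   # sky on top
img_b = np.array([[GREY] * 8] * 4 + [[BLUE] * 8] * 4)   # sky on the bottom

_, centers_a = segment(img_a, k=2)
_, centers_b = segment(img_b, k=2)
match = match_segments(centers_a, centers_b)  # A's sky pairs with B's sky
```

Because matching happens segment-to-segment before any per-pixel work, the search space for correspondences shrinks dramatically, which is what makes the approach tractable on arbitrarily large image sets.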
“It also eliminates the need for any deep learning technique, which would require a lot of training and resources,” Poullis remarks. “This is a data-driven method that can handle an arbitrarily large image set.”
He adds that the data is saved on disk, not in memory, which optimizes the data pipeline. With a remote computer doing the processing, he notes, an average-sized model of an urban area can be created in less than 30 minutes.