
A faster, more accurate 3D modelling tool recreates a landscape’s digital twin down to the pixel level — ScienceDaily


Concordia researchers have developed a new technique that can help create high-quality, accurate 3D models of large-scale landscapes — essentially, digital replicas of the real world.

While more work is required before the researchers fully achieve that goal, they recently outlined their new automated method in the Nature Portfolio journal Scientific Reports. The framework reconstructs the geometry, structure and appearance of an area using highly detailed images taken by aircraft typically flying higher than 30,000 feet. These large-scale aerial images — usually more than 200 megapixels each — are then processed to produce precise 3D models of cityscapes, landscapes or mixed areas, capturing appearance right down to the colours of individual structures.

The framework, called HybridFlow, was developed by Charalambos Poullis, an associate professor of computer science and software engineering at the Gina Cody School of Engineering and Computer Science, and PhD student Qiao Chen.

“This digital twin can be used in typical applications to navigate and explore different areas, as well as virtual tourism, games, films and so on,” Poullis says. “More importantly, there are very impactful applications that can simulate processes in a secure and digital way. So, it can be used by stakeholders and authorities to simulate ‘what-if’ scenarios in cases of flooding or other natural disasters. This allows us to make informed decisions and evaluate various risk-mitigating factors.”

No need for deep learning

Current reconstruction methods rely on finding visual similarities between images to build 3D models. However, because the images are so large, issues such as occlusion and repetition can adversely affect a model’s accuracy.

Traditional 3D modelling techniques rely on identifying key points in one image, matching them to points in another image and then propagating those matches across a specific area. With HybridFlow, the images are first clustered into perceptually similar segments, and matching is then refined at the pixel level. For instance, an image segment showing blue sky is matched with another segment showing the same, just as a cluster showing a densely built-up area is matched with a cluster showing a similar pattern, based on pixel-level analysis. This makes the model more robust, since points are easier to track across images, and it shortens the time needed to triangulate those points, resulting in an accurate reproduction.
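The article itself contains no code, but the classical keypoint-match-and-triangulate step it contrasts HybridFlow against can be sketched briefly. The Python example below is a generic illustration assuming OpenCV and NumPy, with placeholder image paths and camera matrix K; it does not reproduce HybridFlow's segment-level clustering, which constrains where matches are searched for.

# Illustrative sketch only: classical keypoint matching plus two-view
# triangulation with OpenCV. This is not the HybridFlow code; the image
# paths and camera matrix K are placeholder assumptions.
import cv2
import numpy as np

def match_and_triangulate(path_a, path_b, K):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # Detect key points and descriptors in each aerial image.
    orb = cv2.ORB_create(nfeatures=5000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Match descriptors between the two views and keep the strongest matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:1000]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Estimate the relative camera pose, then triangulate inlier matches
    # into 3D points. (HybridFlow additionally restricts matching to
    # perceptually similar segments, which is not shown here.)
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    pts_4d = cv2.triangulatePoints(P0, P1, pts_a.T, pts_b.T)
    return (pts_4d[:3] / pts_4d[3]).T  # N x 3 point cloud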

“It also eliminates the need for any deep learning technique, which would require a lot of training and resources,” Poullis remarks. “This is a data-driven method that can handle an arbitrarily large image set.”

He adds that the data is saved on disk, not in memory, which optimizes the data pipeline. With a remote computer doing the processing, he notes, an average-sized model of an urban area can be created in less than 30 minutes.
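As a rough illustration of that on-disk, out-of-core pattern (a generic sketch, not the authors' actual pipeline; the file name, array shape and chunk size below are assumptions), a NumPy memory map lets large intermediate arrays be processed one chunk at a time without ever loading them fully into RAM.

# Generic out-of-core sketch: keep a large intermediate array on disk and
# touch only one chunk at a time. Values are placeholders, not taken from
# the paper.
import numpy as np

descriptors = np.memmap("descriptors.dat", dtype=np.float32,
                        mode="w+", shape=(2_000_000, 128))

chunk = 200_000
for start in range(0, descriptors.shape[0], chunk):
    block = descriptors[start:start + chunk]
    # Normalize each descriptor in place; only this chunk is in memory.
    block[:] = block / (np.linalg.norm(block, axis=1, keepdims=True) + 1e-12)

descriptors.flush()  # push the modified chunks back to the file on disk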

Poullis shares that he has already been working with officials in the flood-prone city of Terrebonne, just northeast of Montreal, to model the city and simulate floods so the municipality can plan and evaluate mitigation measures.

“They know they cannot prevent the flooding, but we can provide them with tools to make informed decisions,” he comments. “We allow them to change the environment by introducing barriers such as sandbags, and then we run simulations to see how the floodwater flow is affected.”

This project received support from the Natural Sciences and Engineering Research Council of Canada (NSERC) and a grant from the Department of National Defence.
