3D Semantic Mesh for Augmented Reality
Kong, Sehyun (2020)
Degree Programme in Information Technology, MSc (Tech)
Faculty of Information Technology and Communication Sciences
This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
Date of approval
2020-08-26
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202006266243
Abstract
In augmented reality (AR) and robotics applications, it is important to enhance the perception of users and robots. In many AR applications, 3D models of both indoor and outdoor scenes are already available, but they usually do not represent the environment with semantic labels.
Therefore, in this project we propose a pipeline to construct a labeled 3D mesh of an actual environment, our office. The first stage is acquiring point cloud data of the real office. The second stage is 3D semantic segmentation of the point cloud, for which a pre-trained convolutional neural network is used. After the segmentation process, each point in the office is assigned one specific class. Surface reconstruction from the point cloud is the third stage of the proposed approach. The last stage integrates the result of the semantic segmentation with the reconstructed mesh: the class of each point is interpolated to the vertices of the triangular mesh to generate the annotated 3D mesh. The final result of this work is a mesh of our office in which each class is annotated with a different color.
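The last stage above, transferring per-point classes to mesh vertices, can be sketched as a nearest-neighbour assignment. This is an illustrative sketch only; the function name and the brute-force search are assumptions, not the implementation used in the thesis:

```python
import numpy as np

def transfer_labels(points, point_labels, mesh_vertices):
    """Assign each mesh vertex the class of its nearest labeled point.

    Brute-force nearest neighbour for clarity; a k-d tree would
    scale better for large point clouds.
    """
    # Pairwise distances: (num_vertices, num_points)
    dists = np.linalg.norm(mesh_vertices[:, None, :] - points[None, :, :], axis=2)
    return point_labels[np.argmin(dists, axis=1)]

# Toy cloud: 4 labeled points (0 = floor, 1 = wall) and 2 mesh vertices.
points = np.array([[0.0, 0.0, 0.0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
labels = np.array([0, 0, 1, 1])
vertices = np.array([[0.1, 0.1, 0.0], [0.9, 0.9, 0.0]])
print(transfer_labels(points, labels, vertices))  # -> [0 1]
```

Each vertex simply inherits the class of the closest segmented point, after which the mesh can be colored per class.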
This 3D semantic mesh could later be deployed to smartphones or other portable devices for further development, allowing users to explore the semantically labeled space with their mobile devices.