Synthesizing 3D VR Sketch Using Generative Adversarial Neural Network

Wanwan Li, University of South Florida, United States, wanwan@usf.edu

Virtual Reality (VR) has gained significant attention in recent years as an immersive technology that enables users to interact with computer-generated environments. One essential component of VR experiences is the availability of 3D content, which can be time-consuming and labor-intensive to create. In this paper, we propose a novel approach for synthesizing 3D VR sketches using Generative Adversarial Neural Networks (GANs). By leveraging the power of GANs, our method allows for the automatic generation of high-quality creative 3D VR sketches, thereby reducing the burden on content creators and exploring the possibilities of VR content creation using Artificial Intelligence (AI) technologies.

CCS Concepts: • Computing methodologies → Graphics systems and interfaces; • Computing methodologies → Virtual reality;

Keywords: VR 3D Sketch, Generative Adversarial Neural Networks (GANs)

ACM Reference Format:
Wanwan Li. 2023. Synthesizing 3D VR Sketch Using Generative Adversarial Neural Network. In 2023 7th International Conference on Big Data and Internet of Things (BDIOT 2023), August 11--13, 2023, Beijing, China. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3617695.3617723

Figure 1: The flowchart of our proposed framework for synthesizing 3D VR sketches using Generative Adversarial Neural Networks (GANs). After connecting with the Unity Steam VR plugin, a user wearing an Oculus Quest 2 can draw 3D VR sketches. These 3D sketches are converted into 2D images using depth color shader rendering, and the resulting 2D depth sketches are used as real data to train a RaLSGAN. After training, fake 2D depth sketch images can be generated by randomly sampling the latent vector space. Finally, fake 3D VR sketches are reconstructed from the fake 2D depth sketch images.
Figure 2: Overview of our approach.

1 INTRODUCTION

Virtual Reality (VR) has revolutionized the way we interact with digital content, offering immersive experiences that enhance user engagement and understanding. One essential aspect of VR is 3D content generation, which enables users to create and manipulate objects within virtual environments through natural and intuitive gestures. While there have been significant advancements in 3D sketching interfaces, generating realistic and detailed 3D VR sketches remains a challenging task. This paper proposes a novel approach to synthesize 3D VR sketches using Generative Adversarial Neural Networks (GANs), bridging the gap between freehand 3D VR sketching and automated 3D VR sketch generation. Previous research on sketch modeling and generation has primarily focused on engineering sketches [33], 3D shape retrieval [21], semantic scene completion [6], sketched scene composition [4], 3D wireframe shapes [29], hand-based physical proxies [15], 3D sketch-based wire art design [23], 3D sketching with air scaffolding [18], sketch generation from 3D scanned content [13], seamless 2D and 3D sketching [32], 3D sketch to CAD product models [14], multi-view deep volumetric prediction [8], 3D sketching in conceptual design [25], practical sketch-based 3D shape generation [40], 3D sketching with profile curves [20], sketching 3D scenes [39], handheld MR 3D sketching [35], 3D VR-sketch to 3D shape retrieval [26], multi-view 3D sketching [9], combining 2D and 3D sketching for 3D design [2], interactive surface creation from 3D sketches [5], 3D computational sketch synthesis [3], 3D sketching systems [12], AR in-situ 3D sketching [36], 3D sketch fitting in VR [10], controlled direct 3D sketching [28], multi-view 3D sketching in mobile AR [1], 3D sketching for industrial design [38], sketching interactions between VR and AR [17], 4D architectural design sketches [31], fine-grained VR sketching [27], 3D curve networks in VR [19], from VR primitives to manifold objects [41], 3D sketch-based modeling with in-situ references [34], curve and surface sketching in VR [37], etc. While these techniques have shown promising results, they have not explored the possibility of generating 3D VR sketches directly using Artificial Intelligence (AI) technologies. Meanwhile, as powerful AI models, Generative Adversarial Neural Networks (GANs) [7, 11, 22, 30] have shown great potential in various image synthesis tasks, including style transfer and super-resolution. This paper extends a variation of GANs called RaLSGAN [16, 24] to the domain of 3D VR sketch generation, aiming to produce 3D VR sketches automatically without demanding manual effort from digital artists.

2 OVERVIEW

Figure 2 shows the overview of our approach for synthesizing 3D VR sketches using the Relativistic Average Least Squares Generative Adversarial Network (RaLSGAN) [16]. The data collection process begins with the user wearing an Oculus Quest 2 headset, which is integrated with Unity through the Steam VR plugin and immerses the user in the virtual environment as shown in Figure 2 (a). The user creates 3D VR sketches as shown in Figure 2 (b), which are then transformed into 2D images using depth color shader rendering techniques as shown in Figure 2 (c). These 2D depth sketches serve as the dataset for a RaLSGAN as shown in Figure 2 (d), where they are treated as real data for training. The training approach alternates between a generator and a discriminator, which are trained in an adversarial manner. Given a latent vector as input, the generator produces a 2D depth sketch image, while the discriminator evaluates the realism of the generated depth sketch images relative to real depth sketch images. The generator and discriminator are iteratively trained to optimize their respective objectives, ultimately yielding a generator capable of producing highly realistic depth sketch images. Once the RaLSGAN is trained, the system can generate random 2D depth sketch images by sampling the latent vector space as shown in Figure 2 (e). Finally, these random 2D depth sketch images are used to reconstruct fake 3D VR sketches as shown in Figure 2 (f), resulting in synthesized output that replicates the style of the user's original 3D sketches created in the virtual environment. The combination of GANs, depth rendering, and a VR interface offers a novel approach to creating 3D VR sketches.
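To make this pipeline concrete, the following is a minimal PyTorch sketch of a DCGAN-style generator/discriminator pair consistent with the parameter settings reported in Section 3 (a 128-D latent vector, five convolutional layers each, and 512×512×3 depth sketch images); the framework choice and the channel widths are our assumptions, not specified by the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 128-D latent vector to a 512x512 RGB depth sketch image."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 16 * 16)  # seed a 16x16 feature map
        channels = [256, 128, 64, 32, 16]
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            blocks += [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.ReLU(inplace=True)]
        # fifth upsampling layer reaches 512x512 and maps to 3 color channels
        blocks += [nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*blocks)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(self.fc(z).view(-1, 256, 16, 16))

class Discriminator(nn.Module):
    """Scores how realistic a 512x512 depth sketch image looks (raw logits)."""
    def __init__(self):
        super().__init__()
        channels = [3, 16, 32, 64, 128, 256]
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):  # five conv layers
            blocks += [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.net = nn.Sequential(*blocks)
        self.fc = nn.Linear(256 * 16 * 16, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.net(x).flatten(1))
```

Sampling `Generator()(torch.randn(16, 128))` then yields a batch of fake depth sketch images for the discriminator to score.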

Figure 3: RaLSGAN Training Process.

3 TECHNICAL APPROACH

In our proposed solution, 3D VR sketches are represented as arrays of 3D stroke points. Mathematically, a VR sketch is $S=\lbrace P_i \mid i=1, 2, \ldots \rbrace$, where $P_i=\lbrace \mathbf{p}_j \mid j=1, 2, \ldots \rbrace$. Then, for each sketch, we apply a normalization operation to convert sketch S to $S^{\prime }=\lbrace P_i^{\prime } \mid i=1, 2, \ldots \rbrace$, where $P_i^{\prime }=\lbrace \mathbf {p_j^{\prime }} \mid j=1, 2, \ldots \rbrace$ and the normalized 3D stroke points $\mathbf {p_j^{\prime }}$ are defined by the following equation:

\begin{equation} \mathbf {p_j^{\prime }}=\frac{k}{d}\left(\mathbf {p_j}-\frac{1}{|S|}\sum _{i=1}^{|S|}\sum _{j=1}^{|P_i|}\frac{\mathbf {p_j}}{|P_i|}\right) \end{equation}
(1)
where k = 12 is the sketch scale and the sketch depth is $d = \max_j(\mathbf{p}_{j,z}) - \min_j(\mathbf{p}_{j,z})$. Then, the normalized 3D stroke points S′ are transformed into 2D images I using a depth color shader technique, where the image is rendered with the following RGB color equation: $I(\mathbf{p}^{\prime}_{j,x}, \mathbf{p}^{\prime}_{j,y})=(\xi (5), \xi (3), \xi (1))$, where
\begin{equation} \xi (n)=1-\max (0,\min (\zeta (n),4-\zeta (n),1)), \end{equation}
(2)
where $\zeta (n)=\left(n+3(\mathbf{p}^{\prime}_{j,z}+R)/R\right) \bmod 6$ and the depth range R = 6. After rendering the VR sketches into 2D depth sketch images, these 2D depth images serve as training data. Our approach utilizes a data-driven methodology to create detailed depth sketches in real time. To accomplish this, we employ the Relativistic Average Least Squares GAN (RaLSGAN) introduced by Jolicoeur-Martineau [16], which extends the Standard GAN (SGAN) proposed by Goodfellow et al. [11] by incorporating a relativistic discriminator that estimates the probability of given fake data being more realistic than randomly sampled real data. In the RaLSGAN, the Mean Square Error (MSE) loss functions for the discriminator D and generator G are defined as:
\begin{equation} L_D=\left|\left|D(G(z))-\left(\overline{D}(I)-1\right)\right|\right|^2+\left|\left|D(I)-\left(\overline{D}(G(z))+1\right)\right|\right|^2 \end{equation}
(3)
\begin{equation} L_G=\left|\left|D(G(z))-\left(\overline{D}(I)+1\right)\right|\right|^2+\left|\left|D(I)-\left(\overline{D}(G(z))-1\right)\right|\right|^2 \end{equation}
(4)
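The two objectives translate directly into code. Below is a minimal PyTorch sketch (the framework is our assumption), where the batch means of the raw discriminator scores stand in for $\overline{D}(I)$ and $\overline{D}(G(z))$, and the squared norms are averaged over the batch:

```python
import torch

def ralsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Discriminator loss of Eq. (3): real scores are pushed one unit above
    the mean fake score, fake scores one unit below the mean real score."""
    return ((d_fake - (d_real.mean() - 1)) ** 2).mean() + \
           ((d_real - (d_fake.mean() + 1)) ** 2).mean()

def ralsgan_g_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Generator loss of Eq. (4): the same objective with the targets swapped."""
    return ((d_fake - (d_real.mean() + 1)) ** 2).mean() + \
           ((d_real - (d_fake.mean() - 1)) ** 2).mean()
```

In a discriminator update, `d_fake` should be computed from a detached `G(z)` so gradients do not flow into the generator; the generator update recomputes both scores and steps only the generator's parameters.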

We convert 195 users' hand-drawn VR sketches into 195 depth sketches and feed them to the RaLSGAN as real data, with the following parameter settings: a latent vector z of length 128; five convolutional layers each for the discriminator D (learning rate 0.0001, Adam optimizer) and the generator G (learning rate 0.0025, Adam optimizer); a batch size of 16; and 40K iterations. Figure 3 shows the training process, illustrating how high-resolution sketch depth images (512×512×3) are generated by the RaLSGAN after different numbers of iterations. As shown in Figure 3 (a), the RaLSGAN trained for fewer than 1K iterations generates random noise and blurred sketches. Even after 6K iterations, as shown in Figure 3 (d), the generated sketch depth images are still of low quality. However, as we can see from Figure 3 (e) to Figure 3 (l), from 8K to 40K iterations, the generated sketch depth images look increasingly realistic compared to the ground-truth data of the users' input 3D VR sketches. Once the RaLSGAN is well trained, generator G can produce realistic fake 2D depth sketches by sampling the latent vector space. Finally, these random 2D depth sketches are used to reconstruct fake 3D VR sketch output that replicates the users' original 3D sketches created in the virtual environment. Mathematically, the synthesized 3D stroke points $\mathbf {p_j^{\prime \prime }}$ are defined by the following equations:

\begin{equation} \mathbf {p_j^{\prime \prime }}=(x,y,\eta (I^{\prime }(x,y))), \quad I^{\prime }=G(z), \quad z \sim \mathcal {N}(0,1)^{128} \end{equation}
(5)
where, given I′(x, y) = (R, G, B), the depth function η(I′(x, y)) = η(R, G, B) is defined as:
\begin{equation} \eta (R, G, B)=\frac{\kappa }{6}\cdot {{\left\lbrace \begin{array}{@{}l@{\quad }l@{}}0,&{\text{if }}C=0\\ \left({\frac{G-B}{C}}\mod {6}\right),&{\text{if }}V=R\\ \left({\frac{B-R}{C}}+2\right),&{\text{if }}V=G\\ \left({\frac{R-G}{C}}+4\right),&{\text{if }}V=B\end{array}\right.}} \end{equation}
(6)
where κ = 0.8, V = max (R, G, B), and C = V − min (R, G, B).
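To make the color coding concrete, here is a small NumPy sketch (function names are illustrative, not from the paper) of the Eq. (1) normalization, the Eq. (2) depth-to-color encoding, and the Eq. (6) depth recovery; each stroke is assumed to be an (n, 3) array of points:

```python
import numpy as np

K_SCALE = 12.0   # sketch scale k in Eq. (1)
R_DEPTH = 6.0    # depth range R in Eq. (2)
KAPPA = 0.8      # depth scale kappa in Eq. (6)

def normalize_sketch(strokes):
    """Eq. (1): center a sketch at the mean of its stroke centroids,
    then scale by k/d, where d is the sketch's depth extent."""
    centroid = np.mean([stroke.mean(axis=0) for stroke in strokes], axis=0)
    z = np.concatenate([stroke[:, 2] for stroke in strokes])
    d = z.max() - z.min()                       # sketch depth d
    return [K_SCALE / d * (stroke - centroid) for stroke in strokes]

def depth_to_color(pz):
    """Eq. (2): map a normalized stroke depth p'_z to an RGB rainbow color."""
    def xi(n):
        zeta = (n + 3.0 * (pz + R_DEPTH) / R_DEPTH) % 6.0
        return 1.0 - max(0.0, min(zeta, 4.0 - zeta, 1.0))
    return (xi(5), xi(3), xi(1))

def color_to_depth(r, g, b):
    """Eq. (6): recover a depth value from an RGB color via its hue."""
    v = max(r, g, b)
    c = v - min(r, g, b)
    if c == 0.0:
        h = 0.0
    elif v == r:
        h = ((g - b) / c) % 6.0
    elif v == g:
        h = (b - r) / c + 2.0
    else:  # v == b
        h = (r - g) / c + 4.0
    return KAPPA / 6.0 * h
```

Per Eq. (5), each pixel of a generated image $I^{\prime}=G(z)$ is then lifted to the 3D point $(x, y, \eta(I^{\prime}(x,y)))$.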
Figure 4: VR Sketches Real Data Samples (3D View).
Figure 5: VR Sketches Real Data Samples (Depth Color).

4 EXPERIMENT RESULT

Figure 6: VR Sketches Fake Data Samples (Depth Color).

To validate the efficacy of our proposed technical approach, a group of experiments was conducted on synthesizing 3D VR sketches using a Generative Adversarial Neural Network (GAN). We implemented our approach using Unity 3D (2019 version) and generated the experimental results on hardware comprising an Intel Core i5 CPU, 32GB of DDR4 RAM, and an NVIDIA GeForce GTX 1650 graphics card with 4GB of GDDR6 memory.

Figure 4 shows part of the data collection: 30 unique 3D VR sketch samples created by users. The whole collection contains 195 VR sketches. These VR sketches are three-dimensional representations drawn by individuals while immersed in a VR environment. To aid their creative process, users are provided with 3D chair models as references. As they explore the virtual space, users can examine these chair models from various angles and perspectives, gaining a comprehensive understanding of the chairs' design and structure. Inspired by the reference models, users employ our interactive drawing tools within the VR environment to sketch their interpretations of these chairs. Each user brings their own artistic flair and imagination to the sketches, resulting in a diverse collection of VR sketches.

Figure 5 presents a set of images representing the training data for the RaLSGAN. Each subfigure contains a VR sketch's corresponding depth color image. The VR sketches from Figure 4, originally created by users while referencing 3D chair models in a VR environment, exhibit diverse artistic styles and interpretations; they have been converted into depth color representations using our proposed shading technique, yielding high-resolution and faithful depictions of the 3D VR sketches. The depth color images of all 195 VR sketches serve as the training data for the RaLSGAN for VR sketch synthesis purposes. By using these depth color images during training, the RaLSGAN learns to synthesize realistic and visually appealing depth color images that resemble the original sketches, as shown by the 30 examples in Figure 6.

Figure 7 presents a set of fake VR sketches reconstructed from the 30 depth color images synthesized by the RaLSGAN shown in Figure 6. In these experiments, we synthesized 200 fake VR sketches using the RaLSGAN; Figure 6 and Figure 7 present 30 samples selected from these synthesized results. Figure 7 showcases a 5×6 grid of these 30 synthetic sketches, each representing a unique interpretation of a 3D VR sketch. These fake VR sketches capture the realism and intricacies observed in the original depth color images. They vary in style, shape, and level of detail, offering a diverse range of visually captivating VR drawing representations. Figure 7 demonstrates the successful synthesis capabilities of the RaLSGAN model within our proposed approach, transforming depth color images into creative fake VR sketches for further exploration, analysis, or applications. As these results show, the RaLSGAN exhibits satisfying stability and generates high-quality fake VR sketch data samples.

Figure 7: VR Sketches Fake Data Samples (3D View).

5 CONCLUSION

With the growing popularity of Virtual Reality (VR), there is increasing demand among VR users for drawing realistic and immersive 3D VR sketches. This paper addresses this need by proposing a novel approach utilizing Generative Adversarial Neural Networks (GANs) for synthesizing high-quality 3D VR sketches. We present the results of our experiments, showcasing the synthesized 3D VR sketches generated by our proposed GAN model, which delivers satisfying results. In future work, we will explore the potential applications of our 3D VR sketch synthesis approach in areas such as architectural design, virtual prototyping, immersive experiences, and art edutainment. Several research directions could further improve the performance and capabilities of our approach. For example, we can investigate methods to further enhance the realism of synthesized 3D VR sketches, such as improving the level of detail and incorporating more complex shading models. We will also explore techniques to enable interactive VR sketch synthesis, where users can directly manipulate and edit the generated sketches in real time within the VR environment; this will involve developing intuitive and efficient user interfaces, as well as incorporating feedback mechanisms to refine the generated sketches based on user input. We will further investigate the potential of extending the current GAN framework to support cross-domain synthesis of 3D VR sketches, i.e., techniques for generating sketches that depict objects or scenes from different domains, such as furniture, architecture, landscapes, and human figures. Such advancements will expand the applicability of our approach across various VR design and visualization contexts. Finally, we will investigate methods that enable users to customize and personalize the synthesized 3D VR sketches according to their preferences and design requirements, incorporating user-guided techniques or interactive interfaces that allow users to manipulate parameters such as shape, style, and level of detail.

REFERENCES

  • Rawan Alghofaili, Cuong Nguyen, Vojtĕch Krs, Nathan Carr, Radomír Mĕch, and Lap-Fai Yu. 2023. WARPY: Sketching Environment-Aware 3D Curves in Mobile Augmented Reality. In 2023 IEEE Conference Virtual Reality and 3D User Interfaces (VR). IEEE, 367–377.
  • Rahul Arora, Rubaiat Habib Kazi, Tovi Grossman, George Fitzmaurice, and Karan Singh. 2018. Symbiosissketch: Combining 2d & 3d sketching for designing detailed 3d objects in situ. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–15.
  • Seonghoon Ban and Kyung Hoon Hyun. 2020. 3D computational sketch synthesis framework: Assisting design exploration through generating variations of user input sketch and interactive 3D model reconstruction. Computer-Aided Design 120 (2020), 102789.
  • Oriel Bergig, Nate Hagbi, Jihad El-Sana, and Mark Billinghurst. 2009. In-place 3D sketching for authoring and augmenting mechanical systems. In 2009 8th IEEE International Symposium on Mixed and Augmented Reality. IEEE, 87–94.
  • Sukanya Bhattacharjee and Parag Chaudhuri. 2022. Deep Interactive Surface Creation from 3D Sketch Strokes. In Thirty-First International Joint Conference on Artificial Intelligence (IJCAI-22), Vienna, Austria. 4908–4914.
  • Xiaokang Chen, Kwan-Yee Lin, Chen Qian, Gang Zeng, and Hongsheng Li. 2020. 3d sketch-aware semantic scene completion via semi-supervised structure prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4193–4202.
  • Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, and Anil A. Bharath. 2018. Generative Adversarial Networks: An Overview. IEEE Signal Processing Magazine 35, 1 (2018), 53–65. https://doi.org/10.1109/MSP.2017.2765202
  • Johanna Delanoy, Mathieu Aubry, Phillip Isola, Alexei A Efros, and Adrien Bousseau. 2018. 3d sketching using multi-view deep volumetric prediction. Proceedings of the ACM on Computer Graphics and Interactive Techniques 1, 1 (2018), 1–22.
  • Johanna Delanoy, David Coeurjolly, Jacques-Olivier Lachaud, and Adrien Bousseau. 2019. Combining voxel and normal predictions for multi-view 3D sketching. Computers & Graphics 82 (2019), 65–72.
  • Michele Fiorentino, Giuseppe Monno, Pietro A Renzulli, and Antonio E Uva. 2003. 3D sketch stroke segmentation and fitting in virtual reality. In International conference on the Computer Graphics and Vision, Vol. 5. Citeseer.
  • Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. Advances in neural information processing systems 27 (2014).
  • Cindy Grimm and Pushkar Joshi. 2012. Just DrawIt: a 3D sketching system. In Proceedings of the International Symposium on Sketch-Based Interfaces and Modeling. 121–130.
  • Christian Hörr. 2009. Considerations on technical sketch generation from 3D scanned cultural heritage. (2009).
  • John F Hughes and Joaquim A Jorge. 2004. From raw 3D-Sketches to exact CAD product models–Concept for an assistant-system. In Eurographics Workshop on Sketch-Based Interfaces and Modeling 2004. Citeseer, 137.
  • Ying Jiang, Congyi Zhang, Hongbo Fu, Alberto Cannavò, Fabrizio Lamberti, Henry YK Lau, and Wenping Wang. 2021. Handpainter-3d sketching in vr with hand-based physical proxy. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–13.
  • Alexia Jolicoeur-Martineau. 2018. The relativistic discriminator: a key element missing from standard GAN. arXiv preprint arXiv:1807.00734 (2018).
  • Hiroki Kaimoto, Kyzyl Monteiro, Mehrad Faridan, Jiatong Li, Samin Farajian, Yasuaki Kakehi, Ken Nakagaki, and Ryo Suzuki. 2022. Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. 1–12.
  • Yongkwan Kim, Sang-Gyun An, Joon Hyub Lee, and Seok-Hyung Bae. 2018. Agile 3D sketching with air scaffolding. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–12.
  • Sang-Hyun Lee, Taegyu Jin, Joon Hyub Lee, and Seok-Hyung Bae. 2022. WireSketch: Bimanual Interactions for 3D Curve Networks in VR. In Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology. 1–3.
  • Florian Levet, Xavier Granier, and Christophe Schlick. 2006. 3D sketching with profile curves. In Smart Graphics: 6th International Symposium, SG 2006, Vancouver, Canada, July 23-25, 2006. Proceedings 6. Springer, 114–125.
  • Bo Li, Yijuan Lu, Fuqing Duan, Shuilong Dong, Yachun Fan, Lu Qian, Hamid Laga, Haisheng Li, Yuxiang Li, P Lui, et al. 2016. SHREC’16 Track: 3D Sketch-Based 3D Shape Retrieval. In Eurographics Workshop on 3D Object Retrieval (3DOR) 2016.
  • Wanwan Li. 2021. Image Synthesis and Editing with Generative Adversarial Networks (GANs): A Review. In 2021 Fifth World Conference on Smart Trends in Systems Security and Sustainability (WorldS4). 65–70. https://doi.org/10.1109/WorldS451998.2021.9514052
  • Wanwan Li. 2021. Pen2VR: A Smart Pen Tool Interface for Wire Art Design in VR. In Smart Tools and Apps for Graphics - Eurographics Italian Chapter Conference, Patrizio Frosini, Daniela Giorgi, Simone Melzi, and Emanuele Rodolà (Eds.). The Eurographics Association. https://doi.org/10.2312/stag.20211482
  • Wanwan Li. 2023. Terrain Synthesis for Treadmill Exergaming in Virtual Reality. In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). 263–269. https://doi.org/10.1109/VRW58643.2023.00064
  • Marcello Lorusso, Marco Rossoni, Marina Carulli, Monica Bordegoni, Giorgio Colombo, et al. 2021. A virtual reality application for 3D sketching in conceptual design. Computer-Aided Design and Applications 19, 2 (2021), 256–268.
  • Ling Luo, Yulia Gryaditskaya, Yongxin Yang, Tao Xiang, and Yi-Zhe Song. 2020. Towards 3d vr-sketch to 3d shape retrieval. In 2020 International Conference on 3D Vision (3DV). IEEE, 81–90.
  • Ling Luo, Yulia Gryaditskaya, Yongxin Yang, Tao Xiang, and Yi-Zhe Song. 2021. Fine-grained vr sketching: Dataset and insights. In 2021 International Conference on 3D Vision (3DV). IEEE, 1003–1013.
  • Prasad S Onkar and Dibakar Sen. 2016. Controlled direct 3d sketching with haptic and motion constraints. International Journal of Computer Aided Engineering and Technology 8, 1-2 (2016), 33–55.
  • Alfred Oti and Nathan Crilly. 2021. Immersive 3D sketching tools: Implications for visual thinking and communication. Computers & Graphics 94 (2021), 111–123.
  • Zhaoqing Pan, Weijie Yu, Xiaokai Yi, Asifullah Khan, Feng Yuan, and Yuhui Zheng. 2019. Recent Progress on Generative Adversarial Networks (GANs): A Survey. IEEE Access 7 (2019), 36322–36333. https://doi.org/10.1109/ACCESS.2019.2905015
  • S Rasoulzadeh, M Wimmer, and I Kovacic. 2023. Strokes2Surface: Recovering Curve Networks From 4D Architectural Design Sketches. arXiv preprint arXiv:2306.07220 (2023).
  • S Tano, T Kodera, T Nakashima, I Kawano, K Nakanishi, G Hamagishi, M Inoue, A Watanabe, T Okamoto, K Kawagoe, et al. 2003. Godzilla: Seamless 2D and 3D sketch environment for reflective and creative design work. In INTERACT’03. Citeseer, 311–318.
  • Karl DD Willis, Pradeep Kumar Jayaraman, Joseph G Lambourne, Hang Chu, and Yewen Pu. 2021. Engineering sketch generation for computer-aided design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2105–2114.
  • Kai Wu and Zhanglin Cheng. 2022. RefAR: 3D Sketch-Based Modeling with In-situ References. In 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 507–511.
  • Min Xin, Ehud Sharlin, and Mario Costa Sousa. 2008. Napkin sketch: handheld mixed reality 3D sketching. In Proceedings of the 2008 ACM symposium on Virtual reality software and technology. 223–226.
  • Brandon Yee, Yuan Ning, and Hod Lipson. 2009. Augmented reality in-situ 3D sketching of physical objects. In Intelligent UI workshop on sketch recognition, Vol. 1. Citeseer.
  • Emilie Yu, Rahul Arora, Tibor Stanko, J Andreas Bærentzen, Karan Singh, and Adrien Bousseau. 2021. Cassie: Curve and surface sketching in immersive environments. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
  • Xue Yu, Stephen DiVerdi, Akshay Sharma, and Yotam Gingold. 2021. Scaffoldsketch: Accurate industrial design drawing in vr. In The 34th Annual ACM Symposium on User Interface Software and Technology. 372–384.
  • Robert C Zeleznik, Kenneth P Herndon, and John F Hughes. 2006. SKETCH: An interface for sketching 3D scenes. In ACM SIGGRAPH 2006 Courses. 9–es.
  • Yue Zhong, Yonggang Qi, Yulia Gryaditskaya, Honggang Zhang, and Yi-Zhe Song. 2020. Towards practical sketch-based 3d shape generation: The role of professional sketches. IEEE Transactions on Circuits and Systems for Video Technology 31, 9 (2020), 3518–3528.
  • Yuzhen Zhu, Xiangjun Tang, Jing Zhang, Ye Pan, Jingjing Shen, and Xiaogang Jin. 2022. 3DBrushVR: From Virtual Reality Primitives to Complex Manifold Objects. In 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 423–428.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

BDIOT 2023, August 11–13, 2023, Beijing, China

© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 979-8-4007-0801-5/23/08…$15.00.
DOI: https://doi.org/10.1145/3617695.3617723