key: cord-133894-wsnyq01s authors: Arora, Rahul; Singh, Karan title: Mid-Air Drawing of Curves on 3D Surfaces in AR/VR date: 2020-09-18 journal: nan DOI: nan sha: doc_id: 133894 cord_uid: wsnyq01s Complex 3D curves can be created by directly drawing mid-air in immersive environments (AR/VR). Drawing mid-air strokes precisely on the surface of a 3D virtual object, however, is difficult, necessitating a projection of the mid-air stroke onto the user-"intended" surface curve. We present the first detailed investigation of the fundamental problem of 3D stroke projection in AR/VR. An assessment of the design requirements of real-time drawing of curves on 3D objects in AR/VR is followed by the definition and classification of multiple techniques for 3D stroke projection. We analyze the advantages and shortcomings of these approaches both theoretically and via practical pilot testing. We then formally evaluate the two most promising techniques, spraycan and mimicry, with 20 users in VR. The study shows a strong qualitative and quantitative user preference for our novel stroke mimicry projection algorithm. We further illustrate the effectiveness and utility of stroke mimicry to draw complex 3D curves on surfaces for various artistic and functional design applications. Fig. 1. Drawing curves mid-air that lie precisely on the surface of a virtual 3D object in AR/VR is difficult (a). Projecting mid-air 3D strokes (black) onto 3D objects is an under-constrained problem with many seemingly reasonable solutions (b). We analyze this fundamental AR/VR problem of 3D stroke projection, define and characterize multiple novel projection techniques (c), and test the two most promising approaches-spraycan shown in blue and mimicry shown in red in (b)-(d)-using a quantitative study with 20 users (d). The user-preferred mimicry technique attempts to mimic the 3D mid-air stroke as closely as possible when projecting onto the virtual object.
We showcase the importance of drawing curves on 3D surfaces, and the utility of our novel mimicry approach, using multiple artistic and functional applications (e) such as interactive shape segmentation (top) and texture painting (bottom). Horse model courtesy Cyberware, Inc. Spiderman bust base model © David Ruiz Olivares (CC BY 4.0). CCS Concepts: • Human-centered computing → Virtual reality; • Computing methodologies → Graphics systems and interfaces; Shape modeling. Drawing is a fundamental tool of human visual expression and communication. Digital sketching with pens, styli, mice, and even fingers in 2D is ubiquitous in visually creative computing applications. Drawing or painting on 3D virtual objects, for example, is critical to interactive 3D modeling, animation, and visualization, where its uses include: object selection, annotation, and segmentation [Heckel et al. 2013; Jung et al. 2002; Meng et al.
2011]; 3D curve and surface design [Igarashi et al. 1999; Nealen et al. 2007]; strokes for 3D model texturing or painterly rendering [Kalnins et al. 2002] (Figure 1e). In 2D, digitally drawn on-screen strokes are WYSIWYG-mapped onto 3D virtual objects, by projecting 2D stroke points through the given view onto the virtual object(s) (Figure 2a). Sketching in immersive environments (AR/VR) has the mystical aura of a magical wand, allowing users to draw directly mid-air in 3D. Mid-air drawing thus has the potential to significantly disrupt interactive 3D graphics, as evidenced by the increasing popularity of AR/VR applications such as Tilt Brush [Google 2020] and Quill [Oculus 2020b]. A fundamental requirement for numerous interactive 3D applications in AR/VR, however, remains the ability to directly draw, or project drawn 3D strokes, precisely on 3D virtual objects. While directly drawing on a physical 3D object is reasonably easy, it is near impossible without haptic constraints to draw directly on a virtual 3D object (Figure 3). Furthermore, unlike 2D drawing, where the WYSIWYG view-based projection of 2D strokes onto 3D objects is unambiguously clear, the user-intended mapping of a mid-air 3D stroke onto a 3D object is less obvious. We thus present the first detailed investigation into plausible user-intended projections of mid-air strokes onto 3D virtual objects. Fig. 2. Stroke projection using a 2D interface is typically WYSIWYG: 2D points along a user stroke (a, inset) are ray-cast through the given view to create corresponding 3D curve points on the surface of 3D scene objects (a). Even small errors or noise in 2D strokes can cause large discontinuities in 3D, especially near ridges and sharp features (b). Complex curves spanning many viewpoints, or with large scale variations in detail, often require the curve to be drawn in segments from multiple user-adjusted viewpoints (c).
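As a concrete illustration of the WYSIWYG raycast mapping described above, the following sketch (not from the paper; the function names and pinhole-camera setup are our own assumptions) casts a 2D screen point through a simple camera and intersects it with a plane standing in for the 3D scene:

```python
import numpy as np

def screen_ray(px, py, width, height, fov_y, eye):
    """Ray through a screen pixel for a pinhole camera at `eye`,
    looking down -z with vertical field of view `fov_y` (radians)."""
    aspect = width / height
    t = np.tan(fov_y / 2)
    x = (2 * px / width - 1) * t * aspect   # NDC x in [-1, 1]
    y = (1 - 2 * py / height) * t           # NDC y, +y up
    d = np.array([x, y, -1.0])
    return eye, d / np.linalg.norm(d)

def raycast_plane(o, d, p0, n):
    """Intersect ray (o, d) with the plane through p0 with normal n;
    returns None for a parallel or backward ray."""
    denom = d @ n
    if abs(denom) < 1e-9:
        return None
    s = ((p0 - o) @ n) / denom
    return o + s * d if s > 0 else None
```

Projecting each 2D stroke point this way, in order, yields the view-centric 3D curve; a mesh raycast would replace the plane intersection.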
Interfaces for 2D/3D curve creation in general use perceptual insights or geometric assumptions like smoothness and planarity to project, neaten, or otherwise process sketched strokes. Some applications wait for user stroke completion before processing it in its entirety, for example when fitting splines [Bae et al. 2008]. Our goal is to establish an application-agnostic, baseline projection approach for mid-air 3D strokes. We thus assume a stroke is processed while being drawn and inked in real-time, i.e., the output curve corresponding to a partially drawn stroke is fixed/inked in real-time, based on partial stroke input [Thiel et al. 2011]. One might further conjecture that all "reasonable" and mostly continuous projections would produce similar results, as long as users are given interactive visual feedback of the projection. This is indeed true for tasks requiring discrete point-on-surface selection, where users can freely re-position the drawing tool until its interactively visible projection corresponds to user intent. Real-time curve drawing, however, is very sensitive to the projection technique, where any mismatch between user intention and algorithmic projection is continuously inked into the projected curve (Figure 1d). 2D Strokes Projected onto 3D Objects: The standard user-intended mapping of a 2D on-screen stroke is a raycast projection through the given monocular viewpoint, onto the visible surface of 3D objects. Raycasting is WYSIWYG (What You See Is What You Get) in that the 3D curve visually matches the 2D stroke from said viewpoint (see Figure 2a).
Ongoing research on mapping 2D strokes to 3D objects assumes this fundamental view-centric projection, focusing instead on specific problems such as creating spatially coherent curves around ridge/valley features (where small 2D error can cause large 3D depth error upon projection, Figure 2b); or drawing complex curves with large scale variation (where multiple viewpoint changes are needed while drawing, Figure 2c). These problems are mitigated by the direct 3D input and viewing flexibility of AR/VR, assuming the mid-air stroke to 3D object projection matches user intent. 3D Strokes Projected onto 3D Objects: Physical analogies motivate existing approaches to defining a user-intended projection from 3D points in a mid-air stroke to 3D points on a virtual object (Figure 4). Graffiti-style painting with a spraycan is arguably the current standard, deployed in commercial immersive paint and sculpt software such as Oculus Medium [2020a] and Gravity Sketch [2020]. A closest-point projection approximates drawing with the tool on the 3D object, without actual physical contact (used by the "guides" tool in Tilt Brush [2020]). Fig. 3. Mid-air drawing precisely on a 3D virtual object is difficult (faint regions of strokes are above or below the surface), regardless of drawing quick smooth strokes (blue) or slow detailed strokes (purple). Deliberately slow drawing is further detrimental to stroke aesthetic (right). Like view-centric 2D stroke projection, these approaches are context-free: processing each mid-air point independently. The AR/VR drawing environment, comprising six-degrees-of-freedom controller input and unconstrained binocular viewing, is however significantly richer than 2D sketching. The user-intended projection of a mid-air stroke (§ 3) is, as a result, complex, influenced by the ever-changing 3D relationship between the view, drawing controller, and virtual object.
We therefore argue the need for historical context (i.e., the partially drawn stroke and its projection) in determining the projection of a given stroke point. We balance the use of this historical context with the overarching goal of a general-purpose projection that makes little or no assumption on the nature of the user stroke or its projection. We thus explore anchored projection techniques, which minimally use the most recently projected stroke point as context for projecting the current stroke point (§ 4). We evaluate various anchored projections, both theoretically and practically by pilot testing. Our most promising and novel approach, anchored-smooth-closest-point (also called mimicry), captures the natural tendency of a user stroke to mimic the shape of the desired projected curve. A formal user study (§ 5) shows mimicry to perform significantly better than spraycan (the current baseline) in producing curves that match user intent (§ 6). This paper thus contributes, to the best of our knowledge, the first principled investigation of real-time inked techniques to project 3D mid-air strokes drawn in AR/VR onto 3D virtual objects, and a novel stroke projection benchmark for AR/VR: mimicry. Overview. Following a review of related work (§ 2), we analyze the pros and cons of context-free projection (§ 3), laying the foundation for our novel anchored projection, mimicry (§ 4). We formally compare mimicry against the current baseline spraycan (§ 5). The study results and discussion (§ 6) are followed by applications showcasing the utility of mimicry (§ 7). We conclude with limitations and directions for future work (§ 8). Our work is related to research on drawing and sculpting in immersive realities, interfaces for drawing curves on, near, and around surfaces, and sketch-based modelling tools. Immersive creation has a long history in computer graphics.
Immersive 3D sketching was pioneered by the HoloSketch system [Deering 1995], which used a 6-DoF wand as the input device for creating polyline sketches, 3D tubes, and primitives. In a similar vein, various subsequent systems have explored the creation of freeform 3D curves and swept surfaces [Google 2020; Keefe et al. 2001; Schkolne et al. 2001]. While directly turning 3D input into creative output is acceptable for ideation, the inherent imprecision of 3D sketching is quickly apparent when more structured creation is desired. The perceptual and ergonomic challenges in precise control of 3D input are well known [Arora et al. 2017; Keefe et al. 2007; Machuca et al. 2018, 2019; Wiese et al. 2010], resulting in various methods for correcting 3D input. Input 3D curves have been algorithmically regularized to snap onto existing geometry, as with the FreeDrawer [2001] system, or constrained physically to 2D input with additional techniques for "lifting" these curves into 3D [Arora et al. 2018; Jackson and Keefe 2016; Kwan and Fu 2019; Paczkowski et al. 2011]. Haptic rendering devices [Kamuro et al. 2011; Keefe et al. 2007] and tools utilizing passive physical feedback [Grossman et al. 2002] are an alternate approach to tackling the imprecision of 3D input. We are motivated by similar considerations. Arora et al. [2017] demonstrated the difficulty of creating curves that lie exactly on virtual surfaces in VR, even when the virtual surface is a plane. This observation directly motivates our exploration of techniques for projecting 3D strokes onto surfaces, instead of coercing users to awkwardly draw exactly on a virtual surface. Curve creation and editing on or near the surface of 3D virtual objects is fundamental for a variety of artistic and functional shape modeling tasks. Functionally, curves on 3D surfaces are used to model or annotate structural features [Gal et al. 2009; Stanculescu et al.
2013], define trims and holes [Schmidt and Singh 2010], and provide handles for shape deformation [Kara and Shimada 2007; Nealen et al. 2007; Singh and Fiume 1998], registration [Gehre et al. 2018], and remeshing [Krishnamurthy and Levoy 1996; Takayama et al. 2013]. Artistically, curves on surfaces are used in painterly rendering [Gooch and Gooch 2001], decal creation [Schmidt et al. 2006], texture painting [Adobe 2020], and even texture synthesis [Fisher et al. 2007]. Curve-on-surface creation in this body of research typically uses the established view-centric WYSIWYG projection of on-screen sketched 2D strokes. While the sketch viewpoint in these interfaces is interactively set by the user, there has been some effort in automatic camera control for drawing [Ortega and Vincent 2014], auto-rotation of the sketching view for 3D planar curves, and user assistance in selecting the most sketchable viewpoints [Bae et al. 2008]. Immersive 3D drawing enables direct, viewpoint-independent 3D curve sketching, and is thus an appealing alternative to these 2D interfaces. Our work is also related to drawing curves around surfaces. Such techniques are important for a variety of applications: modeling string and wire that wrap around objects [Coleman and Singh 2006]; curves that loosely conform to virtual objects or define collision-free paths around objects [Krs et al. 2017]; curve patterns for clothing design on a 3D mannequin model [Turquin et al. 2007]; curves for layered modeling of shells and armour [De Paoli and Singh 2015]; and curves for the design and grooming of hair and fur [Fu et al. 2007; Schmid et al. 2011; Xing et al. 2019]. Some approaches, such as SecondSkin [2015] and Skippy [2017], use insights into the spatial relationship between a 2D stroke and the 3D object to infer a 3D curve that lies on and around the surface of the object. Other techniques, like Cords [2006] or hair and clothing design [Xing et al.
2019], are closer to our work, in that they drape 3D curve input on and around 3D objects using geometric collisions or physical simulation. In contrast, this paper is application-agnostic, and remains focused on the general problem of projecting a drawn 3D stroke to a real-time inked curve on the surface of a virtual 3D object. While we do not address curve creation with specific geometric relationships to the object surface (like distance-offset curves), our techniques can be extended to incorporate geometry-specific terms (§ 8). Sketch-based 3D modeling is a rich ongoing area of research (see the survey by Olsen et al. [2009]). Typically, these systems interpret 2D sketch inputs for various shape modeling tasks. One could categorize these modeling approaches as single-view (akin to traditional pen on paper) [Andre and Saito 2011; Chen et al. 2013; Schmidt et al. 2009; Xu et al. 2014] or multi-view (akin to 3D modeling with frequent view manipulation) [Bae et al. 2008; Fan et al. 2004, 2013; Igarashi et al. 1999; Nealen et al. 2007]. Single-view techniques use perceptual insights and geometric properties of the 2D sketch to infer its depth in 3D, while multi-view techniques explicitly use view manipulation to specify 3D curve attributes from different views. While our work utilizes mid-air 3D stroke input, the ambiguity of projection onto surfaces connects it to the interpretative algorithms designed for sketch-based 3D modeling. We aim to take advantage of the immersive interaction space by allowing view manipulation as and when desired, independent of geometry creation. We first formally state the problem of projecting a mid-air 3D stroke onto a 3D virtual object. Let M = (V, E, F) be a 3D object, represented as a manifold triangle mesh embedded in R^3. A user draws a piece-wise linear mid-air stroke by moving a 6-DoF controller or drawing tool in AR/VR. The 3D stroke P ⊂ R^3 is a sequence of n points (p_i)_{i=0}^{n−1}, connected by line segments.
Corresponding to each point p_i ∈ R^3 is a system state S_i = (h_i, ĥ_i, c_i, ĉ_i), where h_i, c_i ∈ R^3 are the positions of the headset and the controller, respectively, and ĥ_i, ĉ_i ∈ Sp(1) are their respective orientations, represented as unit quaternions. Also, without loss of generality, assume c_i = p_i, i.e., the controller positions describe the stroke points p_i. We want to define a projection π, which transforms the sequence of points (p_i)_{i=0}^{n−1} to a corresponding sequence of points (q_i)_{i=0}^{n−1} on the 3D virtual object, i.e., q_i ∈ M. Consecutive points in this sequence are connected by geodesics on M; together they describe the projected curve Q ⊂ M. The aim of a successful projection method, of course, is to match the undisclosed user-intended curve. The projection is also designed for real-time inking of curves: points p_i are processed upon input and projected in real-time (under 100ms) to q_i using the current system state S_i, and optionally, prior system states (S_j)_{j=0}^{i−1}, stroke points (p_j)_{j=0}^{i−1}, and their projections (q_j)_{j=0}^{i−1}. Stroke dynamics, captured from the controller's inertial sensors or as finite differences of stroke position, have been effective in interactive sketch neatening [Arora et al. 2018; Thiel et al. 2011]. We do not, however, explicitly model stroke dynamics in our proposed projections, since early pilot testing did not suggest a relationship between stroke velocity/acceleration and intended stroke projection. Context-free techniques project points independent of each other, simply based on the spatial relationships between the controller, HMD, and 3D object (Figure 4). We can further categorize techniques as raycast or proximity based. 3.1.1 Raycast Projections. View-centric projection in 2D interfaces projects points from the screen along a ray from the eye through the screen point, to where the ray first intersects the 3D object. In an immersive setting, raycast approaches similarly use a ray emanating from the 3D stroke point to intersect 3D objects.
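The ray definitions that follow (Occlude, Spraycan, and Head-centric) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the quaternion convention (w, x, y, z) and the local z "forward" axis are assumptions that depend on the runtime's coordinate conventions.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, xyz = q[0], np.asarray(q[1:])
    t = 2.0 * np.cross(xyz, v)
    return np.asarray(v) + w * t + np.cross(xyz, t)

def occlude_ray(head_pos, ctrl_pos):
    """Occlude: ray from the eye through the controller origin (the
    stroke point); q_i is the mesh intersection closest to p_i."""
    d = ctrl_pos - head_pos
    return head_pos, d / np.linalg.norm(d)

def spraycan_ray(ctrl_pos, ctrl_quat):
    """Spraycan: nozzle along the controller's local z-axis."""
    return ctrl_pos, quat_rotate(ctrl_quat, np.array([0.0, 0.0, 1.0]))

def headcentric_ray(head_pos, head_quat):
    """Head-centric: ray along the HMD's view direction."""
    return head_pos, quat_rotate(head_quat, np.array([0.0, 0.0, 1.0]))
```

Each returned (origin, direction) pair would then be intersected with the mesh M to obtain q_i, with q_i left undefined on a miss.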
This ray (o, d), with origin o and direction d, can be defined in a number of ways. Similar to pointing behavior, Occlude defines this ray from the eye through the controller origin (also the stroke point, Figure 4a). If the ray intersects M, then the closest intersection to p_i defines q_i. In case of no intersection, p_i is ignored in defining the projected curve, i.e., q_i is marked undefined and the projected curve connects q_{i−1} to q_{i+1} (or the proximal index points on either side of i for which projections are defined). The Spraycan approach treats the controller like a spraycan, defining the ray like a nozzle direction in the local space of the controller (Figure 4b). For example, the ray could be defined as (c_i, f_i), where the nozzle f_i = ĉ_i · [0, 0, 1]^T is the controller's local z-axis (or forward direction). Alternately, Head-centric projection can define the ray using the HMD's view direction as (h_i, ĥ_i · [0, 0, 1]^T) (Figure 4c). Pros and Cons: The strengths of raycasting are: a predictable visual/proprioceptive sense of ray direction; a spatially continuous mapping between user input and projection rays; and AR/VR scenarios where it is difficult or undesirable to reach and draw close to the virtual object. The biggest limitation of raycast projection stems from the controller/HMD-based ray direction being completely agnostic of the shape or location of the 3D object. Projected curves can consequently be very different in shape and size from drawn strokes, and ill-defined for stroke points with no ray-object intersection. 3.1.2 Proximity-Based Projections. In 2D interfaces, the on-screen 2D strokes are typically distant to the viewed 3D scene, necessitating some form of raycast projection onto the visible surface of 3D objects. In AR/VR, however, users are able to reach out in 3D and directly draw the desired curve on the 3D object.
While precise mid-air drawing on a virtual surface is very difficult in practice (Figure 3), projection methods based on proximity between the mid-air stroke and the 3D object are certainly worth investigating. The simplest proximity-based projection technique, Snap, projects a stroke point p_i to its closest point in M (Figure 4d): π_snap(p_i) = argmin_{q ∈ M} d(p_i, q), (1) where d(·, ·) is the Euclidean distance between two points. Unfortunately, for triangle meshes, closest-point projection tends to snap to the edges of the mesh (blue curve inset), resulting in unexpectedly jaggy projected curves, even for smooth 3D input strokes (black curve inset). These discontinuities are due to the discrete nature of the mesh representation, as well as spatial singularities in closest-point computation, even for smooth 3D objects. We mitigate this problem by formulating an extension of Panozzo et al.'s Phong projection [2013] in § 3.2, which simulates projection of points onto an imaginary smooth surface approximated by the triangle mesh. We denote this smooth-closest-point projection as π_SCP (red curve inset). Pros and Cons: The biggest strength of proximity-based projection is that it exploits the immersive concept of drawing directly on or near an object, using the spatial relationship between a 3D stroke point and the 3D object to determine projection. The main limitation is that since users rarely draw precisely on the surface, discontinuities and local extrema persist when projecting distantly drawn stroke points, even when using smooth-closest-point. In § 4.1, we address this problem using stroke mimicry to anchor distant stroke points close to the object, to be finally projected using smooth-closest-point. Our goal with smooth-closest-point projection is to define a mapping from a 3D point to a point on M that approximates the closest-point projection but tends to be functionally smooth, at least for points near the 3D object.
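For reference, the discrete closest-point projection π_snap that smooth-closest-point is designed to improve upon can be sketched as below, using the standard closest-point-on-triangle test over a brute-force scan of faces. This is our illustration, not the paper's code; a real implementation would use a BVH or similar acceleration structure rather than a linear scan.

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), via Voronoi-region tests."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                  # vertex region a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                  # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab          # edge region ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                  # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac          # edge region ac
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge bc
    denom = 1.0 / (va + vb + vc)                  # interior: barycentric
    return a + ab * (vb * denom) + ac * (vc * denom)

def snap(p, verts, faces):
    """pi_snap: brute-force closest point on a triangle mesh."""
    best, best_d = None, np.inf
    for i, j, k in faces:
        q = closest_point_on_triangle(p, verts[i], verts[j], verts[k])
        d = np.linalg.norm(p - q)
        if d < best_d:
            best, best_d = q, d
    return best
```

The edge-snapping artifacts discussed above arise because this map is only piecewise-smooth across the Voronoi regions of the mesh's faces, edges, and vertices.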
We note that computing the closest point to a Laplacian-smoothed mesh proxy, for example, will also provide a smoother mapping than π_snap, but a potentially poor closest-point approximation to the original mesh. Phong projection, introduced by Panozzo et al. [2013], addresses these goals for points expressible as weighted averages of points on M, but we extend their technique to define a smooth-closest-point projection for points in the neighbourhood of the mesh. For completeness, we first present a brief overview of their technique. Phong projection is a two-step approach to map a point y^3 ∈ R^3 to a manifold triangle mesh M embedded in R^3, emulating closest-point projection on a smooth surface approximated by the triangle mesh. First, M is embedded in a higher-dimensional Euclidean space R^d such that Euclidean distance (between points on the mesh) in R^d better approximates geodesic distances in R^3. Second, analogous to vertex normal interpolation in Phong shading, a smooth surface is approximated by blending tangent planes across edges. Barycentric coordinates at a point within a triangle are used to blend the tangent planes corresponding to the three edges incident to the triangle. We extend the first step to a higher-dimensional embedding of not just the triangle mesh M, but a tetrahedral representation of an offset volume around the mesh M (Figure 5). The second step remains the same, and we refer the reader to Panozzo et al. [2013] for details. For clarity, we refer to M embedded in R^3 as M^3, and the embedding in R^d as M^d. Panozzo et al. compute M^d by first embedding a subset of the vertices in R^d using metric multi-dimensional scaling (MDS) [Cox and Cox 2008], aiming to preserve the geodesic distances between the vertices. This subset consists of the high-curvature vertices of M. The embedding of the remaining vertices is then computed using LS-meshes [Sorkine and Cohen-Or 2004].
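The transfer of a point into the embedded space via barycentric coordinates, applied per tetrahedron of the offset volume described next, can be sketched as follows. This is an illustrative reconstruction; locating the containing tetrahedron and computing the MDS embedding itself are omitted.

```python
import numpy as np

def tet_bary(p, v0, v1, v2, v3):
    """Barycentric coordinates of p in tetrahedron (v0..v3): solve
    p = w0*v0 + w1*v1 + w2*v2 + w3*v3 with the weights summing to 1."""
    T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
    w123 = np.linalg.solve(T, p - v0)
    return np.concatenate([[1 - w123.sum()], w123])

def lift(p, tet3, tetd):
    """Map a 3D point inside a tetrahedron of the offset volume to R^d
    using the same barycentric weights on the embedded tet vertices."""
    w = tet_bary(p, *tet3)
    return w @ np.asarray(tetd)
```

Because the weights are affine-invariant, a point interior to a tetrahedron in R^3 lands at the corresponding interpolated location in R^d.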
For the problem of computing weighted averages on surfaces, one only needs to project 3D points that correspond to points on the mesh: x^d_i is defined as the point on M^d with the same implicit coordinates (triangle and barycentric coordinates) as x^3_i has on M^3. Therefore, their approach only embeds M into R^d (Figure 5a,c). In contrast, we want to project arbitrary points near M^3 onto it using the Phong projection. Therefore, we compute the offset surfaces at signed distance ±µ from M. We then compute a tetrahedral mesh T^3_M of the space between these two surfaces in R^3. In the final step, we embed the vertices of T_M in R^d using MDS and LS-meshes as described above. Note that all of the above steps are realized in a precomputation. Now, given a 3D point y^3 within a distance µ from M^3, we situate it within T^3_M, use tetrahedral barycentric coordinates to infer its location in R^d, and then compute its Phong projection (Figure 5b,c). We fall back to closest-point projection for points outside T^3_M, since Phong projection converges to closest-point projection when far from M. Furthermore, we set µ large enough to easily handle our smooth-closest-point queries in § 4.1. We implemented the four different context-free projection approaches in Figure 4, and had 4 users informally test each, drawing a variety of curves on the various 3D models seen in this paper. Qualitatively, we made a number of observations:
- Head-centric and Occlude projections become unpredictable if the user is inadvertently changing their viewpoint while drawing. These projections are also only effective when drawing frontally on an object, like with a 2D interface. Neither as a result exploits the potential gains of mid-air drawing in AR/VR.
- Spraycan projection was clearly the most effective context-free technique.
Commonly used for graffiti and airbrushing, usually on fairly flat surfaces, we noted however that consciously reorienting the controller while drawing on or around complex objects was both cognitively and physically tiring.
- Snap projection was quite sensitive to changes in the distance of the stroke from the object surface, and in general produced the most undulating projections due to closest-point singularities.
- All projections converge to the mid-air user stroke when it precisely conforms to the surface of the 3D object. But as the distance between the object and points on the mid-air stroke increases, their behavior diverges quickly.
- While users did draw in the vicinity of, and mostly above, the object surface, they rarely drew precisely on the object. The average distance of stroke points from the target object was observed to be 4.8 cm in a subsequent user study (§ 5).
- The most valuable insight, however, was that the user stroke in mid-air often tended to mimic the expected projected curve.
Context-free approaches, by design, are unable to capture this mimicry, i.e., the notion that the change between projected points as we draw a stroke is commensurate with the change in the 3D points along the stroke. This inability, due to a lack of curve history or context, materializes as problems in different forms. 3.3.1 Projection Discontinuities. Proximal projection (including smooth-closest-point) can be highly discontinuous with increasing distance from the 3D object, particularly in concave regions (Figure 6a). Mid-air drawing along valleys without staying in precise contact with the virtual object is thus extremely difficult. Raycast projections can similarly suffer large discontinuous jumps across occluded regions (in the ray direction) of the object (Figure 6d).
While this problem theoretically exists in 2D interfaces as well, it is less observed in practice for two reasons: 2D drawing on a constraining physical surface is significantly more precise than mid-air drawing in AR/VR [Arora et al. 2017]; and artists minimize such discontinuities by carefully choosing appropriate views (raycast directions) before drawing each curve. Automatic direction control of the view or controller, while effective in 2D [Ortega and Vincent 2014], is detrimental to a sense of agency and presence in AR/VR. 3.3.2 Snapping. Proximity-based methods also tend to get stuck on sharp (or high-curvature) convex features of the object (Figure 6b). While this can be useful to trace along a ridge feature, it is particularly problematic for general curve-on-surface drawing. 3.3.3 Projection Depth Disparity. The relative orientation between the 3D object surface and the raycast direction can cause large depth disparities between parts of user strokes and curves projected by raycasting (Figure 6c). Such irregular bunching or spreading of points on the projected curve also goes against our observation of stroke mimicry. Users can arguably reduce this disparity by continually orienting the view/controller to keep the projection ray well aligned with the object surface normal. Such re-orientation however can be tiring, ergonomically awkward, and deviates from the 2D experience, where pen/brush tilt only impacts curve aesthetic, and not shape. We noted that the Occlude and Spraycan techniques were complementary: drawing with Occlude on parts of an object frontal to the view provided good comfort and control, which degraded when drawing closer to the object silhouette; we observed the opposite when drawing with Spraycan. We thus implemented a hybrid projection, where the ray direction was interpolated between Occlude and Spraycan based on alignment with the most recently projected smooth surface normal.
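One plausible form of such a hybrid, weighting the two ray directions by how frontal the last projected surface normal is, could look like this. The exact interpolation weight is not specified here and is our assumption; a spherical interpolation of directions would be a natural alternative to the linear blend.

```python
import numpy as np

def hybrid_ray_dir(occlude_dir, spray_dir, surf_normal):
    """Blend Occlude and Spraycan ray directions: weight toward Occlude
    when it opposes the last projected surface normal (frontal drawing),
    toward Spraycan otherwise. All inputs are unit vectors."""
    t = np.clip(-(occlude_dir @ surf_normal), 0.0, 1.0)  # 1 = fully frontal
    d = t * occlude_dir + (1 - t) * spray_dir
    return d / np.linalg.norm(d)
```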
Unfortunately, the difference between Occlude and Spraycan ray directions was often large enough to make even smooth ray transitions abrupt and hard to control. All these problems point to the projection function ignoring the shape of the mid-air stroke P and the projected curve Q, and can be addressed using projection functions that explicitly incorporate both. We call these functions anchored. The limitations of context-free projection can be addressed by equipping stroke point projection with the context/history of recently drawn points and their projections. In this paper, we minimally use only the most recent stroke point p_{i−1} and its projection q_{i−1} as context to anchor the current projection. Any reasonable context-free projection can be used for the first stroke point p_0. We use spraycan, π_spray, our preferred context-free technique. For subsequent points (i > 0), we compute the anchored stroke point: r_i = q_{i−1} + ∆p_i, (2) where ∆p_i = p_i − p_{i−1}. We then compute q_i as a projection of the anchored stroke point r_i onto M that attempts to capture ∆p_i ≈ ∆q_i. Anchored projection captures our observation that the mid-air user stroke tends to mimic the shape of the intended curve on the surface. While users do not adhere consciously to any precise geometric formulation of mimicry, we observe that users often draw the intended projected curve as a corresponding stroke on an imagined offset or translated surface (Figure 7). A good general projection of the anchored point r_i to M thus needs to be continuous, predictable, and loosely capture this notion of mimicry. The controller sampling rate in current AR/VR systems is 50Hz or more, meaning that even during ballistic movements, the distance ∆p_i for any stroke sample i is of the order of a few millimetres. Consequently, the anchored stroke point r_i is typically much closer to M than the stroke point p_i, making closest-point snap projection a compelling candidate for projecting r_i.
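The anchored recurrence above amounts to a one-line loop around any point-to-surface projector; a minimal sketch, with a generic `project` standing in for the surface projection and `project_first` handling p_0:

```python
import numpy as np

def anchored_project(stroke, project, project_first):
    """Anchored projection: r_i = q_{i-1} + dp_i, then q_i = project(r_i).
    `project_first` projects the first stroke point (e.g. a spraycan ray)."""
    qs = [project_first(stroke[0])]
    for i in range(1, len(stroke)):
        r = qs[-1] + (stroke[i] - stroke[i - 1])  # transport last projection
        qs.append(project(r))
    return qs
```

With projection onto the plane z = 0 as a stand-in surface, the projected deltas reproduce the tangential deltas of the mid-air stroke, which is exactly the mimicry behavior sought.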
Such an anchored closest-point projection explicitly minimizes ‖∆p_i − ∆q_i‖, but precise minimization is less important than avoiding projection discontinuities and undesirable snapping, even for points close to the mesh. Our formulation of a smooth-closest-point projection π_SCP in § 3.2 addresses these goals precisely. Also note that the maximum observed ‖∆p‖ for the controller readily defines the offset distance µ for our pre-computed tet mesh T³_M. We define mimicry projection as q_i = π_SCP(r_i) = π_SCP(q_{i−1} + ∆p_i). (3) We further explore refinements to mimicry projection that might improve curve projection in certain scenarios. Planar curves are very common in design and visualization [McCrae et al. 2011]. We can locally encourage planarity in mimicry projection by constructing a plane N_i with normal ∆p_i × ∆p_{i−1} (i.e., the local plane of the mid-air stroke), passing through the anchor point r_i (Figure 7b). We then intersect N_i with M; q_i is defined as the closest point to r_i on the intersection curve that contains q_{i−1}. Note, we use π_spray(p_i) for i < 2, and we retain the most recently defined normal direction (that of N_{i−1} or prior) when ∆p_i × ∆p_{i−1} is undefined. We find this method works well for near-planar curves, but the plane is sensitive to noise in the mid-air stroke (Figure 9f), and can feel sticky or less responsive for non-planar curves. Offset and parallel surface drawing captures the observation that users tend to draw an intended curve as a corresponding mid-air stroke on an imaginary offset or parallel surface of the object M. While we do not expect users to draw precisely on such a surface, we note that it is unlikely a user would intentionally draw orthogonal to such a surface, along the gradient of the 3D object.
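The local stroke-plane construction, including the degenerate-case fallback, can be sketched in a few lines (function name and tolerance are illustrative assumptions):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def stroke_plane_normal(dp_curr, dp_prev, prev_normal):
    """Normal of the local stroke plane N_i, i.e. dp_i x dp_{i-1} (sketch).
    When the cross product degenerates (collinear or near-zero segments),
    retain the most recently defined normal, as the paper does."""
    n = cross(dp_curr, dp_prev)
    m = sum(x * x for x in n) ** 0.5
    if m < 1e-9:
        return prev_normal
    return [x / m for x in n]
```

The sensitivity noted in the text follows directly from this construction: small noise in consecutive ∆p segments can rotate the plane normal arbitrarily when the segments are nearly collinear.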
In scenarios where a user is sub-consciously drawing on an offset surface of M (an isosurface of its signed-distance function d_M(·)), we can remove the component of a user stroke segment that lies along the gradient ∇d_M when computing the desired anchor point r_i in Equation 4 as (Figure 7c): r_i = q_{i−1} + ∆p_i − (∆p_i · ∇d_M(p_i)) ∇d_M(p_i). (4) We can similarly locally constrain user strokes to a parallel surface of M in Equation 5 as: r_i = q_{i−1} + ∆p_i − (∆p_i · ∇d_M(q_{i−1})) ∇d_M(q_{i−1}). (5) Note that the difference from Eq. 4 is the position where ∇d_M is computed, as shown in Figure 7d. A parallel surface better matched user expectation than an offset surface in our pilot testing, but both techniques produced poor results when user drawing deviated from these imaginary surfaces (Figure 9g-l). For completeness, we also investigated raycast alternatives for projecting the anchored stroke point r_i. We used similar priors of local planarity and offset or parallel surface transport as with the mimicry refinements, to define ray directions. Figure 8 shows two such options. In Figure 8a, we cast a ray in the local plane of motion, orthogonal to the user stroke direction ∆p_i. We construct the local plane containing r_i spanned by ∆p_i and p_{i−1} − q_{i−1}, and then define the ray direction orthogonal to ∆p_i in this plane. Since r_i may be inside M, we cast two rays bi-directionally from r_i. If both rays successfully intersect M, we choose q_i to be the intersection point closer to r_i, a heuristic that works well in practice. As with locally planar mimicry projection, this technique suffered from instability in the local plane. Motivated by mimicry, in Figure 8b, we also explored parallel transport of the projection ray direction along the user stroke. For i > 0, we parallel transport the previous projection direction q_{i−1} − p_{i−1} along the mid-air curve by rotating it with the rotation that aligns ∆p_{i−1} with ∆p_i. Once again bi-directional rays are cast from r_i, and q_i is set to the closer intersection with M.
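The gradient-component removal shared by the offset and parallel variants can be sketched as a single helper; the only difference between the two variants is where the gradient of d_M is evaluated (the function name is ours):

```python
def adjusted_anchor(q_prev, dp, grad):
    """Anchor point with the component of the stroke segment dp along the
    signed-distance gradient removed (sketch of the Eq. 4/5 refinements).
    grad is the gradient of d_M, evaluated at p_i for the offset variant
    or at q_{i-1} for the parallel variant; it is normalized here."""
    m = sum(g * g for g in grad) ** 0.5
    g = [x / m for x in grad]
    s = sum(a * b for a, b in zip(dp, g))       # component along the gradient
    dp_t = [a - s * b for a, b in zip(dp, g)]   # tangential remainder
    return [a + b for a, b in zip(q_prev, dp_t)]
```

When the user really is drawing on an offset or parallel surface, dp is already tangential and this reduces to plain anchoring; when they deviate, the stripped normal component is exactly what causes the failures shown in Figure 9g-l.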
In general, we found that all raycast projections, even when anchored, suffered from unpredictability over long strokes, and from stroke discontinuities when there are no ray-object intersections (Figure 9n,o). In summary, extensive pilot testing of the anchored techniques revealed that they seemed generally better than context-free approaches, especially when users drew further away from the 3D object. Among the anchored techniques, stroke mimicry, captured as an anchored smooth-closest-point projection, proved to be theoretically elegant, and practically the most resilient to ambiguities of user intent and differences in drawing style among users. Anchored closest-point can be a reasonable proxy for anchored smooth-closest-point when pre-processing the 3D virtual objects is undesirable. Our techniques are implemented in C#, with interaction, rendering, and VR support provided by the Unity Engine. For the smooth closest-point operation, we modified Panozzo et al.'s [2013] reference implementation, which includes pre-processing code written in MATLAB and C++, and real-time code in C++. The real-time projection implementation is exposed to our C# application via a compiled dynamic library. Fig. 9. Mimicry vs. other anchored stroke projections: mid-air strokes are shown in black and mimicry curves in red. Anchored closest-point (blue) is similar to mimicry on smooth, low-curvature meshes (a,b) but degrades with mesh detail/noise (c,d). Locally planar projection (blue) is susceptible to local plane instability (e,f). Parallel (purple, h,k) or offset (blue, i,l) surface based projection fail in (h,l) when the user stroke deviates from said surface, while mimicry remains reasonable (g,j). Compared to mimicry (m), anchored raycasting based on a local plane (purple, n), or ray transport (blue, o), can be discontinuous. In their implementation, as well as ours, d = 8; that is, we embed M in R⁸ for computing the Phong projection.
We use µ = 20cm, and compute the offset surfaces using libigl. We then improve the surface quality using TetWild [Hu et al. 2018], before computing the tetrahedral mesh T_M between the two surfaces using TetGen [Si 2015]. We support fast closest-point queries using an AABB tree implemented in geometry3Sharp [Schmidt 2017]. Signed distance is also computed using the AABB tree and fast winding numbers [Barill et al. 2018], and the gradient ∇d_M is computed using central finite differences. To ease replication of our various techniques and aid future work, we will open-source our implementation. We now formally compare our most promising projection, mimicry, to the best state-of-the-art context-free projection, spraycan. We designed a user study to compare the performance of the spraycan and mimicry methods for a variety of curve-drawing tasks. We selected six shapes for the experiment (Figure 10), aiming to cover a diverse range of shape characteristics: sharp features (cube), large smooth regions (trebol, bunny), small details with ridges and valleys (bunny), thin features (hand), and topological holes (torus, fertility). We then sampled ten distinct curves on the surface of each of the six objects. A canonical task in our study involved the participant attempting to re-create a given target curve from this set. We designed two types of drawing tasks, shown in Figure 11: tracing curves, where a participant tried to trace over a visible target curve using a single smooth stroke; and re-creating curves, where a participant attempted to re-create from memory a visible target curve that was hidden as soon as the participant started to draw. An enumerated set of keypoints on the curve, however, remained as a visual reference, to aid the participant in re-creating the hidden curve with a single smooth stroke.
The rationale behind asking users to draw target curves is both to control the length, complexity, and nature of curves drawn by users, and to have an explicit representation of the user-intended curve. Fig. 11. The two tasks used in our study: curve tracing, with the target curve visible when drawing (a); and curve re-creation, where the target curve is initially visible (b) but is hidden as soon as the participant starts to draw (c). Curve tracing and re-creation are fundamentally different drawing tasks, each with important applications [Arora et al. 2017]. Our curve re-creation task is designed to capture free-form drawing, with minimal visual suggestion of the intended target curve. We wanted to design target curves that could be executed using a single smooth motion. Since users typically draw sharp corners using multiple strokes [Bae et al. 2008], we constrain our target curves to be smooth, created using cardinal cubic B-splines computed on the meshes. We also control the length and curvature complexity of the curves, as pilot testing showed that very simple and short curves can be reasonably executed by almost any projection technique. Curve length and complexity is modeled by placing spline control points at mesh vertices, and specifying the desired geodesic distance and Gauß map distance between consecutive control points on the curve. We represent a target curve using four parameters (n, i_0, k_G, k_N), where n is the number of spline control points, i_0 the vertex index of the first control point, and k_G, k_N constants that control the geodesic and normal map distance between consecutive control points. We define the desired geodesic distance between consecutive control points as D_G = k_G × ‖BBox(M)‖, where ‖BBox(M)‖ is the length of the bounding box diagonal of M. The desired Gauß map distance (angle between the unit vertex normals) between consecutive control points is simply k_N. A target curve C_0, . . .
, C_{n−1} starting at vertex v_{i_0} of the mesh is generated incrementally for i > 0, by choosing C_i ∈ V′ to best match the desired geodesic distance D_G and Gauß map distance k_N from C_{i−1}, where d_G and d_N compute the geodesic and normal distance between two points on M, and V′ ⊂ V contains only those vertices of M whose geodesic distance from C_0, . . . , C_{i−1} is at least D_G/2. The restricted subset of vertices conveniently helps prevent (but doesn't fully avoid) self-intersecting or nearly self-intersecting curves. Curves with complex self-intersections are less important practically, and can be particularly confusing for the curve re-creation task. All our target curve samples were generated using k_G ∈ [0.05, 0.25], k_N ∈ [π/6, 5π/12], n = 6, and a randomly chosen i_0. The curves were manually inspected for self-intersections, and infringing curves rejected. We then defined keypoints on the target curves as follows: curve endpoints were chosen as keypoints; followed by greedily picking extrema of geodesic curvature, while ensuring that the arc-length distance between any two consecutive keypoints was at least 3cm; and concluding the procedure when the maximum arc-length distance between any consecutive keypoints was below 15cm. Our target curves had between 4-9 keypoints (including endpoints). The main variable studied in the experiment was the projection method (spraycan vs. mimicry), realized as a within-subjects variable. The order of methods was counterbalanced between participants. For each method, participants were exposed to all six objects. Object order was fixed as torus, cube, trebol, bunny, hand, and fertility, based on our personal judgment of drawing difficulty. The torus was used as a tutorial, where participants had access to additional instructions visible in the scene, and their strokes were not utilized for analysis. For each object, the order of the 10 target strokes was randomized. The first five were used for the tracing task, while the remaining five were used for curve re-creation.
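The desired control-point spacing used for target-curve generation (D_G from the bounding-box diagonal, Gauß-map distance k_N) can be sketched as follows; the assertion simply encodes the study's reported parameter ranges:

```python
import math

def target_curve_spacing(vertices, k_G, k_N):
    """Desired spacing between consecutive spline control points (sketch):
    geodesic distance D_G = k_G * ||BBox(M)|| (bounding-box diagonal)
    and Gauss-map (unit-normal angle) distance D_N = k_N."""
    lo = [min(v[i] for v in vertices) for i in range(3)]
    hi = [max(v[i] for v in vertices) for i in range(3)]
    diag = math.dist(lo, hi)
    # Parameter ranges used for the study's target curves.
    assert 0.05 <= k_G <= 0.25 and math.pi / 6 <= k_N <= 5 * math.pi / 12
    return k_G * diag, k_N
```

Normalizing by the bounding-box diagonal keeps the sampled curves comparable in relative length across objects of very different absolute scale.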
The target curve for the first tracing task was repeated after the five unique curves, to gauge user consistency and learning effects. A similar repetition was used for curve re-creation. Participants thus performed 12 curve drawing tasks per object, leading to a total of 12 × 5 (objects) × 2 (projections) = 120 strokes per participant. Owing to the COVID-19 physical distancing guidelines, the study was conducted in the wild, on participants' personal VR equipment at their homes. A 15-minute instruction video introduced the study tasks and the two projection methods. Participants then filled out a consent form and a questionnaire to collect demographic information. This was followed by them testing the first projection method and filling out a questionnaire to express their subjective opinions of the method. They then tested the second method, followed by a similar questionnaire, and questions involving subjective comparisons between the two methods. Participants were required to take a break after testing the first method, and were also encouraged to take breaks after drawing on the first three shapes for each method. The study took approximately an hour, including the questionnaires. Twenty participants (5 female) aged 21-47 from five countries participated in the study. All but one were right-handed. Participants self-reported a diverse range of artistic abilities (min. 1, max. 5, median 3 on a 1-5 scale), and had varying degrees of VR experience, ranging from below 1 year to over 5 years. Thirteen participants had a technical computer graphics or HCI background, while ten had experience with creative tools in VR, with one reporting professional usage. Participants were paid ≈ 22 USD as a gift card. As the study was conducted on personal VR setups, a variety of commercial VR devices were utilized: Oculus Rift, Rift S, and Quest using a Link cable; HTC Vive and Vive Pro; Valve Index; and Samsung Odyssey using Windows Mixed Reality.
All but one participant used a standing setup, allowing them to freely move around. Before each trial, participants could use the "grab" button on their controller (in the dominant hand) to grab the mesh, positioning and orienting it as desired. The trial started as soon as the participant started to draw by pressing the "main trigger" on their dominant-hand controller. This action disabled the grabbing interaction: participants could not draw and move the object simultaneously. As noted earlier, for curve re-creation tasks, this had the additional effect of hiding the target curve, but leaving keypoints visible. We recorded the head position h and orientation, controller position c and orientation, projected point q, and timestamp t, for each mid-air stroke point p = c. In general, we will refer to a task's target curve as X, the mid-air strokes executed as P^S and P^M, and the corresponding curves created using spraycan and mimicry projection as Q^S and Q^M, respectively. We drop the superscript when the projection method used is not relevant, referring to a mid-air stroke as P and its projected curve as Q. We formulated three criteria to filter out meaningless user strokes. Short curves: we ignore projected curves Q that are too short compared to the length of the target curve X (conservatively, curves less than half as long as the target curve). While it is possible that the user stopped drawing mid-way out of frustration, we found it was more likely that they prematurely released the controller trigger by accident. Both curve lengths are computed in R³ for efficiency. Stroke noise: we ignore strokes for which the mid-air stroke is too noisy. Specifically, mid-air strokes with distant consecutive points (∃i s.t. ‖p_i − p_{i−1}‖ > 5cm) are rejected. Inverted strokes: while we labelled keypoints with numbers and marked start and end points in green and red (Figure 11), some users occasionally drew the target curve in reverse.
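The inversion-count criterion used to flag such reversed strokes (detailed below) can be sketched directly; function names are ours:

```python
def count_inversions(indices):
    """Number of pairs (j, k), j < k, with indices[j] > indices[k]."""
    return sum(1
               for j in range(len(indices))
               for k in range(j + 1, len(indices))
               if indices[j] > indices[k])

def is_inverted(indices):
    """Flag a stroke as inverted when the inversions among the
    keypoint-closest sample indices i_0..i_l exceed half the maximum
    l(l+1)/2, i.e. more than l(l+1)/4, per the paper's criterion."""
    l = len(indices) - 1
    return count_inversions(indices) > l * (l + 1) / 4
```

A forward stroke yields a monotone index sequence (zero inversions); a fully reversed one yields the maximum l(l+1)/2, so thresholding at half the maximum cleanly separates the two.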
The motion to draw a curve in reverse is not symmetric, and such curves are thus rejected. We detect inverted strokes by looking at the indices i_0, i_1, . . . , i_l of the points in Q which are closest to the keypoints x_{k_0}, x_{k_1}, . . . , x_{k_l} of X. Ideally, the sequence i_0, . . . , i_l should have no inversions, i.e., ∀ 0 ≤ j < k ≤ l, i_j ≤ i_k; and a maximum of l(l + 1)/2 inversions, if Q is aligned in reverse with X. We consider curves Q with more than l(l + 1)/4 (half the maximum) inversions to be inadvertently inverted, and reject them. We compute distances to the keypoints in R³ for efficiency. Despite conducting our experiment remotely without supervision, we found that 95.6% of the strokes satisfied our criteria and could be utilized for analysis. For comparisons between spraycan and mimicry, we reject curve pairs where either curve did not satisfy the quality criteria. Out of 1200 curve pairs (2400 total strokes), 1103 (91.9%) satisfied the quality criteria and were used for analysis, including 564 pairs for the curve re-creation task and 539 for the tracing task. We define 10 different statistical measures (Table 1) to compare spraycan and mimicry curves in terms of their accuracy, aesthetic, and effort in curve creation. Table 1. Quantitative results (mean ± std-dev.) of the comparisons between mimicry and spraycan projection. All measures are analyzed using Wilcoxon signed-rank tests; lower values are better, and significantly better values (p < .05) are shown in boldface. Accuracy, aesthetic, and physical effort measures are shown with green, red, and blue backgrounds, respectively. We consistently use the non-parametric Wilcoxon signed-rank test for all quantitative measures instead of a parametric test such as the paired t-test, since the recorded data for none of our measures was normally distributed (normality hypothesis rejected via the Kolmogorov-Smirnov test, p < .005). 6.2.1 Curve Accuracy.
Accuracy is computed using two measures of distance between points on the projected curve Q and target curve X. Both curves are densely re-sampled using m = 101 sample points equi-spaced by arc-length. Given Q = q_0, . . . , q_{m−1} and X = x_0, . . . , x_{m−1}, we compute the average equi-parameter distance D_ep as D_ep = (1/m) Σ_{i=0}^{m−1} d_E(q_i, x_i), (7) where d_E computes the Euclidean distance between two points in R³. We also compute the average symmetric distance D_sym as D_sym = (1/2m) Σ_{i=0}^{m−1} (min_j d_E(q_i, x_j) + min_j d_E(x_i, q_j)). In other words, D_ep computes the distance between corresponding points on the two curves, and D_sym computes the average minimum distance from each point on one curve to the other curve. Fig. 12. Curvature measures (a,b) indicate that mimicry produces significantly smoother and fairer curves than spraycan for both tracing (left) and re-creating (right) tasks. Pairwise comparison plots between mimicry (y-axis) and spraycan (x-axis) favour mimicry for the vast majority of points (points below the y = x line). A linear regression fit (on the log plots) is shown as a dashed line. Example curve pairs (orange points in (a,b)) for curve tracing (left) and re-creating (right) are also shown, with the target curve X in gray (c). For both tracing and re-creation tasks, D_ep indicated that mimicry produced significantly better results than spraycan (see Table 1, Figures 1c, 12). The D_sym difference was not statistically significant, evidenced by users correcting their strokes to stay close to the intended target curve (at the expense of curve aesthetic). 6.2.2 Curve Aesthetic. For most design applications, jagged projected curves, even if geometrically quite accurate, are aesthetically undesirable [McCrae and Singh 2008]. Curvature-based measures are typically used to measure the fairness of curves. We report three such measures of curve aesthetic for the projected curve Q.
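The two accuracy measures defined above can be sketched for curves already resampled to a common point count (the study uses m = 101):

```python
import math

def accuracy_measures(Q, X):
    """Equi-parameter distance D_ep and symmetric distance D_sym between
    two curves resampled to the same number of arc-length-uniform points."""
    assert len(Q) == len(X)
    m = len(Q)
    # D_ep: average distance between corresponding samples.
    D_ep = sum(math.dist(q, x) for q, x in zip(Q, X)) / m
    # D_sym: average of the two one-sided minimum-distance averages.
    D_sym = 0.5 * (sum(min(math.dist(q, x) for x in X) for q in Q) / m +
                   sum(min(math.dist(x, q) for q in Q) for x in X) / m)
    return D_ep, D_sym
```

D_ep penalizes parameterization differences (bunching or lag along the curve) that D_sym ignores, which is why the two measures can disagree, as in the study's results.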
We note that the smoothness quality of the user stroke P was similar to Q, and significantly poorer than the target curve X. This is expected, since drawing in mid-air smoothly and with precision is difficult, and such strokes are usually neatened post-hoc [Arora et al. 2018]. We therefore avoid comparisons to the target curve and simply report three aesthetic measures for a projected curve Q = q_0, . . . , q_{n−1}. We first refine Q by computing the exact geodesic on M between consecutive points of Q [Surazhsky et al. 2005], to create Q̃ with points q̃_0, . . . , q̃_{k−1}, k ≥ n. We choose to normalize our curvature measures using L_X, the length of the corresponding target curve X. The normalized Euclidean curvature for Q is defined as K_E = (1/L_X) Σ_i θ_i, where θ_i is the angle between the two segments of Q̃ incident on q̃_i. Thus, K_E is the total discrete curvature of Q̃, normalized by the target curve length. Since Q̃ is embedded in M, we can also compute discrete geodesic curvature, computed as the deviation from the straightest geodesic for a curve on a surface. Using a signed θ̃_i defined at each point q̃_i via Polthier and Schmies's definition [2006], we compute the normalized geodesic curvature as K_G = (1/L_X) Σ_i |θ̃_i|. Finally, we define fairness [Arora et al. 2017; McCrae and Singh 2008] as the first-order variation in geodesic curvature, thus defining the normalized fairness deficiency as F = (1/L_X) Σ_i |θ̃_{i+1} − θ̃_i|. For all three measures, a lower value indicates a smoother, more pleasing curve. Wilcoxon signed-rank tests on all three measures indicated that mimicry produced significantly smoother and better curves than spraycan (Table 1). Quantitatively, we use the amount of head (HMD) and hand (controller) movement, and the stroke execution time τ, as proxies for physical effort. For head and hand translation, we first filter the position data with a Gaussian-weighted moving average filter with σ = 20ms.
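The curvature-based aesthetic measures above can be sketched for a polyline. This is a Euclidean sketch: the paper's K_G and F use signed geodesic angles on the surface, which we approximate here with unsigned turning angles:

```python
import math

def turning_angles(curve):
    """Unsigned turning angle at each interior point of a 3D polyline."""
    angles = []
    for i in range(1, len(curve) - 1):
        a = [p - q for p, q in zip(curve[i], curve[i - 1])]
        b = [p - q for p, q in zip(curve[i + 1], curve[i])]
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        c = sum(x * y for x, y in zip(a, b)) / (na * nb)
        angles.append(math.acos(max(-1.0, min(1.0, c))))
    return angles

def aesthetic_measures(curve, L_X):
    """K_E (total discrete curvature / target length) and a fairness
    deficiency F (first-order variation of the angles / target length)."""
    th = turning_angles(curve)
    K_E = sum(th) / L_X
    F = sum(abs(th[i + 1] - th[i]) for i in range(len(th) - 1)) / L_X
    return K_E, F
```

Normalizing by the target length L_X, rather than by the projected curve's own length, prevents a jagged (and therefore longer) projection from masking its own roughness.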
We then define the normalized head/controller translation T_h and T_c as the length of the polyline defined by the filtered head/controller positions, normalized by the length of the target curve L_X. An important ergonomic measure is the amount of head/hand rotation required to draw the mid-air stroke. We first de-noise (filter) the forward and up vectors of the head/controller frame, using the same filter as for the positional data. We then re-orthogonalize the frames and compute the length of the curve defined by the filtered orientations in SO(3), using the angle between consecutive orientation data-points. We define the normalized head/controller rotation R_h and R_c as this orientation curve length, normalized by L_X. Table 1 summarizes the physical effort measures. We observe lower controller translation (effect size ≈ 5%) and execution time (effect size ≈ 12%) in favour of spraycan; and lower head translation and orientation (effect sizes ≈ 36%, 26%) in favour of mimicry. Noteworthy is the significantly reduced controller rotation using mimicry, with spraycan unsurprisingly requiring 35% (tracing) and 44% (re-creating) more hand rotation from the user. Quantifying Users' Tendency to Mimic. The study also provided an opportunity to test if users actually tended to mimic their intended curve X in the mid-air stroke P. To quantify the "mimicriness" of a stroke, we subsample P and X into m points as in § 6.2.1, use the correspondence as in Eq. 7, and look at the variation in the distance (the distance between the closest pair of corresponding points subtracted from that of the farthest pair) as a percentage of the target length L_X. We call this measure the mimicry violation of a stroke.
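The mimicry-violation measure just defined can be sketched directly (the function name is ours):

```python
import math

def mimicry_violation(P, X, L_X):
    """Spread of corresponding-point distances between the mid-air stroke P
    and target curve X (both resampled to the same point count), as a
    fraction of the target length L_X. Zero when P is an exact translate
    of X under this correspondence."""
    d = [math.dist(p, x) for p, x in zip(P, X)]
    return (max(d) - min(d)) / L_X
```

A pure translation of the target keeps every corresponding-point distance identical, so the spread, and hence the violation, vanishes; deviations from mimicry show up as a widening spread.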
Intuitively, the lower the mimicry violation, the closer the stroke P is to being a perfect mimicry of X, going to zero if it is a precise translation of X. Notably, users exhibited very similar tendencies to mimic for both techniques, with 86% (mimicry) and 80% (spraycan) of strokes exhibiting mimicry violation below 25% of L_X, and 71% and 66% below 20% of L_X, suggesting that mimicry is indeed a natural tendency. Recall that users repeated 2 of the 10 strokes per shape for both techniques. To analyze consistency across the repeated strokes, we compared the values of the stroke accuracy measure D_ep and the aesthetic measure F between the original stroke and the corresponding repeated stroke. Specifically, we measured the relative change |f(i) − f(i′)|/f(i), where (i, i′) is a pair of original and repeated strokes, and f(·) is either D_ep or F. Users were fairly consistent across both techniques, with the average consistency for D_ep being 35.4% for mimicry and 36.8% for spraycan, while for F it was 36.5% and 34.1%, respectively. Note that the averages were computed after removing extreme outliers outside the 5σ threshold. Qualitative Analysis. The mid- and post-study questionnaires elicited qualitative responses from participants on their perceived difficulty of drawing, curve accuracy and smoothness, mental and physical effort, understanding of the projection methods, and overall method of preference. Participants rated their perceived difficulty in drawing on the 6 study objects (Figure 13), validating our ordering of shapes in the experiment based on expected drawing difficulty. Accuracy, smoothness, and physical/mental effort responses were collected via 5-point Likert scales. We consistently order the choices from 1 (worst) to 5 (best) in terms of user experience in Figure 14, and report median (M) scores here.
Mimicry was perceived to be a more accurate projection method (tracing, re-creating M = 3, 3.5) compared to spraycan (M = 2, 2), with 9 participants perceiving their traced curves to be either very accurate or somewhat accurate with mimicry (compared to 2 for spraycan) (Figure 14a). User perception of stroke smoothness was also consistent with the quantitative results, with mimicry (tracing, re-creating M = 4, 4) clearly outperforming spraycan (tracing, re-creating M = 1, 2) (Figure 14b). Lastly, with no need for controller rotation, mimicry (M = 3) was perceived as less physically demanding than spraycan (M = 2), as expected (Figure 14c). The response to understanding and mental effort was more complex. Spraycan, with its physical analogy and mathematically precise definition, was clearly understood by all 20 participants (17 very well, 3 somewhat) (Figure 15a). Mimicry, conveyed as "drawing a mid-air stroke on or near the object as similar in shape as possible to the intended projection", was less clear to users (7 very well, 11 somewhat, 3 not at all). Despite not understanding the method, those 3 participants were able to create curves that were both accurate and smooth. Further, users perceived mimicry (M = 2.5) as less cognitively taxing than spraycan (M = 2) (Figure 14c). We believe this may be because users were less prone to consciously controlling their stroke direction and rather focused on drawing. The tendency to mimic may have thus manifested sub-consciously, as we had observed in pilot testing. Fig. 16. Gallery of free-form curves in red, drawn using mimicry. From left to right: tracing geometric features on the bunny; smooth maze-like curves on the cube; a maze-like curve with sharp corners and a spiral on the trebol; and artistic tattoo motifs on the hand. Some mid-air strokes (black) are hidden for clarity. The most important qualitative question was user preference (Figure 15b).
85% of the 20 participants preferred mimicry (10 highly preferred, 7 somewhat preferred). The remaining users were neutral (1/20) or somewhat preferred spraycan (2/20). We also asked participants to elaborate on their stated preferences and ratings. Participants (P4, 8, 16, 17) noted discontinuous "jumps" caused by spraycan, and felt the continuity guarantee of mimicry: it "seemed to deal with the types of jitter and inaccuracy VR setups are prone to better" (P6); it "could stabilize my drawing" (P9). P9 and P15 felt that mimicry projection was smoothing their strokes (no smoothing was employed): we believe this may be the effect of noise and inadvertent controller rotation, which mimicry ignores, but which can cause large variations with spraycan, perceived as curve smoothing. Some participants (P4, 17) felt that rotating the hand smoothly while drawing was difficult, while others missed the spraycan ability to simply use hand rotation to sweep out long projected curves from a distance (P2, 7). Participants commented on physical effort: "Mimicry method seemed to required [sic] much less head movement, hand rotation and mental planning" (P4). Participants appreciated the anchored control of mimicry in high-curvature regions (P1, 2, 4, 8), also noting that with spraycan, "the curvature of the surface could completely mess up my stroke" (P1). Some participants did feel that spraycan could be preferable when drawing on near-flat regions of the mesh (P3, 14, 19, 20). Finally, participants who preferred spraycan felt that mimicry required more thinking: "with mimicry, there was extra mental effort needed to predict where the line would go on each movement" (P3), or that mimicry felt "unintuitive" (P7) due to their prior experience using a spraycan technique. Some who preferred mimicry found it difficult to use initially, but felt it got easier over the course of the experiment (P4, 17). Complex 3D curves on arbitrary surfaces can be drawn in AR/VR with a single stroke, using mimicry (Figure 16).
Drawing such curves on 3D virtual objects is fundamental to many applications, including direct painting of textures [Schmidt et al. 2006]; tangent vector field design [Fisher et al. 2007]; texture synthesis [Lefebvre and Hoppe 2006; Turk 2001]; interactive selection, annotation, and object segmentation [Chen et al. 2009]; and seams for shape parametrization [Lévy et al. 2002; Rabinovich et al. 2017; Sawhney and Crane 2017], registration [Gehre et al. 2018], and quad meshing [Tong et al. 2006]. We showcase the utility and quality of mimicry curves within example applications (also see the supplemental video). Texture Painting: Figures 1e and 17 show examples of textures painted in VR using mimicry. The long, smooth, wraparound curves on the torus are especially hard to draw with 2D interfaces. Our implementation uses Discrete Exponential Maps (DEM) [Schmidt et al. 2006] to compute a dynamic local parametrization around each projected point q_i, to create brush strokes or geometric stamps on the object. Mesh Segmentation: Figures 1e and 18 show mimicry used for interactive segmentation in VR. In our implementation, users draw an almost-closed curve Q = {q_0, . . . , q_{n−1}} on the object using mimicry. We snap the points q_i to their nearest mesh vertices, use Dijkstra's shortest path to connect consecutive vertices, and close the cycle of vertices. A mesh region is selected or segmented using the mesh faces partitioned by these cycles, which are easy to draw in AR/VR but often require view changes and multiple strokes in 2D. Vector Field Design: Vector fields on meshes are commonly used for texture synthesis [Turk 2001], guiding fluid simulations [Stam 2003], and non-photorealistic rendering [Hertzmann and Zorin 2000]. We use mimicry curves as soft constraints to guide the vector field generation of Fisher et al. [2007]. Figure 19 shows example vector fields, visualized using Line Integral Convolutions [Cabral and Leedom 1993] in the texture domain.
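The shortest-path step of the segmentation application, connecting consecutive snapped curve vertices and closing the cycle, can be sketched as a standard Dijkstra search over the mesh's vertex adjacency graph (graph representation and names are ours):

```python
import heapq

def dijkstra_path(adj, src, dst):
    """Shortest vertex-to-vertex path in a weighted graph
    adj: {v: [(neighbour, edge_length), ...]}. Sketch of the step that
    connects consecutive snapped curve vertices on the mesh."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == dst:
            break
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for u, w in adj.get(v, []):
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u], prev[u] = nd, v
                heapq.heappush(pq, (nd, u))
    # Reconstruct the vertex path from dst back to src.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

Running this between each pair of consecutive snapped vertices, and once more from the last vertex back to the first, yields the closed vertex cycle that partitions the mesh faces into selectable regions.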
We have presented a detailed investigation of the problem of real-time inked drawing on 3D virtual objects in immersive environments. We show the importance of stroke context when projecting mid-air 3D strokes, and explore the design space of anchored projections. A 20-participant remote study showed mimicry to be preferred over the established spraycan projection for projecting mid-air strokes onto 3D objects in AR/VR. Both mimicry projection and performing VR studies in the wild do have some limitations. Further, while user stroke processing for 2D interfaces is a mature field of research, mid-air stroke processing for AR/VR is relatively nascent, with many directions for future work. "In the wild" VR Study Limitations. Ongoing pandemic restrictions presented both a challenge and an opportunity to remotely conduct a more natural study in the wild, with a variety of consumer VR hardware and setups. The enthusiasm of the VR community allowed us to readily recruit 20 diligent users, albeit with a bias towards young adult males. While the variation in VR headsets seemed to be of little consequence, there was a notable difference in the shape and size of the 3D controllers. Controller grip and weight can certainly impact mid-air drawing posture and stroke behavior. Controller size is also significant: a larger Vive controller, for example, has a higher chance of occluding target objects and the projected curve than a smaller Oculus Touch controller. We could have mitigated the impact of controller size by rendering a standard drawing tool in VR, but we preferred to remain application agnostic, rendering the familiar default controller that matched the physical controller in participants' hands. Further, no participants explicitly mentioned the controller getting in the way of their ability to draw. Overall, our study contributes a high-quality VR data corpus comprising ≈ 2400 user strokes, projected curves, intended target curves, and corresponding VR system states.
This data can serve as a benchmark for future work in mid-air stroke projection, and for data-driven learning techniques for mid-air stroke processing.

Mimicry Limitations. Our lack of a concise mathematical definition of observed stroke mimicry makes it harder to precisely communicate it to users. While a precise mathematical formulation may exist, conveying it to non-technical users can still be a challenging task. Mimicry ignores controller orientation, producing smoother strokes with less effort, but can give participants a reduced sense of sketch control (P2,3,6). We hypothesize that this reduced sense of control is in part due to the tendency of anchored smooth-closest-point projection to shorten the user stroke, sometimes creating a feeling of lag. Spraycan-like techniques, in contrast, convey a sense of amplified immediacy, and offer the explicit ability to make lagging curves catch up by rotating the controller in place.

Future work. Our goal was to develop a general real-time inked projection with minimal stroke context via anchoring. Optimizing the method to account for the entire partially projected stroke may improve projection quality. Relaxing the restriction of real-time inking would allow techniques such as spline fitting and global optimization that can account for the entire user stroke and the geometric features of the target object. Local parametrizations such as DEM (§ 7) can be used to incrementally grow or shrink the projected curve so that it does not lag behind the user stroke. Hybrid projections leveraging both proximity and raycasting are another avenue for future work. On the interactive side, we did experiment with feedback to encourage users to draw closer to a 3D object. For example, we tried varying the appearance of the line connecting the controller to the projected point based on its length, or providing aural/haptic feedback when the controller moved farther than a certain distance from the object.
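The stroke-shortening tendency of closest-point projection noted above is easy to see even in a toy setting: whenever the mid-air stroke hovers off the surface, projecting each sample to its closest surface point contracts the curve. The sketch below is our own illustration (not the paper's anchored smooth-closest-point algorithm), using a unit sphere, where the closest surface point to p is simply p/||p||; a stroke swept at radius 2 projects to a polyline exactly half as long.

```python
import numpy as np

def polyline_length(P):
    """Total arc length of a polyline given as an (n, 3) array."""
    return float(np.linalg.norm(np.diff(P, axis=0), axis=1).sum())

def closest_point_on_unit_sphere(P):
    # For a unit sphere centered at the origin, the closest surface
    # point to each sample p is p / ||p||.
    return P / np.linalg.norm(P, axis=1, keepdims=True)

# Mid-air stroke: a quarter-circle of radius 2 hovering over the sphere.
t = np.linspace(0.0, np.pi / 2, 64)
stroke = np.stack([2 * np.cos(t), 2 * np.sin(t), np.zeros_like(t)], axis=1)

projected = closest_point_on_unit_sphere(stroke)
ratio = polyline_length(projected) / polyline_length(stroke)
print(round(ratio, 3))  # → 0.5: the projected curve is half as long
```

Because the projected curve advances more slowly than the user's hand, the inked curve can appear to trail the controller, which is consistent with the feeling of lag participants reported.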
While these techniques can help users in specific drawing or tracing tasks, we found them to be distracting and harmful to stroke quality for general stroke projection. Bimanual interaction in VR, such as rotating the shape with one hand while drawing on it with the other (suggested by P3,19), can also be explored. Perhaps the most exciting area of future work is employing data-driven techniques to infer the user-intended projection, perhaps customized to the drawing style of individual users. Our study code and data will be made publicly available to aid in such endeavours. In summary, this paper presents early research on the processing and projection of mid-air strokes drawn on and around 3D objects, which we hope will inspire further work and applications in AR/VR.

Acknowledgments. We are thankful to Michelle Lei for developing the initial implementation of the context-free techniques, and to Jiannan Li and Debanjana Kundu for helping pilot our methods. We also thank various 3D model creators and repositories for the models we utilized: the Stanford bunny model courtesy of the Stanford 3D Scanning Repository, the trebol model provided by Shao et al. [2012], the fertility model courtesy of the Aim@Shape repository, the hand model provided by Jeffo89 on turbosquid.com, and the cup model (Figure 2) provided by Daniel Noree on thingiverse.com under a CC BY 4.0 license.

References
Substance Painter
Experimental Evaluation of Sketching on Surfaces in VR
SymbiosisSketch: Combining 2D & 3D Sketching for Designing Detailed 3D Objects in Situ
ILoveSketch: as-natural-as-possible sketching system for creating 3d curve models
Fast Winding Numbers for Soups and Clouds
Imaging Vector Fields Using Line Integral Convolution
3-Sweep: Extracting Editable Objects from a Single Photo
A Benchmark for 3D Mesh Segmentation
Cords: Geometric Curve Primitives for Modeling Contact
Multidimensional scaling.
In Handbook of data visualization
SecondSkin: Sketch-Based Construction of Layered 3D Models
HoloSketch: a virtual reality sketching/animation tool
Modeling by Drawing with Shadow Guidance
A Sketch-Based Interface for Collaborative Design
Design of Tangent Vector Fields
Sketching Hairstyles
iWIRES: An Analyze-and-Edit Approach to Shape Manipulation
Interactive Curve Constrained Functional Maps
Creating Principal 3D Curves with Digital Tape Drawing
Sketch-Based Editing Tools for Tumour Segmentation in 3D Medical Images
Illustrating Smooth Surfaces
Tetrahedral Meshing in the Wild
Teddy: A Sketching Interface for 3D Freeform Design
Lift-Off: Using Reference Imagery and Freehand Sketching to Create 3D Models in VR
libigl: A simple C++ geometry processing library
Annotating and Sketching on 3D Web Models
WYSIWYG NPR: Drawing Strokes Directly on 3D Models
3D Haptic Modeling System using Ungrounded Pen-shaped Kinesthetic Display
Sketch-Based 3D-Shape Creation for Industrial Styling Design
Drawing on Air: Input Techniques for Controlled 3D Line Illustration
CavePainting: A Fully Immersive 3D Artistic Medium and Interactive Experience
Fitting Smooth Surfaces to Dense Polygon Meshes
Skippy: Single View 3D Curve Interactive Modeling
Mobi3DSketch: 3D Sketching in Mobile AR
Appearance-Space Texture Synthesis
Least Squares Conformal Maps for Automatic Texture Atlas Generation
Multiplanes: Assisted Freehand VR Sketching
The Effect of Spatial Ability on Immersive 3D Drawing
Sketching Piecewise Clothoid Curves
Slices: A Shape-Proxy Based on Planar Sections
FlatFitFab: Interactive Modeling with Planar Sections
iCutter: A Direct Cut-out Tool for 3D Shapes
FiberMesh: Designing Freeform Surfaces with 3D Curves
Sketch-based Modeling: A survey
Direct Drawing on 3D Shapes with Automated Camera Control
Insitu: Sketching Architectural Designs in Context
Weighted Averages on Surfaces
Straightest Geodesics on Polyhedral Surfaces
Scalable Locally Injective Mappings
Boundary First Flattening
Surface Drawing: Creating Organic 3D Shapes with the Hand and Tangible Tools
OverCoat: An Implicit Canvas for 3D Painting
Open-Source (Boost-license) C# Library for Geometric Computing
Interactive Decal Compositing with Discrete Exponential Maps
Analytic Drawing of 3D Scaffolds
Meshmixer: An Interface for Rapid Mesh Composition
CrossShade: Shading Concept Sketches Using Cross-section Curves
TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator
Wires: A Geometric Deformation Technique
Gravity Sketch. https://www.gravitysketch.com/
Olga Sorkine and Daniel Cohen-Or
Flows on Surfaces of Arbitrary Topology
Sculpting Multi-dimensional Nested Structures
Fast Exact and Approximate Geodesics on Meshes
Sketch-based Generation and Editing of Quad Meshes
Elasticurves: Exploiting Stroke Dynamics and Inertia for the Real-Time Neatening of Sketched 2D Curves
Designing Quadrangulations with Discrete Harmonic Forms
Texture Synthesis on Surfaces
A Sketch-Based Interface for Clothing Virtual Characters
FreeDrawer: A Free-form Sketching System on the Responsive Workbench
Investigating the Learnability of Immersive Free-Hand Sketching
HairBrush for Immersive Data-Driven Hair Modeling
True2Form: 3D Curve Networks from 2D Sketches via Selective Regularization