Submitted 2 March 2017; Accepted 8 June 2017; Published 3 July 2017
Corresponding author: Ken Arroyo Ohori, g.a.k.arroyoohori@tudelft.nl
Academic editor: Sándor Szénási
DOI 10.7717/peerj-cs.123
Copyright 2017 Arroyo Ohori et al. Distributed under Creative Commons CC-BY 4.0. OPEN ACCESS

Visualising higher-dimensional space-time and space-scale objects as projections to R^3

Ken Arroyo Ohori, Hugo Ledoux and Jantien Stoter
3D Geoinformation, Delft University of Technology, Delft, Netherlands

ABSTRACT

Objects of more than three dimensions can be used to model geographic phenomena that occur in space, time and scale. For instance, a single 4D object can be used to represent the changes in a 3D object's shape across time or all its optimal representations at various levels of detail. In this paper, we look at how such higher-dimensional space-time and space-scale objects can be visualised as projections from R^4 to R^3. We present three projections that we believe are particularly intuitive for this purpose: (i) a simple 'long axis' projection that puts 3D objects side by side; (ii) the well-known orthographic and perspective projections; and (iii) a projection to a 3-sphere (S^3) followed by a stereographic projection to R^3, which results in an inwards-outwards fourth axis. Our focus is on using these projections from R^4 to R^3, but they are formulated from R^n to R^{n-1} so as to be easily extensible and to incorporate other non-spatial characteristics. We present a prototype interactive visualiser that applies these projections from 4D to 3D in real time using the programmable pipeline and compute shaders of the Metal graphics API.

Subjects: Graphics, Scientific Computing and Simulation, Spatial and Geographic Information Systems
Keywords: Projections, Space-time, Space-scale, 4D visualisation, nD GIS

BACKGROUND

Projecting the 3D nature of the world down to two dimensions is one of the most common problems at the juncture of geographic information and computer graphics, whether as the map projections in both paper and digital maps (Snyder, 1987; Grafarend & You, 2014) or as part of an interactive visualisation of a 3D city model on a computer screen (Foley & Nielson, 1992; Shreiner et al., 2013). However, geographic information is not inherently limited to objects of three dimensions. Non-spatial characteristics such as time (Hägerstrand, 1970; Güting et al., 2000; Hornsby & Egenhofer, 2002; Kraak, 2003) and scale (Meijers, 2011a) are often conceived and modelled as additional dimensions, and objects of three or more dimensions can be used to model objects in 2D or 3D space that also have changing geometries along these non-spatial characteristics (Van Oosterom & Stoter, 2010; Arroyo Ohori, 2016). For example, a single 4D object can be used to represent the changes in a 3D object's shape across time (Arroyo Ohori, Ledoux & Stoter, 2017) or all the best representations of a 3D object at various levels of detail (Luebke et al., 2003; Van Oosterom & Meijers, 2014; Arroyo Ohori et al., 2015a; Arroyo Ohori, Ledoux & Stoter, 2015c).
Objects of more than three dimensions can however be unintuitive (Noll, 1967; Frank, 2014), and visualising them is a challenge. While some operations on a higher-dimensional object can be achieved by running automated methods (e.g. certain validation tests or area/volume computations) or by visualising only a chosen 2D or 3D subset (e.g. some of its bounding faces or a cross-section), sometimes there is no substitute for being able to view a complete nD object—much like viewing floor or façade plans is often no substitute for interactively viewing the complete 3D model of a building. By viewing a complete model, one can see at once the 3D objects embedded in the model at every point in time or scale, as well as the equivalences and topological relationships between their constituting elements. More directly, it also makes it possible to get an intuitive understanding of the complexity of a given 4D model. For instance, in Fig. 1 we show an example of a 4D model representing a house at two different levels of detail and all the equivalences between its composing elements. It forms a valid manifold 4-cell (Arroyo Ohori, Damiand & Ledoux, 2014), allowing it to be represented using data structures such as a 4D generalised or combinatorial map.

Figure 1: A 4D model of a house at two levels of detail and all the equivalences between its composing elements. It is a polychoron bounded by: (A) volumes representing the house at the two levels of detail, (B) a pyramidal volume representing the window at the higher LOD collapsing to a vertex at the lower LOD, (C) a pyramidal volume representing the door at the higher LOD collapsing to a vertex at the lower LOD, and a roof volume bounded by (A) the roof faces of the two LODs, (B) the ridges at the lower LOD collapsing to the tip at the higher LOD and (C) the hips at the higher LOD collapsing to the vertex below them at the lower LOD. (D) A 3D cross-section of the model obtained at the middle point along the LOD axis.

This paper thus looks at a key aspect that allows higher-dimensional objects to be visualised interactively, namely how to project higher-dimensional objects down to fewer dimensions.

While there is previous research on the visualisation of higher-dimensional objects, we aim to visualise them in a manner that is reasonably intuitive, implementable and fast. We therefore discuss some relevant practical concerns, such as how to also display edges and vertices and how to use compute shaders to achieve good framerates in practice. In order to do this, we first briefly review the most well-known transformations (translation, rotation and scale) and the cross-product in nD, which we use as fundamental operations in order to project objects and to move the viewer around in an nD scene.
Afterwards, we show how to apply three different projections from R^n to R^{n-1} and argue why we believe they are intuitive enough for real-world use. These can be used to project objects from R^4 to R^3, and if necessary, they can be used iteratively in order to bring objects of any dimension down to 3D or 2D. We thus present: (i) a simple 'long axis' projection that stretches objects along one custom axis while preserving all other coordinates, resulting in 3D objects that are presented side by side; (ii) the orthographic and perspective projections, which are analogous to those used from 3D to 2D; and (iii) an inwards/outwards projection to an (n-1)-sphere followed by a stereographic projection to R^{n-1}, which results in a new inwards-outwards axis. We present a prototype that applies these projections from 4D to 3D and then applies a standard perspective projection down to 2D. We also show that with the help of low-level graphics APIs, all the required operations can be applied at interactive framerates for the 4D to 3D case. We finish with a discussion of the advantages and disadvantages of this approach.

Higher-dimensional modelling of space, time and scale

There are a great number of models of geographic information, but most consider space, time and scale separately. For instance, space can be modelled using primitive instancing (Foley et al., 1995; Kada, 2007), constructive solid geometry (Requicha & Voelcker, 1977) or various boundary representation approaches (Muller & Preparata, 1978; Guibas & Stolfi, 1985; Lienhardt, 1994), among others. Time can be modelled on the basis of snapshots (Armstrong, 1988; Hamre, Mughal & Jacob, 1997), space-time composites (Peucker & Chrisman, 1975; Chrisman, 1983), events (Worboys, 1992; Peuquet, 1994; Peuquet & Duan, 1995), or a combination of all of these (Abiteboul & Hull, 1987; Worboys, Hearnshaw & Maguire, 1990; Worboys, 1994; Wachowicz & Healy, 1994). Scale is usually modelled based on independent datasets at each scale (Buttenfield & DeLotto, 1989; Friis-Christensen & Jensen, 2003; Meijers, 2011b), although approaches to combine them into single datasets (Gröger et al., 2012) or to create progressive and continuous representations also exist (Ballard, 1981; Jones & Abraham, 1986; Günther, 1988; Van Oosterom, 1990; Filho et al., 1995; Rigaux & Scholl, 1995; Plümer & Gröger, 1997; Van Oosterom, 2005).

As an alternative to all these methods, it is possible to represent any number of parametrisable characteristics (e.g. two or three spatial dimensions, time and scale) as additional dimensions in a geometric sense, modelling them as orthogonal axes such that real-world 0D–3D entities are modelled as higher-dimensional objects embedded in higher-dimensional space. These objects can be consequently stored using higher-dimensional data structures and representation schemes (Čomić & de Floriani, 2012; Arroyo Ohori, Ledoux & Stoter, 2015b). Possible approaches include incidence graphs (Rossignac & O'Connor, 1989; Masuda, 1993; Sohanpanah, 1989; Hansen & Christensen, 1993), Nef polyhedra (Bieri & Nef, 1988), and ordered topological models (Brisson, 1993; Lienhardt, 1994).
This is consistent with the basic tenets of n-dimensional geometry (Descartes, 1637; Riemann, 1868) and topology (Poincaré, 1895), which means that it is possible to apply a wide variety of computational geometry and topology methods to these objects. In a practical sense, 4D topological relationships between 4D objects provide insights that 3D topological relationships cannot (Arroyo Ohori, Boguslawski & Ledoux, 2013). Also, McKenzie, Williamson & Hazelton (2001) contend that weather and groundwater phenomena cannot be adequately studied in less than four dimensions, and Van Oosterom & Stoter (2010) argue that the integration of space, time and scale into a 5D model for GIS can be used to ease data maintenance and improve consistency, as algorithms could detect if the 5D representation of an object is self-consistent and does not conflict with other objects.

Basic transformations and the cross-product in nD

The basic transformations (translation, scale and rotation) have a straightforward definition in n dimensions, which can be used to move and zoom around a scene composed of nD objects. In addition, the n-dimensional cross-product can be used to obtain a new vector that is orthogonal to a set of other n-1 vectors in R^n. We use these operations as a basis for nD visualisation, and they are thus described briefly below.

The translation of a set of points in R^n can be easily expressed as a sum with a vector t = [t_0, ..., t_{n-1}], or alternatively as a multiplication with a matrix using homogeneous coordinates¹ in an (n+1)×(n+1) matrix, which is defined as:

T = \begin{bmatrix}
1 & 0 & \cdots & 0 & t_0 \\
0 & 1 & \cdots & 0 & t_1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & t_{n-1} \\
0 & 0 & \cdots & 0 & 1
\end{bmatrix}

¹ A coordinate system based on projective geometry and typically used in computer graphics. An additional coordinate indicates a scale factor that is applied to all other coordinates.

Scaling is similarly simple. Given a vector s = [s_0, s_1, ..., s_{n-1}] that defines a scale factor per axis (which in the simplest case can be the same for all axes), it is possible to define a matrix to scale an object as:

S = \begin{bmatrix}
s_0 & 0 & \cdots & 0 \\
0 & s_1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & s_{n-1}
\end{bmatrix}
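To make these matrices concrete, here is a minimal Python/NumPy sketch that builds them for arbitrary n; NumPy and the function names are our own choices for illustration and are not part of the paper's prototype, which uses the Metal graphics API.

import numpy as np

def translation_matrix(t):
    # (n+1) x (n+1) homogeneous translation matrix for a translation vector t of length n.
    n = len(t)
    T = np.eye(n + 1)
    T[:n, n] = t  # the last column holds the translation components
    return T

def scale_matrix(s):
    # n x n diagonal matrix with one scale factor per axis.
    return np.diag(s)

# Example: translate 4D points by (1, 2, 3, 4) and scale them by 0.5 along every axis.
T = translation_matrix([1.0, 2.0, 3.0, 4.0])
S = scale_matrix([0.5, 0.5, 0.5, 0.5])
p = np.array([1.0, 1.0, 1.0, 1.0, 1.0])  # a 4D point in homogeneous coordinates
print(T @ p)  # [2. 3. 4. 5. 1.]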
Rotation is somewhat more complex. Rotations in 3D are often conceptualised intuitively as rotations around the x, y and z axes. However, this view of the matter is only valid in 3D. In higher dimensions, it is necessary to consider instead rotations parallel to a given plane (Hollasch, 1991), such that a point that is continuously rotated (without changing the rotation direction) will form a circle that is parallel to that plane. This view is valid in 2D (where there is only one such plane), in 3D (where a plane is orthogonal to the usually defined axis of rotation) and in any higher dimension. Incidentally, this shows that the degree of rotational freedom in nD is given by the number of possible combinations of two axes (which define a plane) in that dimension (Hanson, 1994), i.e. \binom{n}{2}. Thus, in a 4D coordinate system defined by the axes x, y, z and w, it is possible to define six 4D rotation matrices, which correspond to the six rotational degrees of freedom in 4D (Hanson, 1994). These respectively rotate points in R^4 parallel to the xy, xz, xw, yz, yw and zw planes:

R_{xy} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad
R_{xz} = \begin{bmatrix} \cos\theta & 0 & -\sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ \sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad
R_{xw} = \begin{bmatrix} \cos\theta & 0 & 0 & -\sin\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \sin\theta & 0 & 0 & \cos\theta \end{bmatrix}

R_{yz} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad
R_{yw} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & 0 & -\sin\theta \\ 0 & 0 & 1 & 0 \\ 0 & \sin\theta & 0 & \cos\theta \end{bmatrix}
\qquad
R_{zw} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\theta & -\sin\theta \\ 0 & 0 & \sin\theta & \cos\theta \end{bmatrix}

The n-dimensional cross-product is easy to understand by first considering the lower-dimensional cases. In 2D, it is possible to obtain a normal vector to a 1D line as defined by two (different) points p^0 and p^1, or equivalently a normal vector to a vector from p^0 to p^1. In 3D, it is possible to obtain a normal vector to a 2D plane as defined by three (non-collinear) points p^0, p^1 and p^2, or equivalently a normal vector to a pair of vectors from p^0 to p^1 and from p^0 to p^2. Similarly, in nD it is possible to obtain a normal vector to an (n-1)D subspace—probably easier to picture as an (n-1)-simplex—as defined by n linearly independent points p^0, p^1, ..., p^{n-1}, or equivalently a normal vector to a set of n-1 vectors from p^0 to every other point (i.e., p^1, p^2, ..., p^{n-1}) (Massey, 1983; Elduque, 2004). Hanson (1994) follows the latter explanation, using a set of n-1 vectors all starting from the first point to give an intuitive definition of the n-dimensional cross-product. Assuming that a point p^i in R^n is defined by a tuple of coordinates denoted as (p^i_0, p^i_1, ..., p^i_{n-1}) and a unit vector along the i-th dimension is denoted as x̂_i, the n-dimensional cross-product N of a set of points p^0, p^1, ..., p^{n-1} can be expressed compactly as the cofactors of the last column in the following determinant:

\vec{N} = \begin{vmatrix}
(p^1_0 - p^0_0) & (p^2_0 - p^0_0) & \cdots & (p^{n-1}_0 - p^0_0) & \hat{x}_0 \\
(p^1_1 - p^0_1) & (p^2_1 - p^0_1) & \cdots & (p^{n-1}_1 - p^0_1) & \hat{x}_1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
(p^1_{n-1} - p^0_{n-1}) & (p^2_{n-1} - p^0_{n-1}) & \cdots & (p^{n-1}_{n-1} - p^0_{n-1}) & \hat{x}_{n-1}
\end{vmatrix}

The components of the normal vector N are thus given by the minors of the unit vectors x̂_0, x̂_1, ..., x̂_{n-1}. This vector N—like all other vectors—can be normalised into a unit vector by dividing it by its norm ‖N‖.
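As an illustration, the cofactor expansion above can be implemented directly. The following Python/NumPy sketch (our own naming, not code from the paper's prototype) takes the n-1 difference vectors and returns the unnormalised normal vector:

import numpy as np

def cross_product_nd(vectors):
    # Generalised cross-product: a vector orthogonal to the n-1 given vectors in R^n,
    # computed as the cofactors of the last column of the determinant in the text
    # (the columns of that determinant are the difference vectors p^i - p^0).
    V = np.asarray(vectors, dtype=float)  # shape (n-1, n)
    n = V.shape[1]
    normal = np.empty(n)
    for i in range(n):
        minor = np.delete(V.T, i, axis=0)  # delete the row of the unit vector x̂_i
        normal[i] = (-1) ** (i + n - 1) * np.linalg.det(minor)
    return normal

# 3D sanity check: matches the usual cross-product (1,0,0) x (0,1,0) = (0,0,1).
print(cross_product_nd([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))
# For points p^0, ..., p^(n-1) stored as rows of P, pass the differences: cross_product_nd(P[1:] - P[0]).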
Previous work on the visualisation of higher-dimensional objects

There is a reasonably extensive body of work on the visualisation of 4D and nD objects, although it is still more often used for its creative possibilities (e.g., making nice-looking graphics) than for practical applications. In the literature, visual metaphors of 4D space were already described in the 1880s in Flatland: A Romance of Many Dimensions (Abbott, 1884) and A New Era of Thought (Hinton, 1888). Other books that treat the topic intuitively include Beyond the Third Dimension: Geometry, Computer Graphics, and Higher Dimensions (Banchoff, 1996) and The Visual Guide To Extra Dimensions: Visualizing The Fourth Dimension, Higher-Dimensional Polytopes, And Curved Hypersurfaces (McMullen, 2008).

In a more concrete computer graphics context, already in the 1960s, Noll (1967) described a computer implementation of the 4D to 3D perspective projection and its application in art (Noll, 1968). Beshers & Feiner (1988) describe a system that displays animating (i.e. continuously transformed) 4D objects that are rendered in real time, using colour intensity to provide a visual cue for the 4D depth. It is extended to n dimensions by Feiner & Beshers (1990). Banks (1992) describes a system that manipulates surfaces in 4D space, covering interaction techniques and methods to deal with intersections, transparency and the silhouettes of every surface. Hanson & Cross (1993) describe a high-speed method to render surfaces in 4D space with shading using a 4D light and occlusion, while Hanson (1994) describes much of the mathematics that are necessary for nD visualisation. A more practical implementation is described in Hanson, Ishkov & Ma (1999). Chu et al. (2009) describe a system to visualise 2-manifolds and 3-manifolds embedded in 4D space and illuminated by 4D light sources. Notably, it uses a custom rendering pipeline that projects tetrahedra in 4D to volumetric images in 3D—analogous to how triangles in 3D are usually projected to 2D images.

A different possible approach lies in using meaningful 3D cross-sections of a 4D dataset. For instance, Kageyama (2016) describes how to visualise 4D objects as a set of hyperplane slices. Bhaniramka, Wenger & Crawfis (2000) describe how to compute isosurfaces in dimensions higher than three using an algorithm similar to marching cubes. D'Zmura, Colantoni & Seyranian (2000) describe a system that displays 3D cross-sections of a 4D virtual world one at a time.

Similar to the methods described above, Hollasch (1991) gives a simple formulation to describe the 4D to 3D projections, which is itself based on the 3D to 2D orthographic and perspective projection methods described by Foley & Nielson (1992). This is the method that we extend to define n-dimensional versions of these projections and is thus explained in greater detail below. The mathematical notation is however changed slightly so as to have a cleaner extension to higher dimensions.

In order to apply the required transformations, Hollasch (1991) first defines a point from ∈ R^4 where the viewer (or camera) is located, a point to ∈ R^4 that the viewer directly points towards, and a set of two vectors up and over. Based on these variables, he defines a set of four unit vectors â, b̂, ĉ and d̂ that define the axes of a 4D coordinate system centred at the from point. These are ensured to be orthogonal by using the 4D cross-product to compute them, such that:

\hat{d} = \frac{to - from}{\| to - from \|}
\qquad
\hat{a} = \frac{\vec{up} \times \vec{over} \times \hat{d}}{\| \vec{up} \times \vec{over} \times \hat{d} \|}
\qquad
\hat{b} = \frac{\vec{over} \times \hat{d} \times \hat{a}}{\| \vec{over} \times \hat{d} \times \hat{a} \|}
\qquad
\hat{c} = \hat{d} \times \hat{a} \times \hat{b}

Note two aspects in the equations above: (i) that the input vectors up and over are left unchanged (i.e., b̂ = up and ĉ = over) if they are already orthogonal to each other and orthogonal to the vector from from to to (i.e., to - from), and (ii) that the last vector ĉ does not need to be normalised since the cross-product already returns a unit vector.

These new unit vectors can then be used to define a transformation matrix to transform the 4D coordinates into a new set of points E (as in eye coordinates) with a coordinate system with the viewer at its centre and oriented according to the unit vectors. The points are given by:

E = \begin{bmatrix} P - from \end{bmatrix} \begin{bmatrix} \hat{a} & \hat{b} & \hat{c} & \hat{d} \end{bmatrix}

For an orthographic projection given E = [e_0 \; e_1 \; e_2 \; e_3], the first three columns e_0, e_1 and e_2 can be used as-is, while the fourth column e_3 defines the orthogonal distance to the viewer (i.e., the depth).
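This construction can be sketched as follows in Python/NumPy, reusing the generalised cross-product from the previous section. All names are our own, the example data is hypothetical, and the sign conventions of the cross-product may flip the orientation of some axes with respect to Hollasch's figures; this is a sketch of the idea rather than the paper's implementation.

import numpy as np

def cross_product_nd(vectors):
    # Vector orthogonal to the n-1 given vectors in R^n (see the previous section).
    V = np.asarray(vectors, dtype=float)
    n = V.shape[1]
    return np.array([(-1) ** (i + n - 1) * np.linalg.det(np.delete(V.T, i, axis=0))
                     for i in range(n)])

def normalise(v):
    return v / np.linalg.norm(v)

def view_basis_4d(from_pt, to_pt, up, over):
    # Orthonormal axes (a, b, c, d) of a 4D viewing coordinate system centred at from_pt.
    d = normalise(to_pt - from_pt)
    a = normalise(cross_product_nd([up, over, d]))
    b = normalise(cross_product_nd([over, d, a]))
    c = cross_product_nd([d, a, b])  # already a unit vector
    return np.column_stack([a, b, c, d])

def eye_coordinates(points, from_pt, basis):
    # E = (P - from)[a b c d]; columns 0-2 are an orthographic projection, column 3 is the depth.
    return (np.asarray(points, dtype=float) - from_pt) @ basis

# Hypothetical example: the 16 vertices of a unit tesseract seen from the point (3, 3, 3, 3).
vertices = np.array(np.meshgrid(*[[0.0, 1.0]] * 4)).reshape(4, -1).T
frm, to = np.full(4, 3.0), np.full(4, 0.5)
up, over = np.array([0.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 0.0])
E = eye_coordinates(vertices, frm, view_basis_4d(frm, to, up, over))
orthographic_3d = E[:, :3]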
Finally, in order to obtain a perspective projection, he scales the points inwards in direct proportion to their depth. Starting from E, he computes E' = [e'_0 \; e'_1 \; e'_2 \; e'_3] as:

e'_0 = \frac{e_0}{e_3 \tan(\vartheta/2)}
\qquad
e'_1 = \frac{e_1}{e_3 \tan(\vartheta/2)}
\qquad
e'_2 = \frac{e_2}{e_3 \tan(\vartheta/2)}
\qquad
e'_3 = e_3

where ϑ is the viewing angle between the x axis and the line between the from point and every point, as shown in Fig. 2. A similar computation is done for y and z. In E', the first three columns (i.e., e'_0, e'_1 and e'_2) similarly give the 3D coordinates for a perspective projection of the 4D points, while the fourth column is also the depth of the point.

Figure 2: The geometry of a 4D perspective projection along the x axis for a point p. By analysing the depth along the depth axis given by e_3, it is possible to see that the coordinates of the point along the x axis, given by e_0, are scaled inwards in order to obtain e'_0 based on the viewing angle ϑ. Note that x̂_{n-1} is an arbitrary viewing hyperplane and another value can be used just as well.
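The perspective step can then be sketched as a small function operating on the eye coordinates E (again NumPy; fov is our name for the viewing angle ϑ, and the example values are hypothetical):

import numpy as np

def perspective_from_eye_coordinates(E, fov):
    # Scale the first columns inwards in proportion to the depth (the last column),
    # following e'_i = e_i / (e_depth * tan(fov / 2)); the depth itself is kept as-is.
    E = np.asarray(E, dtype=float)
    out = E.copy()
    out[:, :-1] /= (E[:, -1] * np.tan(fov / 2.0))[:, None]
    return out

# Eye coordinates of two points (last column = depth) and a 90 degree viewing angle:
E = np.array([[1.0, 2.0, 3.0, 2.0],
              [1.0, 2.0, 3.0, 4.0]])
print(perspective_from_eye_coordinates(E, np.pi / 2))
# The deeper point ends up scaled further inwards, while its depth is unchanged.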
METHODOLOGY

We present here three different projections from R^n to R^{n-1}, which can be applied iteratively to bring objects of any dimension down to 3D for display. We focus on three projections that are reasonably intuitive in 4D to 3D: a 'long axis' projection that puts 3D objects side by side, the orthographic and perspective projections that work in the same way as their 3D to 2D analogues, and a projection to an (n-1)-sphere followed by a stereographic projection to R^{n-1}.

'Long axis' projection

First we aim to replicate the idea behind the example previously shown in Fig. 1—a series of 3D objects that are shown next to each other, seemingly projected separately, with the correspondences across scale or time shown as long edges (as in Fig. 1) or faces connecting the 3D objects. Edges would join correspondences between vertices across the models, while faces would join correspondences between elements of dimension up to one (e.g. a pair of edges, or an edge and a vertex). Since every 3D object is apparently projected separately using a perspective projection to 2D, it is thus shown in the same intuitive way in which a single 3D object is projected down to 2D. The result of this projection is shown in Fig. 3 for the model previously shown in Fig. 1, and in Fig. 4 for a 4D model using 3D space with time.

Figure 3: A model of a 4D house similar to the example shown previously in Fig. 1, here also including a window and a door that are collapsed to a vertex in the 3D object at the lower level of detail. (A) shows the two 3D objects positioned as in Fig. 1, (B) rotates these models 90° so that the front of the house is on the right, and (C) orients the two 3D objects front to back. Many more interesting views are possible, but these show the correspondences particularly clearly. Unlike the other model, this one was generated with 4D coordinates and projected using our prototype that applies the projection described in this section.

Figure 4: We take (A) a simple 3D model of two buildings connected by an elevated corridor, and model it in 4D such that the two buildings exist during a time interval [-1, 1] and the corridor only exists during [-0.67, 0.67], resulting in (B) a 4D model shown here in a 'long axis' projection. The two buildings are shown in blue and green for clarity. Note how this model shows more saturated colours due to the higher number of faces that overlap in it.

Although to the best of our knowledge this projection does not have a well-known name, it is widely used in explanations of 4D and nD geometry—especially when drawn by hand or when the intent is to focus on the connectivity between different elements. For instance, it is usually used in the typical explanation for how to construct a tesseract, i.e., a 4-cube or the 4D analogue of a 2D square or 3D cube, which is based on drawing two cubes and connecting the corresponding vertices between the two (Fig. 5). Among other examples in the scientific literature, this kind of projection can be seen in Fig. 2 in Yau & Srihari (1983), Fig. 3.4 in Hollasch (1991), Fig. 3 in Banchoff & Cervone (1992), Figs. 1–4 in Arenas & Pérez-Aguila (2006), Fig. 6 in Grasset-Simon, Damiand & Lienhardt (2006), Fig. 1 in Paul (2012) and Fig. 16 in Van Oosterom & Meijers (2014).

Figure 5: The typical explanation for how to draw the vertices and edges in an i-cube. Starting from a single vertex representing a point (i.e. a 0-cube), an (i+1)-cube can be created by drawing two i-cubes and connecting the corresponding vertices of the two. Image by Wikimedia user NerdBoy1392 (retrieved from https://commons.wikimedia.org/wiki/File:Dimension_levels.svg under a CC BY-SA 3.0 license).

Conceptually, describing this projection from n to n-1 dimensions, which we hereafter refer to as a 'long axis' projection, is very simple. Considering a set of points P in R^n, the projected set of points P' in R^{n-1} is given by taking the coordinates of P for the first n-1 axes and adding to them the last coordinate of P, which is spread over all coordinates according to weights specified in a customisable vector x̂_n. For instance, Fig. 3 uses x̂_n = [2 0 0], resulting in 3D models that are 2 units displaced for every unit in which they are apart along the n-th axis. In matrix form, this kind of projection can then be applied as:

P' = P \begin{bmatrix} I \\ \hat{x}_n \end{bmatrix}

where I is the (n-1)×(n-1) identity matrix and x̂_n is the row vector of weights described above.
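A minimal NumPy sketch of this projection (our own function name; the weight vector plays the role of x̂_n above):

import numpy as np

def long_axis_projection(points, weights):
    # Keep the first n-1 coordinates and spread the last coordinate over them
    # according to the given weights, i.e. P' = P [I; x̂_n].
    P = np.asarray(points, dtype=float)
    n = P.shape[1]
    M = np.vstack([np.eye(n - 1), np.asarray(weights, dtype=float)])  # n x (n-1)
    return P @ M

# Example from the text: x̂_n = [2, 0, 0] displaces the 3D models by 2 units along the
# first axis for every unit that they are apart along the fourth axis.
points_4d = np.array([[0.0, 0.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
print(long_axis_projection(points_4d, [2.0, 0.0, 0.0]))  # [[0. 0. 0.] [2. 0. 0.]]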
Orthographic and perspective projections

Another reasonably intuitive pair of projections are the orthographic and perspective projections from nD to (n-1)D. These treat all axes similarly and thus make it more difficult to see the different (n-1)-dimensional models along the n-th axis, but they result in models that are much less deformed. Also, as shown in the 4D example in Fig. 6, it is easy to rotate models in such a way that the corresponding features are easily seen.

² Visual cues can still be useful in higher dimensions. See http://eusebeia.dyndns.org/4d/vis/08-hsr.

Figure 6: (A–C) The 4D house model and (D–F) the two buildings model projected down to 3D using an orthographic projection. The different views are obtained by applying different rotations in 4D. The less and more detailed 3D models can be found by looking at where the door and window are collapsed.

Based on the description of the 4D-to-3D orthographic and perspective projections by Hollasch (1991), we here extend the method in order to describe the n-dimensional to (n-1)-dimensional case, changing some aspects to give a clearer geometric meaning for each vector.

Similarly, we start with a point from ∈ R^n where the viewer is located, a point to ∈ R^n that the viewer directly points towards (which can be easily set to the centre or centroid of the dataset), and a set of n-2 initial vectors v_1, ..., v_{n-2} in R^n that are not all necessarily orthogonal but nevertheless are linearly independent from each other and from the vector to - from. In this setup, the v_i vectors serve as a base to define the orientation of the system, much like the traditional up vector that is used in 3D to 2D projections and the over vector described previously.

From the above mentioned variables and using the nD cross-product, it is possible to define a new set of orthogonal unit vectors x̂_0, ..., x̂_{n-1} that define the axes x_0, ..., x_{n-1} of a coordinate system in R^n as:

\hat{x}_{n-1} = \frac{to - from}{\| to - from \|}

\hat{x}_0 = \frac{\vec{v}_1 \times \cdots \times \vec{v}_{n-2} \times \hat{x}_{n-1}}{\| \vec{v}_1 \times \cdots \times \vec{v}_{n-2} \times \hat{x}_{n-1} \|}

\hat{x}_i = \frac{\vec{v}_{i+1} \times \cdots \times \vec{v}_{n-2} \times \hat{x}_{n-1} \times \hat{x}_0 \times \cdots \times \hat{x}_{i-1}}{\| \vec{v}_{i+1} \times \cdots \times \vec{v}_{n-2} \times \hat{x}_{n-1} \times \hat{x}_0 \times \cdots \times \hat{x}_{i-1} \|}

\hat{x}_{n-2} = \hat{x}_{n-1} \times \hat{x}_0 \times \cdots \times \hat{x}_{n-3}

The vector x̂_{n-1} is the first that needs to be computed and is oriented along the line from the viewer (from) to the point that it is oriented towards (to). Afterwards, the vectors are computed in order from x̂_0 to x̂_{n-2} as normalised n-dimensional cross-products of n-1 vectors. These contain a mixture of the input vectors v_1, ..., v_{n-2} and the computed unit vectors x̂_0, ..., x̂_{n-1}, starting from n-2 input vectors and one unit vector for x̂_0, and removing one input vector and adding the previously computed unit vector for the next x̂_i vector. Note that if v_1, ..., v_{n-2} and x̂_{n-1} are all orthogonal to each other, ∀ 0 < i