title: Haptic object recognition based on shape relates to visual object recognition ability
authors: Chow, Jason K.; Palmeri, Thomas J.; Gauthier, Isabel
date: 2021-08-05
journal: Psychol Res
DOI: 10.1007/s00426-021-01560-z

Visual object recognition depends in large part on a domain-general ability (Richler et al., Psychol Rev 126(2): 226–251, 2019). Given evidence pointing towards shared mechanisms for object perception across vision and touch, we ask whether individual differences in haptic and visual object recognition are related. We use existing validated visual tests to estimate visual object recognition ability and relate it to performance on two novel tests of haptic object recognition ability (n = 66). One test includes complex objects that participants chose to explore with a hand grasp. The other test uses a simpler stimulus set that participants chose to explore with just their fingertips. Only performance on the haptic test with complex stimuli correlated with visual object recognition ability, suggesting a shared source of variance across task structures, stimuli, and modalities. A follow-up study using a visual version of the haptic test with simple stimuli showed a correlation with the original visual tests, suggesting that the limited complexity of the stimuli did not limit the correlation with visual object recognition ability. Instead, we propose that the manner of exploration may be a critical factor in whether a haptic test relates to visual object recognition ability. Our results suggest a perceptual ability that spans at least vision and touch; however, it may not be recruited during fingertip exploration alone.

People vary more than they realize in their ability to recognize objects visually (Gauthier, 2018). Visual tests of object recognition ability with good psychometric properties are a recent development (Dennett et al., 2012; McGugin et al., 2012; Richler et al., 2017, 2019). The availability of these tests made it possible to use a latent variable approach, which uses common variance across tasks to measure psychological constructs. A large portion of the variance shared across several tasks and different object categories was captured by a single higher-order latent variable (Richler et al., 2019; Sunday et al., 2021). This provided evidence for a domain-general object recognition ability, o, which is not strongly correlated with general intelligence. Whether this ability is strictly visual or extends to other modalities is unknown. Here, we offer the first tests designed to measure individual differences in haptic object recognition and relate these to individual differences in visual object recognition. Our first goal is to ask whether there are reliable individual differences in object recognition for a modality other than vision. These are the first examples of what should eventually be a large collection of different haptic tests with different object categories and tasks, to facilitate the exploration of the hierarchical structure of individual differences using methods from areas such as personality or intelligence research. Until a sufficient number of reliable tests of haptic object recognition are available to assess the existence of a latent haptic object recognition factor, we can pursue simpler questions.
Therefore, our second goal is to ask whether visual object recognition ability (o_v) is correlated with individual differences on single reliable haptic object recognition tests.

Haptic perception relies on a combination of cutaneous sensory inputs from receptors under the skin and kinesthetic sensory inputs from mechanoreceptors in joints, tendons, and muscles to extract geometric and material properties such as surface texture and object shape (Lederman & Klatzky, 2009). While humans mainly recognize objects visually, we can at times rely almost exclusively on touch: perhaps to find something in a bag while our eyes are otherwise occupied. Even though less is known about object recognition in the haptic modality than in vision, evidence suggests overlapping mechanisms between the two modalities. Fast and accurate object recognition is possible with haptic information (Klatzky & Lederman, 1995; Klatzky et al., 1985). Similarity ratings and derived perceptual spaces of objects across modalities are highly correlated (Cooke et al., 2007; Gaissert et al., 2010). Whether categorized visually or haptically, object pairs within a category are closer in perceptual space than object pairs from different categories (Gaissert & Wallraven, 2012). Upwards of 150 high-fidelity object representations can be stored in long-term memory in both modalities (Brady et al., 2008; Hutmacher & Kuhbandner, 2018). Visual and haptic object recognition are similarly viewpoint-dependent (Edelman & Bülthoff, 1992; Newell et al., 2001). In addition to similarities in behavior, the processing of haptic and visual information recruits common extrastriate regions, suggesting multisensory representations (Amedi et al., 2002; Sathian et al., 2011; Snow et al., 2013).

Despite such similarities, other evidence reveals differences between visual and haptic perception. Extraction of features in haptic perception uses hand movements that are typically serial (Lederman & Klatzky, 1987), while at least the initial encoding of visual features is parallel (e.g., Buetti et al., 2016). Haptic object perception weights shape and texture features equally, while vision favors shape over texture (Cooke et al., 2007). Objects are most efficiently recognized by hand exploration with their primary axis parallel or orthogonal to the body (Woods et al., 2008), while visual object recognition is often best with the object's primary axis rotated 45° relative to the viewer (Palmer et al., 1981). Although unimodal object identification and recognition performance are similar in both modalities using the same objects, cross-modal performance is asymmetrical, with better performance when visual stimuli are encoded first (Desmarais et al., 2017; Lacey & Campbell, 2006). Neural activation patterns for touch and visual imagery are similar for familiar objects but less so for unfamiliar shapes (Lacey et al., 2010, 2014). Thus, while visual and haptic object recognition may to some extent rely on common mechanisms, there are important differences between the modalities in the information that is encoded or the way it is acquired.

Individual differences are a growing source of information about visual object recognition, offering new insights into the functional organization of high-level vision and its relation to other cognitive skills (Gauthier, 2018; Richler et al., 2019; Wilmer, 2008).
But there has been no systematic study of haptic object recognition abilities, let alone of their correlation with visual object recognition abilities. To address this challenge, we need tests of haptic object recognition ability with sufficient reliability, a measurement property necessary for the study of individual differences. While most readers will be familiar with test-retest reliability, in which participants' scores correlate across different sessions, a simpler form of reliability is internal consistency, based on the correlations between different items (or trials) within the same test. Hedge et al. (2018) measured this reliability for several classic cognitive psychology tasks. Even when the average expected effects were robust, these tasks often lacked sufficient reliability to measure consistent individual differences. Many traditional experimental tasks are designed to limit subject-level variability to more effectively measure group-level effects. In doing so, however, they sacrifice the ability to consistently rank-order participants by performance. In other words, traditional experimental tasks often lack the reliability to measure individual differences.

Here, we designed the first test of haptic recognition abilities with novel 3D objects and assessed whether performance on that test correlates with o_v, estimated by the shared variance between two different visual tests with two different object categories. We also controlled for general intelligence estimated with Raven's Progressive Matrices. Additionally, we created another haptic test of object recognition that encourages fingertip exploration of small buttons, to investigate generalization to other kinds of haptic tasks and stimuli. We established the reliability of our haptic measurements. To preview the results: the recognition of complex novel objects using hand grasping does not correlate with recognition using fingertip exploration, but it correlates with o_v even after controlling for general intelligence, suggesting that o could generalize to at least some haptic tasks. A follow-up study shows that the visual version of the buttons test correlates with o_v, suggesting that fingertip exploration may rely on a haptic ability that does not tap into the object representations that support o_v.

Participants completed five tests in a fixed order to avoid order effects in our measured individual differences (Goodhew & Edwards, 2019): haptic Novel Object Memory Test-buttons, haptic Matching Test-Spaceships, visual Novel Object Memory Test-Ziggerins, visual Matching Test-Sheinbugs, and Raven's Progressive Matrices. All tests were presented using MATLAB with Psychtoolbox 3 (Kleiner et al., 2007).

Seventy-three young adults from Vanderbilt University participated for course credit. To ensure we would gather meaningful evidence for or against significant differences, we employed a Bayesian stopping rule for data collection, initially collecting data from 50 participants and adding participants until critical Bayes factors reached the threshold for substantial evidence, BF+0 > 3 or BF+0 < 1/3 (Jeffreys, 1961).
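To make this stopping rule concrete, below is a minimal sketch in Python of the default JZS Bayes factor for a correlation (following the integral form in Wetzels & Wagenmakers, 2012) and the sequential check described above. This is an illustration, not the authors' analysis code (which is available on OSF); the function names are ours, and the paper reports the one-sided BF+0, which for this symmetric prior is at most twice BF10 when the observed correlation is positive.

```python
# Minimal sketch: two-sided JZS Bayes factor BF10 for a Pearson correlation
# (Wetzels & Wagenmakers, 2012) and the sequential stopping check.
import numpy as np
from scipy import integrate, stats

def jzs_corr_bf10(r, n):
    """BF10 for an observed Pearson r over n pairs, under the JZS prior."""
    integrand = lambda g: ((1 + g) ** ((n - 2) / 2)
                           * (1 + (1 - r ** 2) * g) ** (-(n - 1) / 2)
                           * g ** (-1.5) * np.exp(-n / (2 * g)))
    area, _ = integrate.quad(integrand, 0, np.inf)
    return np.sqrt(n / 2) / np.sqrt(np.pi) * area  # Gamma(1/2) = sqrt(pi)

def keep_collecting(x, y, low=1/3, high=3):
    """True while the evidence has not yet crossed either threshold."""
    r, _ = stats.pearsonr(x, y)
    bf10 = jzs_corr_bf10(r, len(x))
    return low < bf10 < high
```

Directional Bayes factors and the credible intervals reported below require the posterior over the correlation; software such as JASP or the R package BayesFactor implements the one-sided test used in the paper.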
To reduce the possibility that correlations were inflated by participants with low motivation or failure to comply with procedures, we excluded from the final analysis anyone who scored below chance on any object recognition test and whose reaction times in that test were also at least one standard deviation from the mean in either direction (most excluded participants were two standard deviations away). This procedure resulted in a final sample size of 66 (mean age = 18.97 years, SD = 1.04 years; 48 female). Fifty-two participants were right-handed as measured by the Edinburgh Handedness Inventory (Oldfield, 1971). The entire study was completed in approximately 90 min. Informed consent was obtained, and procedures were approved by the Vanderbilt University Institutional Review Board.

The two visual tests chosen to estimate o_v differ both in task demands and in the object category used, such that an aggregate (Rushton et al., 1983) should tap into a domain-general ability rather than idiosyncratic variability specific to the constraints of each test. Prior structural equation modeling (Richler et al., 2019; Sunday et al., 2021) with a large number of similar visual tests found strong evidence for a common factor, even though the correlations between different tests using different object categories are in the 0.2 to 0.4 range. These correlations are attenuated by measurement error but are also limited because they exclude construct-irrelevant variance related to task or category.

The visual Matching Test-Sheinbugs (vMatch-Sheinbugs) required participants to quickly determine whether two objects (from a set of 50 Sheinbugs; see Fig. 1a) presented serially were the same or different (Richler et al., 2019; Sunday et al., 2021). The test began with six practice trials followed by 360 test trials. On each trial, a fixation cross was presented for 500 ms, followed by the presentation of the first Sheinbug, followed by a visual mask of scrambled Sheinbug parts for 500 ms, followed by a second Sheinbug remaining on the screen for up to 3000 ms until a response of either same (using the G key) or different (using the H key). Timed-out trials were considered incorrect. Participants were instructed to respond as quickly and accurately as possible. The first image was presented for either 300 ms (in the first 180 trials) or 150 ms (in the latter half) to vary the difficulty. The first and second images of the Sheinbugs could differ in size, brightness, or viewpoint. Participants were offered breaks every 90 trials. The test was scored based on sensitivity (d′), with a chance level at 0.

The visual Novel Object Memory Test-Ziggerins (vNOMT-Ziggerins) required participants to learn exemplars and later recognize them amongst distractors (Richler et al., 2017; Sunday et al., 2021). The test began with the presentation of the six target exemplars at once. Participants could study the six novel objects (Ziggerins; see Fig. 1b) for as long as they needed before beginning test trials. Test trials presented one of the six target Ziggerins alongside two distractors. Participants were instructed to respond with the key (F, G, or H) corresponding to the relative position of the target Ziggerin on the screen. After 24 trials, participants were told that the remaining trials would present Ziggerins in new orientations and to ignore these orientation differences. Participants were then presented with the six target Ziggerins for review. Another 24 trials were then presented with rotated target Ziggerins. Percent correct over the 48 trials was used to index performance on the test, with a chance level at 33% accuracy.
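The two scoring rules are easy to state in code. The sketch below is ours, not the authors'; in particular, the log-linear correction for extreme hit or false-alarm rates is one common convention, since the paper does not specify how such rates were handled.

```python
# Sketch of the two scoring rules; the log-linear correction is an
# assumption on our part.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' for the same/different matching tests (chance = 0), treating
    'different' trials as signal trials."""
    h = (hits + 0.5) / (hits + misses + 1)  # corrected hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

def nomt_score(n_correct, n_trials):
    """Percent correct for the 3-AFC NOMT format (chance = 33%)."""
    return 100 * n_correct / n_trials
```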
Haptic tests were developed iteratively in pilot studies. As there were no previously developed haptic tests of object recognition ability, we developed new haptic tests based on similar visual tests (e.g., Richler et al., 2019). Tests were modified through several iterations to maximize reliability (based on internal consistency and item-total correlations) and to include a range of item difficulties. Haptic testing takes longer than visual testing, and while more trials lead to higher reliability, we compromised by selecting trials so that each test could be completed within approximately 25 min.

Haptic tests were conducted with the participant sitting beside an experimenter, each looking at separate screens, divided by a curtain that remained in place throughout the experiment. Participants practiced reaching for and exploring practice objects on the experimenter's side of the curtain to familiarize themselves with the setup. First, participants practiced reaching around the curtain to briefly grasp three practice objects at three designated positions (these positions remained the same throughout the experiment). Afterwards, participants practiced rapidly grasping an object, briefly exploring it, and quickly returning their dominant hand to the keyboard in front of them on cue. None of the haptic objects used in the experiment were ever visible to the participants. Objects were entirely managed by the experimenter as instructed by the experimenter's screen. Each object was fastened to the table using Velcro at one of the three designated positions, approximately 12 cm apart. Outside of this initial practice phase, the experimenter did not communicate with the participant except for brief clarifications as necessary. All instructions were presented on the participant's screen. The experiment setup was laid out for each participant based on handedness (so they sat to the right or left of the experimenter), such that objects could be easily reached with the participant's dominant hand. Participants were instructed to use only their dominant hand for the haptic tests, including for keyboard responses, avoiding any concerns about non-dominant hand acuity or variability in bimanual coordination (Treffner & Turvey, 1996; Vines et al., 2008).

The haptic Matching Test-Spaceships (hMatch-Spaceships) required participants to quickly determine whether two sequentially presented objects were the same or different. This test used a set of 27 3D-printed spaceships from a stimulus space defined by three morphable dimensions (Fig. 1c). Spaceships were designed to be palm-sized, approximately 8 cm in their longest dimension. Trials were originally designed to achieve a range of difficulties based on similarity along the three dimensions. The final test was created after two rounds of pilot data collection and adjustments. In each iteration, we collected data from an extended version of the test and replaced trials with low item-total correlations. For the final test presented here, we used the Spearman-Brown prophecy formula (Brown, 1910; Spearman, 1910) to predict the number of trials necessary to achieve a reliability of 0.8 based on the last iteration. We thus selected 62 trials based on the following constraints: trial difficulties (based on the last iteration) should be roughly evenly distributed across bins of 0.1, and within this constraint, we used the trials with the highest item-total correlations.
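The two item-analysis tools used in this refinement procedure can be sketched as follows. The layout of the accuracy matrix (participants × trials, coded 1/0) and the function names are our assumptions; the worked check at the end uses the numbers reported later in the Discussion for the visual matching test.

```python
# Sketch: corrected item-total correlations and the Spearman-Brown
# prophecy formula used to select the number of trials.
import numpy as np

def item_total_correlations(acc):
    """Correlate each trial with the total score excluding that trial."""
    acc = np.asarray(acc, dtype=float)
    totals = acc.sum(axis=1)
    return np.array([np.corrcoef(acc[:, j], totals - acc[:, j])[0, 1]
                     for j in range(acc.shape[1])])

def spearman_brown(rho, k):
    """Predicted reliability when test length is multiplied by k:
    rho_k = k * rho / (1 + (k - 1) * rho)."""
    return k * rho / (1 + (k - 1) * rho)

def length_factor_for(rho, target=0.8):
    """Lengthening factor k needed to reach a target reliability."""
    return target * (1 - rho) / (rho * (1 - target))

# Worked check against the Discussion: cutting the 360-trial visual
# matching test (reliability 0.78) to 62 trials predicts about 0.38.
print(spearman_brown(0.78, 62 / 360))  # ~0.38
```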
In the final test, not all spaceships are used equally often; our focus is on achieving good reliability alongside a range of difficulties. Spaceships were mounted on wooden bases. Most trials presented spaceships with a consistent "viewpoint" orthogonal to the body; on some trials, a spaceship was presented rotated 180 degrees to further increase difficulty. On each trial, participants held the spacebar for at least 500 ms to begin the trial. Once the participant released the spacebar, they were instructed to reach behind the curtain with the same hand that had held the spacebar to explore a single spaceship for 4000 ms, after which the participant was prompted by a tone and an on-screen instruction to return that hand to the spacebar as quickly as possible. If the participant failed to return their hand to the spacebar within 1000 ms of the prompt, they were instructed to perform the procedure faster. Afterwards, exploration was once again initiated by holding down the spacebar for at least 500 ms. When the participant released the spacebar, they were prompted to explore the second spaceship with that hand and determine, within 8000 ms, whether this spaceship and the previously presented spaceship were the same or different regardless of rotation. Timed-out trials were considered incorrect. Participants responded with their dominant hand using keys marked for same or different on the left and right sides of the keyboard, respectively. The next trial began as soon as the participant responded. Participants were instructed to respond as quickly and accurately as possible. While participants were not instructed on how to explore the spaceships, informal observation by the experimenters revealed that they almost invariably used a circular precision grasp, involving only the distal and intermediate phalanges (Cini et al., 2019). There was not much time for additional exploration, but participants sometimes pinched or rubbed their fingers on individual parts. One practice trial provided feedback to the participant and was repeated if they responded incorrectly, were too slow to respond, or returned to the spacebar too slowly during exposure. This was followed by the 62 test trials, roughly ordered by difficulty based on the last round of pilot data. Half of the trials had matching spaceships while the other half had different spaceships. Participants were offered a break after 31 trials. Sensitivity (d′) was used to index performance on this test, with a chance level at 0.

The haptic Novel Object Memory Test-buttons (hNOMT-buttons) required participants to remember six target buttons and perform a three-alternative forced-choice task selecting one of the targets. The structure of the test is based on the visual version of the NOMT by Richler and colleagues (2017). The fingertip-sized buttons were mounted onto index cards (Fig. 2a) and presented to participants at the designated position closest to them. In the practice phase, participants were first presented with a single practice object to familiarize themselves with the study procedure (Fig. 2b). At the beginning of a study trial, participants were instructed to hold down the spacebar for at least 500 ms to begin the trial. Once the participant released the spacebar, they were instructed to reach behind the curtain with that same hand and explore the object presented at the center of an index card in the designated location. Participants explored the object for 8000 ms.
Afterwards, the participant was prompted by an on-screen instruction and a tone to return their hand to the spacebar as quickly as possible. If the participant failed to return to the spacebar within 1000 ms, they were instructed to perform the procedure faster. Three test trials with arrays of three objects followed each study trial. Participants initiated each test trial by holding down the spacebar for at least 500 ms. Upon spacebar release, participants were instructed to reach behind the curtain and freely explore an array of three objects. The participant responded with the F, G, or H key on the keyboard, indicating the relative position of the target object, to terminate the trial. They were instructed to respond as quickly and accurately as possible. There was no time limit for test trials, and accuracy feedback was given only during the practice phase.

In the learning phase, participants studied six target buttons that remained relevant throughout the entire test. Each target button was introduced with a study trial followed by three test trials using that target button, with easy distractors showing obvious differences from all target buttons. No feedback was given during the learning-phase test trials. After the six target buttons had been studied and tested, participants were given a review period in which each target button was presented individually. During the review, there was no time limit on exploring a button, but each target button was reviewed only once, in the order of initial presentation. Again, participants were not instructed on how to explore the buttons, but informal observation by the experimenters revealed that participants almost invariably explored them by rubbing and moving their fingertips over the items, which were fixed to the cardboard. The buttons could not be picked up or fully enclosed in a grasp.

In the test phase, participants performed 41 test trials. Each trial presented an array with a target button and two distractors. Participants did not know which target button would be used on each trial. No feedback was given during the test phase, and test-trial timings mirrored those of test trials in the study phase. The trials were developed in pilot studies, where distractors were chosen for each target to create a range of difficulty across trials while maximizing reliability. The same procedures used in the development of hMatch-Spaceships across two iterations were used to create the final test: the 18 easiest trials (3 for each target button) were chosen for the learning phase, and 41 other trials were selected as test trials to maximize the distribution of difficulty and item-total correlations. For this purpose, the six target buttons were not used in equal proportion, because our focus is chiefly on reliability and a range of difficulty. In the final test, the trials were roughly ordered in increasing difficulty based on pilot data. Following the procedures of the visual version of this test, the test trials in both the learning phase and the test phase were used in scoring, totaling 59 trials, with a chance level of 33% accuracy. Scoring the test with only the test-phase trials does not drastically change the results.

[Fig. 2: The haptic Novel Object Memory Test with button stimuli. (a) Example stimuli on a single trial, mounted on an index card as presented to participants. (b) Schematic of the learning phase: each target is studied once, followed by three test trials. The test phase includes only test trials, with the targets interleaved.]
We used a computerized version of Raven's Advanced Progressive Matrices (RAPM) to estimate fluid intelligence (Raven, 2000). On each trial, a 3 × 3 matrix of images with the bottom-right image removed was presented. Participants were tasked with selecting, from a set of eight alternatives, the image missing from the matrix so as to complete the pattern. A total of 18 trials were ordered from easiest to hardest, and participants were given 10 min to complete as many trials as possible (with no time limit on individual trials). Total correct was used to index performance on the test.

We used a Bayesian framework to perform our correlation analyses, with a Jeffreys-Zellner-Siow (JZS) prior (Wetzels & Wagenmakers, 2012). Bayesian hypothesis testing encourages the specification of competing models so that we can test which one is better supported by the data. In this study, the two competing models were a shared mechanism between visual and haptic abilities (a positive correlation; H1) and independence (no correlation; H0). While a lack of correlation does not necessarily mean completely independent mechanisms, we specifically point to independence as a parsimonious model for a potentially complex relation across modalities of perception. We report BF+0, which provides the relative likelihood of a directional hypothesis H+ over H0. This directional Bayesian test does not suffer from the same limitations as a directional frequentist test. With the latter, researchers are insensitive to situations where the data support a correlation in the non-predicted direction (in that case, the decision is simply non-significant). The biggest difference between a directional and a non-directional Bayesian test (BF+0 vs. BF10) is that when the effect is in the opposite direction, the evidence becomes much less favorable to H1 and much more favorable to H0 (Wagenmakers et al., 2016). When the effect goes in the predicted direction, BF+0 may be larger than BF10 by no more than a factor of 2, and when the effect is 0, the two BFs are identical. The use of BF+0 is, therefore, more sensitive to the difference between the two competing models than a non-directional BF10. We used 95% highest posterior densities as credible intervals (95% CI) to index the uncertainty of our point estimates. These credible intervals have a straightforward interpretation: the true value of the point estimate has a 95% probability of being within the interval.

Descriptive statistics for performance and reliability (internal consistency) on each test are reported in Table 1. [Table 1 note: RAPM has a maximum score of 18.] Average sensitivity (d′ = 0.36) for the vMatch-Sheinbugs was lower than in studies where that same test was preceded by a vNOMT with the same object category (d′ of 0.82 in Richler et al., 2019; 0.70 in Sunday et al., 2021). This, however, does not seem to adversely affect individual differences, since the correlation between the vNOMT and the vMatching test (r = 0.29, 95% CI [0.05, 0.52], BF+0 = 3.30) is similar to that in prior work (r = 0.35 in Richler et al., 2019; r = 0.34 in Sunday et al., 2021). Scores on the two visual tests were z-scored and averaged to estimate o_v (Wang & Stanley, 1970).

Zero-order correlations are reported in Table 2. Between hMatch-Spaceships and o_v, we found substantial evidence for a positive correlation relative to no correlation. The best estimate of the true effect size, after disattenuation given the reliabilities of the tests (Spearman, 1907), is r = 0.52.
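Both corrections used in the Results reduce to one-line formulas. The sketch below uses function names of our own; the reliabilities entering the published disattenuated r = 0.52 are those in Table 1, not reproduced here.

```python
# Sketch: Spearman's (1907) correction for attenuation and the
# first-order partial correlation used below to control for RAPM scores.
import numpy as np

def disattenuate(r_xy, rel_x, rel_y):
    """True-score correlation estimate: r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / np.sqrt(rel_x * rel_y)

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y controlling for z, from zero-order rs."""
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))
```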
Interestingly, we also found substantial evidence against a positive correlation between o_v and hNOMT-buttons and between the two haptic tests. A symmetrical test comparing evidence for any correlation against the null also provided evidence in favor of no correlation between hNOMT-buttons and o_v (BF01 = 7.26) and between hNOMT-buttons and hMatch-Spaceships (BF01 = 7.61). Evidence against a positive correlation between the two haptic tests remained even after controlling for RAPM scores, r = 0.09, 95% CI [−0.15, 0.33], BF+0 = 0.31 (Fig. 3a). The partial correlation between hMatch-Spaceships and o_v, controlling for RAPM scores, was r = 0.35, 95% CI [0.12, 0.57], BF+0 = 24.68 (Fig. 3b), while that between hNOMT-buttons and o_v was r = 0.12, 95% CI [−0.11, 0.36], BF+0 = 0.41 (Fig. 3c).

We found that performance on the hNOMT-buttons is not related to the shared variance between our visual tests (o_v). Still controlling for RAPM performance, we explored whether this test might correlate with each of the individual visual tests or with the other haptic test (Table 3). We found no evidence supporting a positive correlation between hNOMT-buttons and any of the individual visual tests. Notably, the visual and haptic versions of the NOMT share task demands and structure, and yet they do not correlate any more than vMatch-Sheinbugs and hNOMT-buttons do. While we found that hMatch-Spaceships is related to o_v, hNOMT-buttons does not seem to tap into this domain-general ability, nor into task-specific variance from individual visual tests, nor even into the haptic ability measured by hMatch-Spaceships. Note that, if anything, we observed lower reliability for the hMatch-Spaceships than for the hNOMT-buttons, so we are not concerned that the pattern of correlations is due to a limitation in reliability.

There are three main differences between the two haptic tests that may account for the difference in how hMatch-Spaceships and hNOMT-buttons relate to our other tests. The first is the task format: a matching task vs. a learning task. We do not believe this accounts for the difference, given extensive evidence in large visual studies that different tasks can tap into a common ability (Richler et al., 2019; Sunday et al., 2021). Since we estimated o_v based on an aggregate of visual learning and matching tests, if the task format were critical, there should still be a small but consistent correlation driven by the learning task procedures. Indeed, the correlation between the two learning tests across modalities (hNOMT and vNOMT) is similar in magnitude to the correlation within the haptic modality (hNOMT and hMatch; Table 3). A second difference is the manner of exploration of these objects. Participants mostly chose to explore spaceships using a circular precision grasp involving the distal and intermediate phalanges, with fingers free to wrap around the objects. In contrast, the nearly flat buttons affixed to cardboard did not allow such a grasp, and participants chose to explore them with their fingertips. This is a plausible explanation for the difference in the abilities recruited by the two tests, based on prior work showing distinct exploratory patterns and recognition performance for restricted flattened stimuli compared to freely manipulated stimuli (Cashdan, 1968; Lederman & Klatzky, 1987).
However, there is a third difference that we wanted to eliminate as a potential explanation: the buttons were simpler than the spaceships, which are more comparable in complexity to the Sheinbug and Ziggerin objects that we used to estimate o_v. To test whether this factor limited the correlation between the buttons test and o_v, in Study 2 we created a visual version of the hNOMT-buttons test. If performance on a visual NOMT-buttons correlates with o_v, it would provide some evidence against the idea that hNOMT-buttons failed to correlate with o_v primarily because the stimuli were simpler.

Participants completed three online tests in a fixed order: visual NOMT-buttons, vNOMT-Ziggerins, and vMatch-Sheinbugs. vNOMT-Ziggerins was modified so that participants clicked on the objects to respond instead of responding with a keyboard. vMatch-Sheinbugs was modified so that participants responded same/different using on-screen buttons, and we randomly removed a fifth of the trials to save time and avoid fatigue. We did not collect data on general intelligence, because we had found only weak, inconclusive correlations with all other tests and previous results showed that the correlation between visual tests does not depend on it.

Eighty-seven adults recruited through Amazon Mechanical Turk participated in the experiment. Participants were all from the United States, with an approval rate above 95% on the platform. To avoid inflation of correlations due to low motivation or failure to comply with procedures, we excluded 7 participants for poor performance and outlier reaction times on any test, as in Study 1, resulting in a final sample size of 80 participants (45 female; mean age = 42.36 years, SD = 13.00 years). The entire experiment was completed in approximately 45 min. Informed consent was obtained, and procedures were approved by the Vanderbilt University Institutional Review Board.

Each haptic NOMT-buttons trial was photographed and converted to grayscale to be presented online. Some trials had buttons rotated in the picture plane to reduce the diagnosticity of specularity cues; otherwise, the trials were directly converted from the haptic version. Trial order was preserved from the haptic version of this test. The procedures were essentially the same as in the haptic version. First, participants were instructed to remember single buttons so their memory could be tested against distractors later. It was noted that while rotation and position were not diagnostic, size was diagnostic of object identity. The task began with a practice phase with a single exposure of a button for 8000 ms, followed by three test trials. On each test trial, participants were asked which button had been studied. Participants responded by clicking on the buttons directly. There was no time limit. After the practice phase, a target button was presented singly for 8000 ms, followed by three test trials in which the target was the recently studied button. This was repeated until all six target buttons had been presented in the learning phase. Afterwards, each button was presented singly for 4000 ms as a review, before a test phase of 41 test trials in which any of the six target buttons could appear. No feedback was given on any test trial. The 59 test trials in the learning phase and the test phase were used for scoring this test, with a chance level of 33% accuracy. Descriptive statistics and reliability for each test are reported in Table 4.
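Since Study 2 reused the Study 1 exclusion rule, it may help to state that rule in code. The sketch below assumes a per-participant summary table with hypothetical column names; a participant is dropped if, on any test, they score below chance and their mean reaction time is at least one standard deviation from the group mean in either direction.

```python
# Sketch of the exclusion rule shared by both studies; column names
# and the table layout are our assumptions, not the authors' code.
import pandas as pd

def participants_to_exclude(df, chance):
    """df: one row per participant per test, with columns
    ['participant', 'test', 'score', 'mean_rt'];
    chance: dict mapping test name to its chance level (e.g., 0 for d')."""
    out = df.copy()
    out['below_chance'] = out.apply(
        lambda row: row['score'] < chance[row['test']], axis=1)
    group = out.groupby('test')['mean_rt'].agg(['mean', 'std']).rename(
        columns={'mean': 'rt_mean', 'std': 'rt_sd'})
    out = out.join(group, on='test')
    out['rt_outlier'] = (out['mean_rt'] - out['rt_mean']).abs() >= out['rt_sd']
    return set(out.loc[out['below_chance'] & out['rt_outlier'], 'participant'])
```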
Performance on the vMatch-Sheinbugs was comparable to in-lab results even with the shortened format, suggesting that fatigue was not the cause of the lower performance in Study 1. vNOMT-Ziggerins and vMatch-Sheinbugs were correlated (r = 0.30, 95% CI [0.09, 0.51], BF+0 = 7.93); their scores were z-scored and averaged to estimate o_v. We found moderate evidence for a positive correlation between o_v and vNOMT-buttons (r = 0.33, 95% CI [0.12, 0.53], BF+0 = 7.93). Accounting for the reliability of our measures, the disattenuated correlation was r = 0.42. This suggests that a visual test using the same buttons (and trials) as the hNOMT-buttons task taps into o_v. The relative simplicity of the buttons does not appear to be the reason why the haptic test with spaceships, but not the one with buttons, was related to o_v in Study 1.

We set out to create tests of object recognition ability in haptic perception and to ask whether haptic object recognition ability relates to visual object recognition ability (o_v). We developed two new tests of haptic object recognition ability with acceptable reliability: hMatch-Spaceships requires the matching of 3D novel objects that can be grasped with the hand, and hNOMT-buttons is a memory test requiring the recognition of relatively flat objects studied with the fingertips. We found evidence supporting a positive correlation between hMatch-Spaceships and our estimate of visual object recognition ability, o_v, even after controlling for general intelligence. Across recent studies, o has been described as a domain-general ability relevant to the visual recognition of familiar and novel objects (Richler et al., 2017; Sunday et al., 2021). This domain-general ability correlates with performance in visual tasks as diverse as the recognition of musical notation (Chang & Gauthier, 2021) or the detection of tumors in chest X-rays (Sunday et al., 2018). Our results are the first to show that o_v correlates with individual differences in a non-visual task, thereby suggesting that it may reflect mechanisms common to visual and haptic recognition.

The correlation between hMatch-Spaceships and o_v may be explained by their reliance on shape features. The spaceships used in the haptic matching test and the Sheinbugs and Ziggerins in the visual tests can best be discriminated from other objects within the same category by their shape, given that surface properties like texture or color do not vary within a category. Our results point towards a perceptual ability that relies on shape perception mechanisms common to vision and touch. This is consistent with behavioral and neural evidence of overlapping visual representations for haptic and visual object recognition (Amedi et al., 2002; Gaissert et al., 2010; Sathian et al., 2011; Snow et al., 2013). Shape perception may be a significant component of o_v, as shape is a salient feature in defining object identity (Landau & Leyton, 1999; Rosch et al., 1976) and is more salient than texture features in visual similarity ratings (Cooke et al., 2007). Therefore, the shape-reliant hMatch-Spaceships may be tapping into shape perception mechanisms that are especially helpful in visual object recognition, driving its correlation with o_v. Because our results cannot establish the existence of a latent construct centered on haptic object recognition, it is difficult to specify the nature of the overlap.
It is possible that there are no separate visual and haptic abilities, with a common factor o contributing equally to visual and haptic skills, but it is also possible that the evidence warrants separate visual and haptic factors, which could be correlated to some extent. Interestingly, the effect size of the relationship between hMatch-Spaceships and o_v was similar to those typically observed between pairs of visual tests with different stimuli and task requirements. Whether this is the rule across a large set of haptic tests relying on shape perception remains to be demonstrated. In addition, object recognition itself does not rely exclusively on shape features (e.g., Cooke et al., 2007). The development of individual-differences tests that tap into the visual processing of non-shape features could support efforts to more fully relate haptic and visual object recognition.

In contrast to the evidence of shared variance for our tasks relying on shape, performance on the hNOMT-buttons did not correlate with o_v. We found Bayesian support against this correlation, as well as against a correlation between performance on this test and on hMatch-Spaceships. To rule out the possibility that some property of the buttons stimulus set was responsible, such as their relative simplicity compared to the objects in our other tests, we converted the haptic buttons test into a visual test. The vNOMT-buttons correlated with estimates of o_v, suggesting that our findings in the haptic modality are not an idiosyncrasy of the buttons stimulus set.

The hNOMT-buttons and hMatch-Spaceships plausibly tap into different mechanisms, though we acknowledge that a lack of correlation provides insufficient evidence for such a strong conclusion. The interpretation is complicated by the fact that the two tasks differ on a few dimensions. First, learning and memory are more relevant to the NOMT format than to the matching format. However, the shared variance between visual tests in both of these formats contributed to the estimate of o_v, which did correlate with the vNOMT-buttons. Therefore, the learning and memory requirements of the hNOMT-buttons appear unlikely to limit its correlation with o_v. Second, unlike the other stimulus sets, the buttons are not necessarily novel objects. However, it is unlikely that object familiarity is the critical factor, as object recognition ability with familiar and novel objects has been found to be very strongly correlated (Richler et al., 2017; Sunday et al., 2021). Therefore, we suggest that the critical difference between the haptic and visual versions of the NOMT-buttons, just as with the critical commonality between the hMatch-Spaceships and our visual tests, has to do with the type of information most useful for discrimination. The size of the buttons and their flat mounting on cards meant that information extraction was limited to the fingertips. In contrast, the spaceships were less restrictive, allowing the encoding of global shape information through hand-grasping, or of smaller details using more intermediate grasps like pinches (the manner of presentation of the buttons did not allow pinching them, only enclosing them with several fingertips, rubbing, or tracing them). That is, while both types of objects have shape and texture features, the constraints on their exploration plausibly result in different types of features being used for object recognition.
Early visuo-haptic object recognition research reported similar distinctions between simple, flat, texture-like objects and realistic complex objects (Cashdan, 1968; Klatzky et al., 1985). While we believe that the exploratory procedures (and therefore the type of information used for object discrimination) could be the critical difference between the two haptic tests, we did not manipulate this systematically. Participants were not instructed on how to manipulate, grasp, or touch the objects. How they chose to explore buttons vs. spaceships differed and may have been constrained by the task (for instance, recognizing an object rather than picking it up or using it), the time limitations we imposed, and the presentation (objects fixed to a horizontal surface). Future work could more systematically study exploratory procedures by varying each of these factors. This may help to elucidate how different exploratory procedures may tap into separate haptic object recognition abilities.

We acknowledge further limitations of the present work. The reliability of the hMatch-Spaceships was modest, especially using d′ scores. This stems from the challenge of time-consuming testing in the haptic format. The matching format with two options, partly because chance is higher, is not as efficient as the 3-AFC format used in the NOMT tests. In both modalities, the NOMT format achieved acceptable reliability with fewer than 60 trials. In contrast, our visual matching task used 360 trials, while the haptic matching task used only 62. When the number of trials is limited, a good selection of the best trials can improve reliability; indeed, the Spearman-Brown prophecy formula suggests that if we had run only 62 trials of the visual matching test, reliability would drop from 0.78 to 0.38. The fact that the reliability of our 62 haptic matching trials is higher than this (0.54) suggests that increasing the number of trials and continuing to hone the test based on item analyses may achieve a good compromise. Unrelated to scientific considerations, because of the COVID-19 crisis during the second phase of our data collection, we could not compare visual and haptic versions of the NOMT-buttons test in the same participants or develop new haptic tests to directly test hypotheses about the critical dimensions distinguishing our two haptic tests. We therefore opted to gather data on a visual buttons task to exclude a possible interpretation of the initial haptic results. We plan additional haptic studies in the future, including the development of several haptic tests with various categories that allow different types of exploration. One interesting option would be to design objects that can be explored by grasping or by fingertip exploration depending on instruction or context, to investigate the effects on observed individual differences. Finally, our results suggest that we would gain from developing tests to explore individual differences in the processing of texture in the visual modality, to ask whether this ability is distinct from o in tasks where shape information is diagnostic.

This study represents a first step in exploring individual differences in haptic object recognition and their correlation with visual object recognition ability. Ultimately, we hope this research program can follow the same direction as the work in vision that inspired it, by using multiple tasks and stimuli to converge on a single theorized latent variable.
This approach would allow for a stronger test of the hypothesis we formulate here: that measures of haptic object recognition allowing hand exploration of shape features would load on a different factor than measures of haptic object recognition relying on constrained finger exploration of textures. With a battery of haptic tests, we could also measure the extent to which a general haptic object recognition factor (o_h) relates to o_v in the context of structural equation modeling. As illustrated in the case of visual abilities, this approach can reveal stronger relationships (Richler et al., 2019; Sunday et al., 2021) than simple correlations do (McGugin et al., 2012; Richler et al., 2017), because it provides estimates of relationships that are not attenuated by measurement error and construct-irrelevant variance. For the time being, our results suggest that the domain-general object recognition ability identified in prior work is not a purely visual ability, but is more likely related to the processing of shape regardless of the input modality.

Funding: This work was supported by the David K. Wilson Chair Research Fund (Vanderbilt University).

Data availability: All data are available at https://osf.io/cxpqf/.

Code availability: Code for the in-lab experiments is available at https://osf.io/cxpqf/.

References:
Convergence of visual and tactile shape processing in the human lateral occipital complex
Visual long-term memory has a massive storage capacity for object details
Towards a better understanding of parallel visual processing in human vision: evidence for exhaustive analysis of visual information
Visual and haptic form discrimination under conditions of successive stimulation
Domain-specific and domain-general contributions to reading musical notation
On the choice of grasp type and location when handing over an object
Multimodal similarity and categorization of novel, three-dimensional objects
The Cambridge car memory test: a task matched in format to the Cambridge face memory test, with norms, reliability, sex differences, dissociations from face memory, and expertise effects
Visuo-haptic integration in object identification using novel objects
Orientation dependence in the recognition of familiar and novel views of three-dimensional objects
Categorizing natural objects: a comparison of the visual and the haptic modalities. Experimental Brain Research
Visual and haptic perceptual spaces show high similarity in humans
Domain-specific and domain-general individual differences in visual object recognition
Translating experimental paradigms into individual-differences research: contributions, challenges, and practical recommendations. Consciousness and Cognition
The reliability paradox: why robust cognitive tasks do not produce reliable individual differences
Long-term memory for haptically explored objects: fidelity, durability, incidental encoding, and cross-modal transfer
The theory of probability
Identifying objects from a haptic glance
Identifying objects by touch: an "expert system"
What's new in Psychtoolbox-3? https://scholar.google.de/citations?view_op=view_citation&hl=en&user=EO6eQRkAAAAJ&citation_for_view=EO6eQRkAAAAJ
Mental representation in visual/haptic crossmodal memory: evidence from interference effects
Object familiarity modulates the relationship between visual object imagery and haptic shape perception
Spatial imagery in haptic shape perception
Perception, object kind, and object naming
Hand movements: a window into haptic object recognition
Haptic perception: a tutorial. Attention, Perception, & Psychophysics
The Vanderbilt Expertise Test reveals domain-general and domain-specific sex effects in object recognition
Viewpoint dependence in visual and haptic object recognition
The assessment and analysis of handedness: The Edinburgh Inventory
Canonical perspective and the perception of objects. Attention and Performance, IX
The Raven's progressive matrices: change and stability over culture and time
General object recognition is specific: evidence from novel and familiar objects
Individual differences in object recognition
Basic objects in natural categories
Behavioral development and construct validity: the principle of aggregation
Dual pathways for haptic and visual perception of spatial and texture information
Haptic shape processing in visual cortex
Demonstration of formulae for true measurement of correlation
Correlation calculated from faulty data
Both fluid intelligence and visual object recognition ability relate to nodule detection in chest radiographs
Novel and familiar object recognition rely on the same ability
Symmetry, broken symmetry, and handedness in bimanual coordination dynamics
Dual-hemisphere tDCS facilitates greater improvements for healthy subjects' nondominant hand compared to uni-hemisphere stimulation
How to quantify the evidence for the absence of a correlation
Differential weighting: a review of methods and empirical studies
A default Bayesian hypothesis test for correlations and partial correlations
How to use individual differences to isolate functional organization, biology, and utility of visual functions; with illustrative proposals for stereopsis
Canonical views in haptic object perception

Conflicts of interest: We have no conflicts of interest to disclose.