key: cord-0024595-qgi1kb1t authors: Russell, Madisen T.; Funsch, Kensie M.; Springfield, Cassi R.; Ackerman, Robert A.; Depp, Colin A.; Harvey, Philip D.; Moore, Raeanne C.; Pinkham, Amy E. title: Validity of remote administration of the MATRICS Consensus Cognitive Battery for individuals with severe mental illness date: 2021-12-02 journal: Schizophr Res Cogn DOI: 10.1016/j.scog.2021.100226 sha: 34780a02dca85f0a2e4a612935f39da45225d951 doc_id: 24595 cord_uid: qgi1kb1t The MATRICS Consensus Cognitive Battery (MCCB) is a gold-standard tool for assessing cognitive functioning in individuals with severe mental illness. This study is an initial examination of the validity of remote administration of 4 MCCB tests measuring processing speed (Trail Making Test: Part A, Animal Fluency), working memory (Letter-Number Span), and verbal learning and memory (Hopkins Verbal Learning Test-Revised). We conducted analyses on individuals with bipolar disorder (BD) and schizophrenia-spectrum disorders (SCZ), as well as healthy volunteers, who were assessed in-person (BD = 80, SCZ = 116, HV = 14) vs. remotely (BD = 93, SCZ = 43, HV = 30) to determine if there were significant differences in performance based on administration format. Additional analyses tested whether remote and in-person assessment performance was similarly correlated with symptom severity, cognitive and social cognitive performance, and functional outcomes. Individuals with BD performed significantly better than those with SCZ on all MCCB subtests across administration format. Animal Fluency did not differ by administration format, but remote participants performed significantly worse on Trail Making and HVLT-R. On the Letter-Number Span task, individuals with bipolar disorder performed significantly better when participating remotely. Finally, patterns of correlations with related constructs were largely similar between administration formats. 
Thus, results suggest that remote administration of some of the MCCB subtests may be a valid alternative to in-person testing, but more research is necessary to determine why some tasks were affected by administration format. Cognitive impairment is a central feature of schizophrenia spectrum disorders. Individuals with schizophrenia demonstrate cognitive deficits across numerous domains, including attention, verbal learning and memory, processing speed, working memory, and executive functioning (e.g., Nuechterlein et al., 2004; Kahn and Keefe, 2013; Bowie and Harvey, 2006; Bora et al., 2009), with average weighted effect sizes (Hedges' g) ranging from 0.43 to 1.55 (Schaefer et al., 2013). Most patients with schizophrenia demonstrate cognitive impairment to some extent, though the breadth and severity of cognitive dysfunction varies. Within an individual, the level of cognitive impairment appears to be stable across time and fluctuations in clinical status (Harvey et al., 1999; Gold, 2004). Similarly, bipolar disorder is associated with cognitive deficits across the same domains (e.g., Cardenas et al., 2016; Bora and Ozerdem, 2017), with average weighted effect sizes (Hedges' g) ranging from 0.42 to 0.96 (Torres et al., 2007; Mann-Wrobel et al., 2011). Though cognitive impairment in bipolar disorder is relatively less severe than that observed in schizophrenia, the cognitive profiles of the two disorders are very similar (e.g., Bortolato et al., 2015; Krabbendam et al., 2005; Lynham et al., 2018; Bora and Pantelis, 2015; Reichenberg et al., 2008). Recent work examining cognitive impairment across the bipolar-schizophrenia spectrum suggests that cognitive dysfunction increases in severity from bipolar disorder, to schizoaffective disorder bipolar type, to schizophrenia and schizoaffective disorder depressive type, with no difference in severity of cognitive impairment between schizophrenia and schizoaffective disorder depressive type (Lynham et al., 2018).
Overall, the trajectory of cognitive dysfunction in bipolar disorder appears somewhat similar to that in schizophrenia, with impairment beginning early in both disorders and remaining relatively stable over time after diagnosis (e.g., Bora and Ozerdem, 2017; Bora and Pantelis, 2015). Importantly, cognitive dysfunction contributes significantly to functional disability in both schizophrenia and bipolar disorder. In schizophrenia, cognitive impairment is associated with poorer community living skills, deficits in problem-solving, and difficulty maintaining employment (Bryson and Bell, 2003; Green et al., 2000). Estimates derived from reviews of the literature suggest that neurocognitive dysfunction explains between 20% and 60% of the variance in functional outcomes of individuals with schizophrenia (Green et al., 2000; Fett et al., 2011). Similarly, cognitive disability accounts for a significant proportion of variation in functioning in bipolar disorder, with estimates consistent with those identified in schizophrenia (Depp et al., 2012). Although there is greater functional disability in schizophrenia relative to bipolar disorder, neurocognitive dysfunction still predicts poorer work skills, poorer community living skills, and difficulties in interpersonal behavior in bipolar disorder, with evidence suggesting that the structure of the correlational relationships is essentially identical (Bowie et al., 2010; Mausbach et al., 2010). Given the cognitive deficits and associated functional outcomes seen in both bipolar disorder and schizophrenia spectrum disorders, it is important to have a standardized way to assess cognition in these groups. The MATRICS Consensus Cognitive Battery (MCCB; Nuechterlein et al., 2008) was developed to standardize assessment of cognitive impairment in schizophrenia and is typically considered the gold standard in the field.
It contains ten tasks covering seven cognitive domains: processing speed, verbal learning, working memory, visual learning, reasoning/problem solving, social cognition, and attention. Although there have been suggestions regarding modifying the MCCB for bipolar disorder (Yatham et al., 2010), studies demonstrate that the MCCB is sensitive to cognitive impairment in both schizophrenia and bipolar disorder (Bo et al., 2017; Burdick et al., 2011; Kern et al., 2011; Lystad et al., 2014). Abbreviated forms of the MCCB (e.g., Pinkham et al., 2018) and similar neurocognitive assessments (Keefe et al., 2006) capture schizophrenia-related impairment while still showing expected correlations with functional outcomes. In situations where in-person testing may not be feasible (e.g., due to health concerns, lack of transportation, limited funds, etc.), it is crucial to have a reliable, remote battery to assess cognitive functioning in individuals with severe mental illness. To our knowledge, no remote version of the MCCB has yet been developed, but web-based and smartphone app assessments designed to mirror widely used assessments like the MCCB show strong correlations with in-person measures in schizophrenia spectrum, bipolar, and healthy control populations (Biagianti et al., 2019; Domen et al., 2019; Miskowiak et al., 2021). Though they still require validation for at-home administration, these assessments appear to be comparable, albeit not identical, alternatives to traditional assessment. However, limited technology literacy and lack of access to technology and internet could hinder implementation of internet- and app-based assessments in certain populations. Telephone-based assessment offers a potentially more accessible solution but has received less attention within psychiatric populations. Notably, Berns et al. (2004) compared performance on an in-person and a telephone-based cognitive battery in outpatients with schizophrenia.
They found no difference between administration modes on tasks that were conceptually simple or that gradually increased in complexity, such as Letter-Number Span (Gold, 1997). Tasks that were complex and demanding from the outset, however, showed poorer performance when administered by phone, such as the California Verbal Learning Test (CVLT; Delis et al., 1987). At-home telephone assessment has been more thoroughly studied in non-psychiatric cognitively impaired populations. Telephone versions of two MCCB tasks, the Hopkins Verbal Learning Test-Revised (Brandt and Benedict, 2001) and a category fluency task, demonstrated strong correlations with in-person assessments and good discrimination between cognitively impaired and healthy participants (e.g., Bunker et al., 2016; Lachman et al., 2014). Other tasks not contained within the MCCB, but which assess overlapping cognitive domains (e.g., verbal learning, memory, processing speed), have similarly shown good agreement between telephone and in-person administration (Jagtap et al., 2021; Lachman et al., 2014; Rapp et al., 2012). Given these data suggesting that remote administration of cognitive assessments like the MCCB may be a feasible and valid alternative to in-person testing, the current paper presents results from an initial assessment of the validity of telephone-based administration of select MCCB subtests in individuals with schizophrenia/schizoaffective disorder and bipolar disorder. Task performance was compared between individuals who completed in-person assessments vs. those who completed them remotely, as well as between diagnoses. Correlations between task performance and symptoms and functional outcomes were also compared between administration formats. Based on previous findings, we anticipated that individuals with bipolar disorder would perform better than individuals with schizophrenia spectrum disorders.
Similarly, given previous findings that telephone-based assessments are comparable to in-person assessments, we also predicted minimal effects of administration format on task performance and similar patterns of correlations between most MCCB tasks and related constructs regardless of administration format. For the HVLT-R, however, based on Berns et al. (2004), we anticipated that performance could be poorer under remote administration in the SCZ group. Participants were adults between the ages of 18 and 60 with schizophrenia/schizoaffective disorder (SCZ), bipolar disorder I or II (BD), or non-psychiatric healthy volunteers (HV). Psychiatric diagnoses were confirmed via the Mini International Neuropsychiatric Interview (MINI; Sheehan et al., 1998) and the Structured Clinical Interview for DSM Disorders-Psychosis Module (SCID; First et al., 2015). Individuals were required to be proficient in English, to have had no psychiatric hospitalizations for at least 6 weeks, no significant medication regimen changes for a minimum of 6 weeks, and no dose changes >20% for a minimum of 2 weeks. Additionally, participants could not have (1) presence or history of medical or neurological disorders that may affect brain function (e.g., stroke, epilepsy), (2) presence or history of neurodegenerative disorder (e.g., dementia, Parkinson's disease), (3) history of unconsciousness for a period greater than 15 min, (4) significant impairment of visual (e.g., blindness, glaucoma, vision uncorrectable to 20/40) or hearing (e.g., hearing loss) abilities, (5) presence or history of pervasive developmental disorder (e.g., autism) or intellectual disability (defined as IQ <70), or (6) current diagnosis of substance use disorder. Data were collected across three sites between December 2018 and June 2021: The University of Texas at Dallas, the University of Miami, and the University of California, San Diego, resulting in a total of 376 participants.
Data were collected as part of a larger study, during which COVID-19 related restrictions on in-person data collection necessitated a transition to remote assessment. Participants were separated into groups based on MCCB administration format and mental health diagnosis: 166 completed the MCCB subtests remotely (93 BD, 43 SCZ, 30 HV) and 210 completed the MCCB subtests in person (80 BD, 116 SCZ, 14 HV). Participants completed four tests from the MCCB (Nuechterlein et al., 2008): two measuring processing speed (Trail Making Test: Part A; Category Fluency: Animal Naming), one assessing verbal working memory (Letter-Number Span), and one assessing verbal learning and memory (Hopkins Verbal Learning Test-Revised). The Trail Making Test (TMT): Part A (range 0-300 s) is a timed paper-and-pencil task in which participants draw a single line to consecutively connect numbered circles placed irregularly on a sheet of paper. The Category Fluency: Animal Naming test is an oral test in which participants name as many animals as they can in a one-minute period. The Letter-Number Span test (range 0-24) is an orally administered test in which the tester reads a string of numbers and letters, and the participant mentally reorders them (numbers in ascending order, then letters alphabetically) and repeats them back. The HVLT-R (range 0-36) is an orally administered test in which the researcher reads aloud a list of 12 words from three different categories and the participant is asked to recall as many words as possible after each of three learning trials. Measures assessing global cognitive, social cognitive, and real-world functioning were completed separately by an informant and the research coordinator. Informants were high-contact individuals who knew the participant well and who themselves did not have any psychiatric diagnoses (e.g., first-degree relative, significant other, close friend). All informant reports were collected via telephone.
Research coordinators generated ratings using an "all-sources" approach consistent with Harvey et al. (2019) that integrated information gathered from interviews with the patients, informants, and their own experiences with the participants. The Specific Levels of Functioning Scale (SLOF; Schneider and Struening, 1983) is a 30-item survey assessing participants' functioning and behavior across 4 domains: interpersonal relationships, social acceptability, activities of community living, and work skills. Informants responded to items using a 5-point Likert scale, with higher mean values representing better functioning in each domain. The Observable Social Cognition Rating Scale (OSCARS; Healey et al., 2015) is an 8-item self-report or interviewer assessment of ability across social cognitive domains (i.e., theory of mind, emotion perception, cognitive rigidity, jumping to conclusions, attributional style), yielding a total score ranging from 8 to 56, with higher scores indicating greater impairment. The Cognitive Assessment Interview (CAI; Ventura et al., 2010) assesses subjective cognitive functioning across 6 domains (10 items): (1) working memory, (2) attention/vigilance, (3) verbal learning and memory, (4) reasoning and problem-solving, (5) speed of processing, and (6) social cognition. The CAI was administered to informants as an oral semi-structured interview, and ratings were made by the researcher according to participant/informant responses. In addition to a total score comprising the sum of all items (range 7-70), a global assessment of function score is also given (range 0-100). Both indices were used here, with higher scores indicating worse cognitive functioning on the summed score and better cognitive functioning on the global assessment of function score. Severity of positive, negative, and general symptoms was assessed with the Positive and Negative Syndrome Scale (PANSS; Kay et al., 1987).
Mood symptoms were further assessed with the Montgomery-Åsberg Depression Rating Scale (MADRS; Montgomery and Åsberg, 1979) and the Young Mania Rating Scale (YMRS; Young et al., 1978). For all measures, higher scores indicate greater severity. Estimated premorbid IQ was assessed using the Reading subtest of the Wide Range Achievement Test 3 (WRAT-III; Snelbaker et al., 2001). All participants provided documented informed consent, and the IRBs at the University of Texas at Dallas, the University of California San Diego, and the University of Miami approved the study. In-person visits took place in labs on campus, while remote visits were conducted via telephone. Research staff had a bachelor's degree or higher and were trained over the course of several weeks, within and across sites, to administer and score all assessments in person and remotely. After establishing reliability (ICCs > 0.80), regular consensus meetings were held to ensure acceptable reliability between raters over time. All tasks and interviews were completed primarily via telephone and required minimal modification. Tasks that are typically administered orally (i.e., Animal Fluency, Letter-Number Span, HVLT-R) were implemented as is. The forms needed to complete the MCCB Trail Making Test were mailed to participants in advance of their appointments in separate, sealed envelopes, with instructions to open these materials only when prompted and observed by the examiner. A supplemental video call via smartphone or tablet was used during the Trail Making Test so that researchers could accurately gauge participants' time to completion. Participants were texted or emailed a link to view the WRAT-III stimuli. PANSS ratings for 4 items that required prolonged visual behavioral observations (i.e., Blunted Affect, Tension, Mannerisms and Posturing, and Motor Retardation) were omitted from both in-person and remote participants' total scores.
Prior to task administration, participants were instructed to move to a quiet environment without distractions (e.g., away from other individuals, silencing/powering down extraneous devices), and researchers ensured that the participant could hear them well. Participants were also asked to refrain from using any performance aids, such as writing down stimulus items or seeking help from others. Groups were first split by diagnosis (BD, SCZ, HV), and demographic differences between administration format groups (remote vs. in-person administration) were assessed using independent-samples t-tests or chi-square tests (χ²) as appropriate. Extreme outliers (±3 SDs) on each task were excluded from analyses task by task, resulting in slight N differences between tasks (see Table 2). Because Trail Making scores are completion times, they were more likely than scores from other tasks to be outliers and thus excluded. The numbers of participants performing at levels consistent with floor/ceiling effects on each of the MCCB subtests were also assessed to evaluate score distributions in each administration format. Separate two-way analysis of covariance (ANCOVA) tests were then conducted to identify statistically significant effects of diagnosis (BD, SCZ) and administration format on MCCB test performance, controlling for PANSS symptom ratings (positive, negative). An additional independent-samples t-test was used to examine the effect of administration format among healthy participants. PANSS ratings were converted to averages for each participant to account for the difference in the number of items rated between administration types. To determine differences in the strength of associations with related constructs as a function of administration format, we compared Pearson's r correlations by applying Fisher's r-to-z transformation and calculating observed z values (z observed).
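The two screening and comparison steps just described (excluding scores beyond ±3 SDs and comparing independent Pearson correlations via Fisher's r-to-z transformation) can be sketched in a few lines of Python. This is an illustrative re-implementation under standard formulas, not the authors' analysis code, and all numeric values in the usage note are hypothetical:

```python
import math

def exclude_outliers(scores, n_sd=3.0):
    """Drop scores more than n_sd sample standard deviations from the mean.

    A simple sketch of the task-by-task screen described in the text.
    """
    mean = sum(scores) / len(scores)
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (len(scores) - 1))
    return [x for x in scores if abs(x - mean) <= n_sd * sd]

def fisher_z_compare(r1, n1, r2, n2):
    """Compare two independent Pearson correlations.

    Applies Fisher's r-to-z transform (atanh) to each correlation and
    returns the observed z statistic:
        z = (atanh(r1) - atanh(r2)) / sqrt(1/(n1-3) + 1/(n2-3))
    |z| > 1.96 indicates a significant difference at the two-tailed .05 level.
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (z1 - z2) / se
```

For example, with hypothetical correlations r = .50 (n = 80) in one format and r = .20 (n = 93) in the other, `fisher_z_compare(0.5, 80, 0.2, 93)` yields z ≈ 2.23, which exceeds the two-tailed .05 cutoff of 1.96.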
All groups were similar on age, gender, race, ethnicity, years of education, and estimated IQ (see Table 1). Compared to remote participants, the in-person BD and SCZ groups had significantly higher ratings of positive symptoms (BD: t(171) = 3.946, p < .001; SCZ: t(157) = 3.324, p = .001), and in-person individuals with BD also had higher ratings of negative symptoms (t(171) = 2.945, p = .004). Outliers represented less than 5% of the overall sample and were distributed relatively evenly across groups (see Table 2). Six scores were excluded from TMT-A (1 BD in-person, 1 BD remote, 2 SCZ in-person, 1 SCZ remote, 1 HV in-person) and 2 scores were excluded from HVLT-R (1 BD in-person, 1 SCZ in-person). Due to various extraneous circumstances, twelve individuals did not complete the TMT-A (4 BD remote, 7 SCZ remote, 1 HV remote), two did not complete Animal Fluency (1 BD remote, 1 SCZ in-person), and one individual did not complete the HVLT-R (1 SCZ in-person). No participants in either administration format performed at ceiling or floor on any of the tasks. ANCOVAs were conducted separately for each MCCB subtest: (1) Trail Making, (2) Letter-Number Span, (3) Animal Fluency, and (4) HVLT-R. Descriptive statistics for performance are provided in Table 2. As anticipated, there was a significant main effect of group (BD vs. SCZ) on all MCCB tests: Trail Making (F(1, 310) = 6.544, p = .011), Letter-Number Span (F(1, 326) = 6.665, p = .003), Animal Fluency (F(1, 324) = 12.985, p < .001), and HVLT-R (F(1, 321) = 12.056, p < .001), showing that individuals with bipolar disorder performed significantly better than individuals with schizophrenia/schizoaffective disorders across all tasks. There was a significant main effect of administration format on Trail Making (F(1, 310) = 22.393, p < .001) and HVLT-R (F(1, 321) = 6.499, p = .007), with remote participants performing worse on both tasks. No other tasks showed significant main effects of format.
The only task that showed a significant interaction between diagnosis and administration format was Letter-Number Span (F(1, 326) = 4.487, p = .05), indicating that individuals with bipolar disorder performed significantly better on this task when it was administered remotely. There were no statistically significant differences in MCCB task performance by administration format in the healthy volunteer group (all p values > .05). Across diagnostic groups and administration formats, higher severity of negative symptoms significantly correlated with poorer performance on several MCCB tests. In BD, positive symptoms, depression (MADRS), and mania (YMRS) did not correlate with performance on any MCCB tests (Table 3). In the SCZ group, increased positive symptoms significantly correlated with higher scores on Animal Fluency, regardless of administration format, and increased depressive symptoms were positively correlated with HVLT-R performance in the remote group (Table 4). Across administration types and diagnostic groups, both informant and RA ratings on the SLOF, OSCARS, and CAI were significantly associated with performance on several MCCB tests, with varying correlation strengths. As expected, correlations with MCCB tasks were strongest for the CAI, which assesses cognitive functioning (Tables 3 & 4). There were relatively few differences in correlation strengths across administration formats (see Supplemental Table 1). For the BD group, Trail Making showed the highest number of discrepancies based on remote vs. in-person administration, with 4 pairs of correlations (out of 19) showing significant differences. Letter-Number Span had no differences, whereas Animal Fluency and HVLT-R each had one (see Table 3). For the SCZ group, Letter-Number Span and HVLT-R had 4 and 3 discrepancies, respectively, between administration formats. Trail Making and Animal Fluency each had only one (see Table 4).
Discrepancies were not concentrated in any particular domains; however, within the SCZ group, most discrepancies occurred for positive and negative symptoms, with stronger correlations in the remote group as compared to in-person. With advances in technology and potential limitations to in-person testing, adaptation of cognitive functioning assessments for remote administration may be a viable option for assessing cognitive abilities in individuals with severe mental illness. This study provides an initial assessment of the validity of remote administration of select MCCB subtests (Trail Making, Letter-Number Span, Animal Fluency, and HVLT-R) in individuals with schizophrenia-spectrum disorders and bipolar disorder. As anticipated, the bipolar group performed significantly better than the schizophrenia-spectrum group on all MCCB tasks, regardless of administration format, supporting previous research findings that individuals with bipolar disorder have higher levels of cognitive functioning than individuals with schizophrenia-spectrum disorders (Bortolato et al., 2015; Krabbendam et al., 2005; Lynham et al., 2018; Bora and Pantelis, 2015). Additionally, this finding provides some validation for remote telephone administration of the MCCB given that group differences were evident in both formats. Further supporting the validity of remote assessment, we found that, across diagnostic groups and administration formats, MCCB task performance was significantly correlated with symptom severity (especially negative symptoms), social functioning, and overall cognitive functioning, in line with previous research findings (e.g., August et al., 2012; Harvey et al., 2006). While the strength of some correlations varied between administration formats, only 8.89% of these strength
differences were significant in the BD group, and 11.11% were significantly different in the SCZ group, suggesting comparable patterns of correlation for the majority of tasks. While it is not possible to draw definitive conclusions from the current data, it is possible that having a social versus non-social environment during testing, as well as symptom severity and task attention/engagement, may explain some of the differences in correlation strength. Future research should examine factors that may moderate the relationship of remote versus in-person cognitive performance with symptom severity and with social and non-social functioning. In terms of specific subtests, administration format did not appear to affect performance on the Animal Fluency task. This test has relatively short and simple instructions and does not require any back-and-forth between the administrator and participant, which may explain why we did not see any significant difference in performance between administration types. Thus, Animal Fluency can be validly administered via telephone. However, performance on both HVLT-R and Trail Making was worse when administered remotely. As noted previously, tasks that are complex and demanding from the outset, like the HVLT-R, may be more difficult when administered by phone versus in person, and similar findings have been reported for the CVLT (Berns et al., 2004). Slower completion of remotely administered Trail Making may be related to reduced control over participants' testing environments, as well as technological difficulties (e.g., difficulty setting up devices for the video call, potential lag in the video call making error corrections take more time, the assessor being unable to point directly at participants' papers during mistake corrections, etc.). Future studies administering these two tasks may consider conducting thorough prescreening to ensure strong audio and video connections or adding practice trials to ensure participant understanding.
Individuals with bipolar disorder performed better on the Letter-Number Span task when it was administered remotely versus in person, but the reasons for this are unclear. We did not see the same pattern in individuals with schizophrenia, consistent with Berns et al.'s (2004) findings. While premorbid IQ was slightly higher in the remote BD group, accounting for variability related to IQ had only a minimal effect on the results, increasing the p-value only slightly (to p = .057). IQ differences are therefore unlikely to account for the interaction effect. Because administrators were not on video with participants during this task, it is also possible that participants could have been cheating; however, because BD remote participants did worse on the HVLT-R, for which they could also have written down the items, this seems unlikely. Some limitations require consideration. First, only between-subject comparisons were assessed. Definitive attempts to examine the validity of remote assessments would require within-person comparisons between formats. Second, our sample of healthy individuals was relatively small, as was the SCZ remote group. Third, while the current analyses addressed sensitivity to group differences, relationships to functional outcomes, and floor/ceiling effects, a full psychometric analysis that allows examination of test-retest reliability and utility as a repeated measure is still needed. Fourth, the COVID-19 pandemic provided the impetus for adapting our measures to remote administration, and remote data collection occurred exclusively during the pandemic. Therefore, differences between administration formats may be confounded with the presence of the pandemic. Finally, while the prospect of remote assessment has many potential benefits, it is important to note that this format may also be less accessible than in-person testing to some demographic groups (e.g., those with reduced access to video calling).
Overall, while not definitive, our results suggest remote telephone-based administration of some MCCB tests may be a feasible and valid method for assessing cognitive functioning in individuals with bipolar disorder or schizophrenia spectrum disorders. Supplementary data to this article can be found online at https://doi.org/10.1016/j.scog.2021.100226. This work was supported by the National Institute of Mental Health (grant number R01 MH112620 to A.P.). Madisen T. Russell: Formal analysis; Investigation; Resources; Data curation; Writing-Original draft; Writing-Review & Editing.

References (article titles only):
The MATRICS Consensus Cognitive Battery (MCCB): clinical and cognitive correlates
Telephone administration of neuropsychological tests can facilitate studies in schizophrenia
Development and testing of a web-based battery to remotely assess cognitive health in individuals with schizophrenia
Use of the MATRICS Consensus Cognitive Battery (MCCB) to evaluate cognitive deficits in bipolar disorder: a systematic review and meta-analysis
Meta-analysis of longitudinal studies of cognition in bipolar disorder: comparison with healthy controls and schizophrenia
Meta-analysis of cognitive impairment in first-episode bipolar disorder: comparison with first-episode schizophrenia and healthy controls
Cognitive functioning in schizophrenia, schizoaffective disorder and affective psychoses: meta-analytic study
Cognitive dysfunction in bipolar disorder and schizophrenia: a systematic review of meta-analyses
Cognitive deficits and functional outcome in schizophrenia
Prediction of real-world functional disability in chronic mental disorders: a comparison of schizophrenia and bipolar disorder
Hopkins Verbal Learning Test-Revised
Initial and final work performance in schizophrenia: cognitive and symptom predictors
The SAGES telephone neuropsychological battery: correlation with in-person measures
The MATRICS Consensus Cognitive Battery in patients with bipolar I disorder
Neurocognitive functioning in euthymic patients with bipolar disorder and unaffected relatives: a review of the literature
The California Verbal Learning Test. Psychological Corporation
Meta-analysis of the association between cognitive abilities and everyday functioning in bipolar disorder
The validation of a new online cognitive assessment tool: the MyCognition Quotient
The relationship between neurocognition and social cognition with functional outcomes in schizophrenia: a meta-analysis
Structured Clinical Interview for DSM-5-Research Version (SCID-5 for DSM-5)
Auditory working memory and Wisconsin Card Sorting Test performance in schizophrenia
Cognitive deficits as treatment targets in schizophrenia
Neurocognitive deficits and functional outcome in schizophrenia: are we measuring the "right stuff"?
Cognitive decline in late-life schizophrenia: a longitudinal study of geriatric chronically hospitalized patients
Negative symptoms and cognitive deficits: what is the nature of their relationship?
Autism symptoms, depression, and active social avoidance in schizophrenia: association with self-reports and informant assessments of everyday functioning
Observable Social Cognition-A Rating Scale: an interview-based assessment for schizophrenia
Known-groups and convergent validity of the telephone Rey Auditory Verbal Learning Test total learning scores for distinguishing between older adults with amnestic cognitive impairment and subjective cognitive decline
Schizophrenia is a cognitive illness: time for a change in focus
The Positive and Negative Syndrome Scale (PANSS) for schizophrenia
Baseline neurocognitive deficits in the CATIE schizophrenia trial
The MCCB impairment profile for schizophrenia outpatients: results from the MATRICS psychometric and standardization study
Cognitive functioning in patients with schizophrenia and bipolar disorder: a quantitative review
Monitoring cognitive functioning: psychometric properties of the Brief Test of Adult Cognition by Telephone
Examining cognition across the bipolar/schizophrenia diagnostic spectrum
The MATRICS Consensus Cognitive Battery (MCCB): performance and functional correlates
Meta-analysis of neuropsychological functioning in euthymic bipolar disorder: an update and investigation of moderator variables
Relationship of the Brief UCSD Performance-based Skills Assessment (UPSA-B) to multiple indicators of functioning in people with schizophrenia and bipolar disorder
Internet-based cognitive assessment tool: sensitivity and validity of a new online cognition screening tool for patients with bipolar disorder
A new depression scale designed to be sensitive to change
Identification of separable cognitive factors in schizophrenia
The MATRICS Consensus Cognitive Battery, part 1: test selection, reliability, and validity
Social cognition psychometric evaluation: results of the final validation study
Validation of a cognitive assessment battery administered over the telephone
Neuropsychological function and dysfunction in schizophrenia and psychotic affective disorders
The global cognitive impairment in schizophrenia: consistent over decades and around the world
SLOF: a behavioral rating scale for assessing the mentally ill
The Mini-International Neuropsychiatric Interview (MINI): the development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10
Wide Range Achievement Test 3 (WRAT3)
Neuropsychological functioning in euthymic bipolar disorder: a meta-analysis
The Cognitive Assessment Interview (CAI): development and validation of an empirically derived, brief interview-based measure of cognition
The International Society for Bipolar Disorders-Battery for Assessment of Neurocognition (ISBD-BANC)
A rating scale for mania: reliability, validity and sensitivity