Krach, Shelley Kathleen; Paskiewicz, Tracy L.; Monk, Malaya M. (2020). Testing Our Children When the World Shuts Down: Analyzing Recommendations for Adapted Tele-Assessment during COVID-19. Journal of Psychoeducational Assessment. DOI: 10.1177/0734282920962839

In 2017, the National Association of School Psychologists described tele-assessment as the least researched area of telehealth. This became problematic in 2020 when COVID-19 curtailed the administration of face-to-face assessments. Publishers began to offer computer-adapted tele-assessment methods for tests that had previously been administered only in person. Recommendations for adapted tele-assessment practice had to be developed with little empirical data. The current study analyzed recommendations from entities including professional organizations, test publishers, and governmental offices. The samples for each were small, but the findings were noteworthy. Test publishers were unanimous in recommending the use of their face-to-face assessments through adapted tele-assessment methods (either with or without caution). Governmental agencies were more likely to recommend not using adapted tele-assessment methods or to use these methods with caution. Finally, professional organizations were almost unanimous in their recommendations to use adapted tele-assessment but to do so with caution. In addition to deviations in the types of recommendations provided, entities varied in how the information was distributed. About one-fifth (23.5%) of all entities surveyed provided no recommendations at all. About 45% of the remaining entities provided recommendations on their Web sites. The rest provided information through shared documents, online toolkits, peer-reviewed journals, and emails.
Implications for the field of psychology's future crisis management planning are discussed in response to these findings. Ideally, any test adapted to a tele-assessment format from a paper-based and/or face-to-face method should demonstrate psychometric equivalency. The requirements for psychometric equivalency were provided by the American Psychological Association Committee on Professional Standards and Committee on Psychological Tests and Assessment (1986) in their Guidelines for Computer-Based Tests and Interpretations and by the joint Standards for Educational and Psychological Testing (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). These equivalency standards require the following: (a) inter-version equivalency correlations, (b) mean score differences that are not statistically significant and have small effect sizes, and (c) score dispersion shapes that are not statistically significantly different from one another. Two additional guidelines should also be considered: (d) demographic equivalency between the study sample and the original normative sample (Grosch, Gottlieb, & Cullum, 2011; Hodge et al., 2019; Krach, McCreery, Dennis, Guerard, & Harris, 2020) and (e) a sample size sufficient to ensure the statistical power needed for the equivalency analyses (Cohen, 1988; Farmer et al., 2020a). All these criteria must be demonstrated before two forms of a test can definitively be called equivalent. Unfortunately, the literature is scarce regarding the use of face-to-face, diagnostic, direct assessments through an adapted tele-assessment method.
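Criteria (a) through (c) above can be illustrated computationally. The following Python fragment is a minimal sketch, not the authors' method: it computes, for hypothetical paired scores from a face-to-face and a tele-assessment administration, the inter-version correlation, Cohen's d for the paired mean difference, and a crude variance-ratio check of dispersion. The function name and data are assumptions for illustration; a real equivalency study would use formal significance tests and also address criteria (d) and (e).

```python
# Illustrative sketch only: heuristic checks of the three core equivalency
# criteria using hypothetical paired scores (no external libraries needed).

def equivalency_summary(face_to_face, tele):
    """Return (a) inter-version correlation, (b) Cohen's d for the paired
    mean difference, and (c) the ratio of the two score variances."""
    n = len(face_to_face)
    m1 = sum(face_to_face) / n
    m2 = sum(tele) / n
    # (a) Pearson correlation between the two administrations
    cov = sum((a - m1) * (b - m2) for a, b in zip(face_to_face, tele)) / (n - 1)
    var1 = sum((a - m1) ** 2 for a in face_to_face) / (n - 1)
    var2 = sum((b - m2) ** 2 for b in tele) / (n - 1)
    r = cov / (var1 ** 0.5 * var2 ** 0.5)
    # (b) Cohen's d for paired scores: mean difference over SD of differences
    diffs = [a - b for a, b in zip(face_to_face, tele)]
    md = sum(diffs) / n
    sd_diff = (sum((d - md) ** 2 for d in diffs) / (n - 1)) ** 0.5
    cohens_d = md / sd_diff
    # (c) crude dispersion check: a variance ratio near 1 suggests similar spread
    return {"r": r, "cohens_d": cohens_d, "var_ratio": var1 / var2}
```

Equivalency would be supported only when the correlation is high, the effect size is small, and the dispersions match, alongside the demographic-match and sample-size requirements.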
Only the following tests have been evaluated for this purpose: the Woodcock-Johnson Tests of Cognitive Abilities and Tests of Achievement, Fourth Edition (WJ-IV: COG and WJ-IV: ACH; Wright, 2018b), the Reynolds Intellectual Assessment Scales (RIAS; Wright, 2018a), and the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V; Wright, 2020). None of these studies met all the requirements set forth to definitively support version equivalency. Although equivalency research examining Pearson's paper-based and Q-Interactive (Qi) methods exists (Daniel, 2012a, 2012b, 2013a, 2013b, 2013c; Daniel, Wahlstrom, & Zhang, 2014a; Daniel, Wahlstrom, & Zhou, 2014b), these studies were all performed using in-person (face-to-face) methods and not adapted tele-assessment methods (Cayton, Wahlstrom, & Daniel, 2012). In these studies, both the test taker and the test administrator were in the same room. COVID-19 challenged entities and individuals to change their minds and their practices regarding adapted tele-assessment (Farmer et al., 2020a, 2020b). One clear example comes from NASP. In their Considerations for Delivery of School Psychological Telehealth Services (2017), NASP provided a list of concerns related to tele-assessment, specifically reporting that "most publishers do not currently have well-established or well-vetted systems established for conducting assessments virtually." Their recommendation at that time was that school psychologists use only "validated assessment tools and methods" (pp. 4-5, 25). Given the findings discussed above, none of the adapted tele-assessments described in this article would meet that criterion. In essence, NASP stated that adapted tele-assessment should not be used.
They reiterate this when stating that adapted tele-assessment "results may not hold up in a legal proceeding, since test construction and norming samples did not include a sample of those who were administered the assessments remotely" (p. 22). In 2020, during the height of the COVID-19 crisis, NASP provided an updated set of recommendations for virtual service delivery. In this updated document, they emphasized caution in administering and interpreting assessment results provided through adapted tele-assessment. Specifically, they state that "high-quality evidence" is needed prior to using any assessment adapted for delivery using only "platforms designed for that purpose" (p. 2). They add that any alteration to the standardized form of assessment should be considered carefully when making diagnostic decisions and clearly documented in any resulting assessment report. Therefore, they moved from stating not to use traditionally available tests as tele-assessments (National Association of School Psychologists, 2017) to suggesting cautious use. Farmer and colleagues (2020) go into additional detail as to how these types of policy/recommendation shifts manifest. A shift may have occurred when practitioners were faced with limited choices for conducting direct assessments unless they adapted to tele-assessment methods. Therefore, NASP's position altered when practitioners were given only two choices: (1) delay Individuals with Disabilities Education Improvement Act [IDEA] (2004) child find requirements or (2) use methods of questionable validity. Initial recommendations regarding adapted tele-assessment became available in mid-March 2020. On March 16, 2020, the Office for Civil Rights posted a white paper to their Web site suspending face-to-face testing for all P-12 schools until schools reopened.
Shortly after, on March 30, Pearson provided a webinar that gave step-by-step directions on how to use their Q-Interactive products in an adapted tele-assessment format (Henke, 2020). Pearson's webinar indicated no restrictions on the use of their products. In addition, on April 3, the American Psychological Association (APA; Wright, Mihura, Pade, & McCord, 2020) published an article on its Web site stating that the use of face-to-face tests in adapted tele-assessment was acceptable as long as caution was used when interpreting the results. On the other hand, the American Psychiatric Association [ApA] (2020) produced a similar white paper indicating no restrictions at all. For practitioners, it must have seemed that adapted tele-assessment recommendations would change depending on what you read, when you read it, who stated it, and how you received it. These recommendation differences continued as school buildings remained closed and as states extended social distancing recommendations into the summer and fall of 2020, and the methods by which practitioners might access these recommendations varied widely as well. Distribution to practitioners might best be described as haphazard. Some recommendations were provided as announcements on the entities' Web sites (e.g., MHS; Wheldon, 2020), some were available through multimedia presentations (e.g., podcasts, webinars, and virtual workshops; Henke, 2020; Sharp, Mcfadden, & Morera, 2020), and still others were shared documents stored in cloud drives (e.g., Wyoming State Board of Education, 2020). Methods of distributing the recommendations included advertisements and sharing on professional listservs, through online NASP communities, and through personal and professional email correspondence.
The current authors asked three main research questions related to these recommendations: (1) what information did practitioners receive regarding adapted tele-assessment methods for tests that were traditionally administered as face-to-face, in-person assessments, (2) was there a difference in the recommendations regarding adapted tele-assessment based on what type of entity was providing the information (i.e., test publisher, government agency, or professional organization), and (3) how were recommendations disseminated during the time of the COVID-19 crisis? Three categories of entity recommendations were evaluated. Psychological and mental health-related professional associations (e.g., APA and NASP) were chosen at the national and state levels. Test publishers were chosen if they provide commonly used diagnostic, direct assessments (e.g., Pearson and MHS). Finally, government agencies were chosen if they provide funding or legal guidelines for psychological services (e.g., the U.S. Department of Education and the U.S. Social Security Administration). All related national professional organizations and test publishers were included. In addition to the national entities described above, a stratified randomized sample of 15 additional state entities was also included. Given that COVID-19 has been reported as a national pandemic by the Centers for Disease Control and Prevention (2020, https://www.cdc.gov/), a nationally differentiated sample was chosen. In preparation for a chi-square analysis, each group needed a minimum of five entities per cell (McHugh, 2013). Therefore, using U.S. Census data, five state governmental and/or professional agencies were randomly selected using a random number generator after separating states into one of three potential groups: (1) the 10 states with the highest populations, (2) the 10 states with the lowest populations, and (3) the states whose populations did not fall into either of these two categories.
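The stratified sampling scheme just described can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' procedure: the population figures are placeholders rather than Census data, and the function name is invented for the example.

```python
# Hypothetical sketch of the sampling scheme described above: states are
# split into the 10 most populous, the 10 least populous, and the middle
# group, and 5 are then drawn at random from each stratum.
import random

def stratified_state_sample(populations, k=5, seed=None):
    """populations: dict mapping state name -> population."""
    rng = random.Random(seed)
    ranked = sorted(populations, key=populations.get, reverse=True)
    strata = {
        "high": ranked[:10],       # 10 highest-population states
        "low": ranked[-10:],       # 10 lowest-population states
        "middle": ranked[10:-10],  # everything in between
    }
    return {name: rng.sample(members, k) for name, members in strata.items()}

# Example with 50 placeholder "states" (populations are made up)
fake_pops = {f"State{i:02d}": 100_000 * (i + 1) for i in range(50)}
sample = stratified_state_sample(fake_pops, seed=42)
```

Because the strata are disjoint and five states are drawn from each, the result is the 15-entity stratified sample the study describes.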
Recommendations were identified through a comprehensive Google search of entity Web sites, announced trainings/webinars through listservs, Facebook links from psychological assessment groups, and email responses directly from the entities. Information was pulled only from sources that were approved agents for disseminating information on behalf of the entity involved. Direct quotes related to adapted tele-assessment were pulled from all the following types of sources: emails, peer-reviewed and non-peer-reviewed journal articles, documents published online, official organizational webpages, blog and news posts, and toolkits specifically dedicated to the dissemination of COVID-19 resources and information. All data were collected between May 1 and July 1 of 2020. Table 1 provides information on the type of information evaluated as well as how the information was disseminated. Categorical determination analysis was conducted by a team of four individuals. One of the team members is a professor in the field of school psychology. Two of the others are psychology doctoral students who had completed a minimum of six semester hours of coursework on assessment administration and interpretation. The final rater is a graduate of an Ed.S. degree program in Clinical Mental Health Counseling who previously completed a minimum of three semester hours of coursework and 12 semester hours of internship on assessment administration and interpretation. A first round of data analysis was completed by the team. In this first-round analysis, all four team members examined a subset of about 20 quotes on their own and assigned a descriptor/label to each. These descriptors/labels were compared thematically. Five categories were identified from these themes in the first-round analysis. Three of these five categories remained unchanged for the final analysis: (1) "do not use," (2) "use with no concerns/restrictions," and (3) "no advice given."
In the first-round analysis, two additional categories, "mild caution" and "moderate caution," were initially identified. Due to a lack of interrater agreement, these two categories were condensed into a single "caution" category. Raters then developed an agreed-upon set of rules/definitions for each of the four categories prior to the final analysis. The final four categories were: (1) do not use, (2) use with no concerns/restrictions, (3) use with caution, and (4) no advice given. The rules/definitions for each of the categories are as follows. For the "no advice given" category, the entity may or may not have provided guidance on telehealth, but it was included in this category if the advice was not specific to direct, psychology-based, adapted tele-assessment. For the "no concerns/no restrictions" category, the entity must have provided guidance on adapted tele-assessment with no caveats as to how the tests should be used. For the "do not use" category, the entity explicitly stated not to use this method of assessment. Example 1: "If an evaluation of a student with a disability requires a face-to-face assessment or observation, the evaluation would need to be delayed until school reopens." In this example, the recommendation was categorized as do not use. Example 2 comes from the Web site position paper for the APA (Wright et al., 2020, para 2). In this situation, the recommendation was categorized as caution: "However, the situation is more challenging with assessment services that have standardized administration procedures that require in-person contact. In considering these challenges, some psychologists may choose to pause their psychological assessment services during this time; however, there are others who do time-sensitive, high-need, and/or high-stakes assessments that really need to continue. Most current and emerging telehealth guidelines largely focus on psychotherapy, and as such, tele-assessment guidance is necessary."
Each statement was read aloud to the group in a Zoom meeting. Members submitted their votes to the professor after she had recorded her own vote. Votes were submitted through the Zoom chat feature with permission set for only the professor to view the responses. For the four categories, first-round, unanimous interrater agreement was 90% (46 out of 51 cases). In the remaining five cases, only one or two members disagreed; when this happened, the final decision was based on the majority opinion. What recommendations were provided? To address the first research question, findings were tabulated in a graphical manner for easy reference (Tables 1 and 2). Was there a difference in recommendations across entities? To address the second research question, a chi-square analysis was attempted with entity type as the grouping variable: professional organizations, government agencies, and test publishers. The categorical variable included: (1) do not use, (2) use with no concerns/restrictions, and (3) use with caution. However, to run a chi-square analysis, two assumptions must first be met (McHugh, 2013): (1) a frequency of at least one response per cell and (2) 80% or more of the cells should have a frequency of at least five. The first chi-square assumption was violated due to issues with the publisher and government agency grouping variables. None of the test publishers indicated "do not use" in their recommendations, creating the first null set. In addition, none of the government agency entities indicated that they had "no concerns," resulting in a second null set. The second assumption was also violated in that 55.6% of the cells had counts of less than five. Given the nature of these recommendations, it is possible that collecting additional data would continue to result in at least one null set. So, instead of running the chi-square analysis, a graphical representation of the data was provided as a cross-tabulation in Table 2.
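The two preconditions cited from McHugh (2013) can be expressed as a small check. The following is a minimal sketch, not the authors' code, and the example table uses made-up counts rather than the study's actual cross-tabulation; note that these rules are often stated for expected frequencies, whereas the article applies them to the observed counts.

```python
# Minimal sketch of the two chi-square preconditions from the text
# (McHugh, 2013): (1) every cell has a frequency of at least 1, and
# (2) at least 80% of cells have a frequency of at least 5.

def chi_square_assumptions_ok(table, min_all=1, min_most=5, most_share=0.8):
    cells = [c for row in table for c in row]
    if any(c < min_all for c in cells):   # a null set (zero cell) violates assumption 1
        return False
    big_enough = sum(c >= min_most for c in cells)
    return big_enough / len(cells) >= most_share

# A 3x3 table containing null sets, like the situation described in the
# text, fails immediately (counts below are made up for illustration).
ok = chi_square_assumptions_ok([[5, 1, 7], [4, 0, 3], [0, 2, 3]])  # False
```

When either check fails, as it did here, the sensible fallback is exactly what the authors chose: report the cross-tabulation descriptively instead of a significance test.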
Percentages were calculated from all entities that provided advice and reported in the results section. How were the recommendations disseminated? Tables 3 and 4 provide information about the manner in which the data were provided. From this, percentages were calculated and presented in the results section. Note 1. Journal article = published, peer-reviewed article; guidelines/recommendations = specific advice given to be used by practitioners and parents; video recordings = webinars, conference presentations, and other recorded video or audio; blog/news = blog and news posts; facts = fact sheets and webpages; FAQ = frequently asked questions; position paper = position of entity regarding tele-assessment; statement = statement by entity on tele-assessment or call to action on tele-assessment; policy/regulation/ethical code = legal and ethical guidance on tele-assessment. Note 2. Email = email communications with stakeholders; peer-reviewed journal = published article that was peer reviewed; publication = article published by an entity that was not in a peer-reviewed journal; document = working documents, PDFs, Word documents, etc. published online; webpage = webpages, Web sites, and blogs; toolkits = webpages specifically dedicated to the dissemination of COVID-19 resources and information. Tables 1 and 2 provide information on the types of recommendations that were provided. About half (53.8%) of professional organizations suggested caution when using adapted tele-assessments, whereas over a third (38.5%) recommended that these types of assessments not be used at all. Over half (57.1%) of government agencies recommended that practitioners not use adapted tele-assessments, and 42.9% recommended that they be used with caution. The majority (60%) of test publishers suggested caution when using their instruments, with the remaining 40% indicating no concerns. Test publishers never advised against the use of their products.
Government agencies never advised the use of adapted tele-assessment without concerns. Note. Total number of entities = 51. Given that some entities provided recommendations through multiple methods, the percentages will not equal 100%. For professional organizations, 44.4% provided information on their Web sites, compared to 39.3% of government agencies and 80% of test publishers. Tables 3 and 4 provide more specific information on other methods. At this time, no formal ethical codes exist specific to the use of adapted tele-assessment. The American Psychological Association's (APA) (2017) Ethical Principles of Psychologists and Code of Conduct, Standard 9.02, provides some guidance related to adapted assessments in general. It emphasizes that any adapted instrument should be used in a manner supported by research-based evidence and adds the need to disclose any issues related to score reliability, test validity, or other potential limitations. In 2014, Luxton, Pruitt, and Osenbach published "Best Practices for Remote Psychological Assessment via Telehealth Technologies." In this article, the authors mention the need to maintain standardization when adapting existing measures for use in tele-assessment, recommending that a different assessment method be chosen if standardization cannot be maintained. They specifically list in-person Wechsler tests as poor candidates for tele-assessment adaptation. Teleconferencing options have changed since this publication, as have the reasons for the need for adapted tele-assessment, so practitioners should depend on more current recommendations for how to move forward. To fill this gap, about four-fifths of the entities evaluated in this study offered some official statement on adapted tele-assessment; all of the test publishers had an official statement. In general, these recommendations fell into three categories: "do not use," "use with caution," or "use with no restrictions."
Unfortunately, the recommendations were inconsistent across and within entity type (i.e., professional organizations, governmental agencies, and test publishers). Government agencies were unanimous in their recommendation that adapted tele-assessment be conducted with some caution or not at all. Test publishers were unanimous in their recommendation that adapted tele-assessment with their instruments be conducted (either with or without caution). One hypothesis for these findings is that use decisions are, in some part, made based on the financial stake held by each type of entity (Vitell, Singhapakdi, & Thomas, 2001) . For example, test publishers are likely to lose money if no online equivalent to face-to-face testing is available for their products. Therefore, it would make sense that they would recommend adapting instruments for tele-assessment. Second to test publishers in terms of potential financial loss are assessment practitioners. If practitioners cannot provide assessments, they have the potential to lose some or all of their incomes. Possible financial loss may apply more to private practitioners; however, given that the only national, legal requirement for districts to hire school psychologists over other mental health practitioners involves testing (Individuals with Disabilities Education Improvement Act [IDEA], 2004) , this could potentially apply to school-based practitioners as well. Although individual practitioners were not evaluated in this study, the professional organizations that represent these practitioners were included. These organizations must balance the financial/practical needs of their members with the ethical needs of their clients. Therefore, it would make sense that they would recommend using adapted tele-assessments but with caution. 
Finally, governmental agencies may have more to lose from potential lawsuits resulting from any deviation from established policy and/or regulatory procedures, so it makes sense that they would advise against deviations from standard practice, such as adapted tele-assessment. However, without specific data regarding the rationale behind each recommendation, the reasons for these differences cannot be determined with certainty. Another, more troubling finding from the evaluation of these recommendations is that there appears to be no systematic method of disseminating recommendations to professionals in a time of crisis. Table 3 shows that although many of the proffered recommendations were specific to the pandemic, many were recycled from previous, more generic policy statements. In addition, often there was no single source that individuals could use to find what they needed. Information came from many different sources, such as Web sites, congressional reports, emails, video recordings, blogs, journal articles, and stored documents. This is problematic, especially when the same entity may offer contradictory recommendations within its own publicly available materials (e.g., Alaska DEED). Given this, entities may need to reevaluate the information dissemination portions of their crisis management plans. If such a plan does not exist, then the current writers suggest that one be developed. In 2017, NASP described tele-assessment as "the least explored area of service in telehealth" (p. 7). This still holds true in 2020. There is no current research on how many practitioners are using tele-assessment methods, nor is there any research into how practitioners and/or clients view adapted tele-assessment techniques. It is clear that additional study is needed while social distancing restrictions are present. In addition, if practitioners continue to use adapted tele-assessment methods post-COVID-19 closures, further equivalency studies are a necessity.
The field of tele-assessment is relatively new (Brearly et al., 2017), but it does pre-date COVID-19. Barak was evaluating readily available tele-assessment tools as early as 1999, because psychologists have long been looking for ways to reach individuals who live a considerable distance from generic and specialized services. Brearly et al. (2017) provided a meta-analysis on methods of providing neuropsychological tele-assessments. Cobb and Sharkey (2007) analyzed different technology modalities to assess individuals with complex motor disabilities. McCreery et al. (2019) investigated game-based technology as a direct method of assessing behaviors that had previously been tested only indirectly. Finally, the possible use of computers as translators (Karpińska, 2017) or as mediation devices for administering translated tests is an enticing concept (Farmer et al., 2020b). Such studies must examine multiple aspects of equivalency, including: (a) inter-version correlation, (b) mean score differences, (c) score distribution equivalences (American Educational Research Association [AERA] et al., 2014), (d) sample demographic and setting equivalencies (Grosch et al., 2011; Henke, 2020; Hodge et al., 2019; Krach et al., 2020), and (e) a sufficiently large sample size (Cohen, 1988; Farmer et al., 2020a). Currently, no published studies adequately meet these criteria. In addition to psychometric equivalency, consequential equivalency/validity should also be evaluated. Consequential equivalency would examine whether score variability between administration formats contributes to different diagnostic conclusions (Matuszak & Piasecki, 2012; Ruskin et al., 1998). Unfortunately, there was not enough statistical power available to accurately run a chi-square analysis with this sample, so no statistical significance results are available.
However, given the null sets provided by both test publishers and government agencies, it is possible that even a larger sample of groups would still result in violations of the chi-square assumptions. Also problematic is the in-flux nature of adapted tele-assessment recommendations. The current analysis provides a snapshot of how entities responded during the early months of the crisis. It is possible, even probable, that the recommendations evaluated in this study have already been replaced by new guidelines as of the print date of this article. Problems associated with changing recommendations are compounded by the fact that the field of psychology does not have a singular distribution method for crisis information. One final concern about this study is that the recommendations here are specifically focused on assessments traditionally conducted by clinical and school psychologists. It is possible that these findings may differ for other telehealth areas (e.g., counseling and consultation) or other tele-assessment methods (e.g., career counseling and neuropsychological assessment). Given the perceived rarity of the need for tele-assessment services, any push for adapted tele-assessment research was minimal prior to COVID-19. Tele-assessment has been a convenient way to provide assessments at reduced cost to individuals who may not have traditional access to psychological diagnostic options (National Association of School Psychologists, 2017; Luxton, Pruitt, & Osenbach, 2014), including rural or homebound clients (Carlbring et al., 2007). Tele-assessment has also been an expedient way to provide services by specialized practitioners (e.g., bilingual psychologists or autism specialists) who are located some distance from clients. These were considered useful but inessential tools for practitioners until the need exploded when most direct assessments could no longer be safely administered in a face-to-face manner due to the global pandemic.
The ensuing research gap has resulted in the widespread use of adapted tele-assessment whose lack of standardization potentially invalidates any results from such assessments. At the time of this writing, COVID-19 is still considered a public crisis. Decisions by school districts and individual psychologists regarding adapted tele-assessment are ongoing and ever-changing. It is no easy task for practitioners to make decisions regarding tele-assessment, given that there is no singular directive providing a clear map for adapted tele-assessment practice. This lack of clear direction puts both practitioners and the general public at risk. When practitioners decide not to use adapted tele-assessments, clients who need immediate services may not be identified in a timely manner (Individuals with Disabilities Education Improvement Act [IDEA], 2004). On the other hand, practitioners who follow guidance to conduct adapted tele-assessments with no restrictions risk making inaccurate client diagnoses based on faulty assessment data (National Association of School Psychologists, 2017, 2020). These opposing needs apply undue stress to the mental health and special education systems. In the end, individual practitioners will have to follow their own understanding of professional ethics and best practice, as well as any legal mandates of their governing bodies, while the field awaits future and more consistent recommendations. Given that entities provided different recommendations across different formats, the main findings from this study are mostly historical in nature. These findings indicate the need for psychologists to speak with a singular voice, using an easily identifiable/discoverable platform, when crises occur. It is clear that our field was not prepared for this type of crisis; we had no plan in place for what to do. More importantly, we did not have a plan in place to disseminate urgent/emergency information.
The fields of psychology and school psychology need to expand their emergency plans to include: (1) unified standards for making decisions regarding adapted tele-assessments and (2) a unified dissemination plan for any crisis-generated recommendations going forward. Hopefully, this study will provide information to future leaders for use when developing such a plan.

References
- Guidance for special education personnel
- Special education frequently asked questions
- COVID-19 considerations for special education
- Alaska special education COVID-19 district guidance
- A practical guide to providing telepsychology with minimal risks
- National Register of Health Service Psychologists
- American Psychiatric Association
- Best practices in videoconferencing-based telemental health
- Ethical principles of psychologists and code of conduct
- COVID-19: What is APHA doing?
- RE: Special education evaluations & virtual assessment
- Psychological applications on the internet: A discipline on the threshold of a new millennium
- Psychiatry's new manual (DSM-5): Ethical and conceptual dimensions
- Neuropsychological test administration by videoconference: A systematic review and meta-analysis
- Position paper: Mandated special education assessment during the COVID-19 shutdown
- Internet vs. paper and pencil administration of questionnaires commonly used in panic/agoraphobia research
- The initial digital adaptation of the WAIS-IV
- Medicare telemedicine health care provider fact sheet: Medicare coverage and payment of virtual services
- A decade of research and development in disability, virtual reality and associated technologies: Review of ICDVRAT 1996-2006
- Statistical power analysis for the behavioral sciences
- 50-state emergency teletherapy practice rules survey for counselors, MFTs, psychologists, and clinical social workers
- Equivalence of Q-interactive administered cognitive tasks: CVLT®-II and selected D-KEFS® subtests (Q-interactive technical report 3)
- Equivalence of Q-interactive administered cognitive tasks: WAIS-IV (Q-interactive technical report 1)
- Equivalence of Q-interactive and paper scoring of academic tasks: Selected WIAT®-III subtests (Q-interactive technical report 5)
- Equivalence of Q-interactive and paper administrations of cognitive tasks: Selected NEPSY®-II and CMS subtests (Q-interactive technical report 4)
- Equivalence of Q-interactive and paper administration of WMS®-IV cognitive tasks (Q-interactive technical report 6)
- Equivalence of Q-interactive® and paper administrations of language tasks: Selected CELF®-5 tests
- Equivalence of Q-interactive® and paper administrations of cognitive tasks
- Telehealth and telemedicine: Frequently asked questions (No. R46239)
- Conducting psychoeducational assessments during the COVID-19 crisis: The danger of good intentions
- Teleassessment with children and adolescents during the coronavirus (COVID-19) pandemic and beyond: Practice and policy implications
- Initial practice recommendations for teleneuropsychology
- Telepsychiatry practice guidelines
- Guidelines for administration via telepractice
- Need for and steps toward a clinical guideline for the telemental healthcare of children and adolescents
- Agreement between telehealth and face-to-face assessment of intellectual ability in children with specific learning disorder
- COVID-19 virtual supports/teletherapy quick guide
- Joint Task Force for the Development of Telepsychology Guidelines for Psychologists
- An exploration of supervision delivered via clinical video telehealth (CVT). Training and Education in Professional Psychology
- Computer aided translation: possibilities, limitations and changes in the field of professional translation
- A handbook of test construction (psychology revivals): Introduction to psychometric design
- Independent evaluation of Q-Interactive: A paper equivalency comparison using the PPVT-4 with preschoolers
- Computer-based administration, scoring, and report writing
- Timelines and documentation during extended school closures for students with disabilities
- COVID-19 telemedicine/telehealth facilitation by licensed mental health practitioners
- Position statement: School psychological service delivery during COVID-19 pandemic [Google Document]
- Louisiana telepsychology guidelines
- Outbreak of COVID-19: an urgent need for good science to silence our fears?
- Best practices for remote psychological assessment via telehealth technologies
- Can video games be used as a stealth assessment of aggression?
- The chi-square test of independence
- COVID-19 continuity of learning for Maryland Public Schools: Special education FAQ
- Inter-rater reliability in psychiatric diagnosis: Collateral data improves the reliability of diagnoses
- Updates and information in response to COVID-19 (Coronavirus)
- RE: Virtual service delivery during COVID-19 closures
- Re-opening Montana's schools 2020
- COVID-19 special education information
- Technical manual for centers. Colorado Springs, CO: Gibson Institute of Cognitive Research
- National Association of School Psychologists
- Q and A for providing special education and early intervention during coronavirus school closure
- Partnering with you: Using telehealth for psychological assessment
Special education and preschool early intervention evaluations & virtual assessment guidance How psychological telehealth can alleviate society's mental health burden: A literature review Public School of North Carolina Reliability and acceptability of psychiatric diagnosis via telecommunication and audiovisual technology A primer on statistics and psychometrics Considerations and concerns of remote assessment South Carolina Association of School Psychologists South Carolina Board of Examiners in Psychology School psychologist evaluation checklist during COVID-19 closures School closures and special education: Guidance on services to students with disabilities Fact sheet: Addressing the risk of COVID-19 in schools while protecting the civil rights of students United States Department of Veteran Affairs VA expands telehealth by allowing health care providers to treat patients across state lines Medicare and coronavirus: What you need to know Special education and student services (SESS) frequently asked questions Consumer ethics: An application and empirical testing of the Hunt-Vitell theory of ethics Tele-Assessment: What you Need to know WSPA response to extended school closure and evaluation timelines due to COVID-19 Telehealth transformation: COVID-19 and the rise of virtual care Statement on tele-assessment Equivalence of remote, online administration and traditional, face-to-face administration of the reynolds intellectual assessment scales-second edition (Online white paper) Equivalence of remote, online administration and traditional, face-to-face administration of Woodcock-Johnson IV cognitive and achievement tests Equivalence of remote, digital administration and traditional, in-person administration of the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V) Guidance on psychological teleassessment during the COVID-19 crisis Guidance regarding practice of telepsychology by psychologists (2020) The authors would like to acknowledge the work 
done by the Technology Intervention and Assessment in Schools (TIAS) at Florida State University (from redacted) as well as the work of the NASP: Distance Learning in Graduate Education subgroup.The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. The authors received no financial support for the research, authorship, and/or publication of this article. S. Kathleen Krach  https://orcid.org/0000-0002-6853-379X