key: cord-0253173-e65u9p0r
authors: Yeatman, Jason D.; Tang, Kenny An; Donnelly, Patrick M.; Yablonski, Maya; Ramamurthy, Mahalakshmi; Karipidis, Iliana I.; Caffarra, Sendy; Takada, Megumi E.; Ben-Shachar, Michal; Domingue, Benjamin W.
title: Measuring reading ability in the web-browser with a lexical decision task
date: 2020-08-10
journal: bioRxiv
DOI: 10.1101/2020.07.30.229658
sha: 51117430ba15bf59cb9ed93e938df24a26a00db7
doc_id: 253173
cord_uid: e65u9p0r

An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced choice, time-limited lexical decision task (LDT), self-delivered through the web-browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test administered in the lab (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2-3 minutes) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.

Why is learning to read almost effortless for some children and a continuous struggle for others? Understanding the mechanisms that underlie individual differences in reading abilities, and the factors that contribute to reading disabilities, is an important scientific challenge with far-reaching practical implications. There is near consensus on the importance of foundational skills like phonological awareness for reading development (1-3). However, there is a myriad of other sensory, cognitive and linguistic abilities whose link to reading is fiercely debated (4-11). For example, hundreds of papers, published over the course of four decades, have debated the causal link between deficits in the magnocellular visual pathway and developmental dyslexia (6, 12-18). Similarly contentious is the link between rapid auditory processing and developmental dyslexia (19-23). More recently, the role of various general learning mechanisms (e.g., statistical learning, sensory adaptation) in reading development has become a topic of great interest (24-28). One reason these debates persist is that most findings reflect data from relatively small samples collected in a single laboratory and, therefore, findings are not necessarily representative of the population (29-31). There are two major factors that limit the feasibility of collecting data in large, diverse, representative samples. First, experiments measuring specific sensory and cognitive processes typically depend on software installed on computers in a laboratory.
Second, standardized measures of reading ability typically require a trained test administrator to administer the test to each research participant. These two factors (a) make it time-consuming and costly to recruit and test large samples, (b) prohibit including research participants who live more than a short drive from a university, (c) bias samples towards university communities and (d) create major barriers for the inclusion of under-represented groups. Over the past decade, advances in software packages for running experiments through the web-browser have catalyzed many areas of psychological research to pursue larger, more diverse and representative samples. For example, software packages such as jsPsych (32, 33), PsychoPy (34), Gorilla (35) and others make it feasible to present stimuli and record behavioral responses in the web-browser with temporal precision that rivals many popular software platforms for collecting data in the laboratory (36). However, online experiments have yet to make major headway in the developmental and educational spheres, specifically in reading and dyslexia research. Standardized testing of reading ability still depends on individually administered tests. Thus, even though a researcher might be able to accurately measure processing speed or visual motion perception in thousands of subjects through the web-browser, they would still need to individually administer standardized reading assessments to each participant.

Our goal here was to develop an accurate, reliable, expedient and automated measure of single word reading ability that could be delivered through the web-browser. Specifically, we sought to design an experiment to approximate scores on the Woodcock-Johnson Letter Word Identification (WJ-Word-ID) test (37). The WJ-Word-ID is one of the most widely used standardized measures of reading ability in research (and practice). Like other test batteries that contain tests of single word reading ability (e.g., the NIH Toolbox (38), Wide Range Achievement Test (39), Wechsler Individual Achievement Test (40)), the WJ-Word-ID requires that an experimenter or clinician present each research participant with single printed words to read out loud. Participant responses are manually scored by the test administrator as correct or incorrect based on accepted rules of pronunciation. The words are organized in increasing difficulty such that the number of words that a participant reads correctly is also a measure of the difficulty of words that the participant can read. Due to excellent psychometric properties, and widely accepted validity, the Woodcock-Johnson and a compendium of similar tests are at the foundation of thousands of scientific studies of reading development.

In considering the myriad of tasks that might produce a suitable browser-based measure of reading ability, the lexical decision task (LDT) is a good candidate for both theoretical and practical reasons. The LDT has a rich history in the cognitive science literature as a means to probe the cognitive processes underlying visual word recognition. It is broadly assumed that many of the same underlying cognitive processes are at play when participants make a decision during a two-alternative forced choice (2AFC) lexical decision as during other word recognition tasks (e.g., naming (41, 42)).
Unlike naming or reading aloud, the LDT can be: (a) scored automatically, without reliance on (still imperfect) speech recognition algorithms, (b) completed in group (or public) settings such as a classroom and (c) administered quickly, as each response takes, at most, 1-2 seconds. There is empirical support for LDT performance being linked to reading ability. For example, Martens and de Jong demonstrated that response times (RT) on the LDT differed between children with dyslexia and children with typical reading skills (43). Specifically, they found larger word-length and lexicality effects in children with dyslexia compared to children with typical reading skills. These effects seem to track reading ability, as opposed to being a specific marker of dyslexia: reading-matched controls, i.e., younger children matched to the children with dyslexia in terms of raw reading scores, showed similar word-length and lexicality effects as found in the older children with dyslexia. Similar to our goal here, Katz and colleagues examined the LDT, in combination with a naming task, as a predictor of standardized reading measures (44). They found that average response times on the LDT (when combined with the naming task) correlated with a variety of reading ability composite scores, including the Woodcock-Johnson Basic Reading Skills (44). However, their study primarily looked at young adults (median age of 21.5 years) and focused on RT as the independent variable, since most participants were at ceiling in terms of accuracy. It is not clear how well these effects would generalize to children learning to read. Moreover, effect sizes were relatively small, suggesting that RT on the LDT may not be the best predictor of reading ability, or that the range of stimuli was not wide enough.

Here we sought to develop an implementation of the LDT in the web-browser, along with a suitable list of words and pseudowords, to efficiently and accurately measure reading ability in a broader age group, spanning first grade through young adulthood. The present work (a) reports the results from a first study in which we developed, validated and optimized a browser-based, LDT-based measure of reading ability (Study 1) and (b) defines the plan for a second, larger-scale norming study (Study 2, pre-registration: https://osf.io/ktu92/). In Study 1 we employed a long list of words and pseudowords, spanning a broad range of lexical and orthographic properties. The goal of Study 1 was to, first, assess the feasibility of using a browser-based LDT to estimate reading ability and, second, leverage the item-level data in combination with item response theory (IRT) to construct a collection of short-form tests with optimal psychometric properties. Study 2 is a pre-registered data collection plan to norm the optimized, browser-based assessment in a large and diverse sample of elementary school children.

To examine whether response times (RT) recorded in the online LDT replicated classic findings in the literature, we fit linear mixed-effects models to log-transformed RT data for all correct trials. Each model included random intercepts for each word and subject. We examined five different models containing fixed effects of (a) lexical status (real word vs. pseudoword), (b) log-transformed lexical frequency (real words only), (c) log-transformed bigram frequency (pseudowords only), (d) word length and (e) the interaction of word length and participant reading ability as measured with the WJ-Word-ID.
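To make this model specification concrete, the following R sketch shows how a mixed-effects model of this form can be fit with the lme4 package named in the Methods. The data frame and column names (log_rt, lexicality, length, wj_word_id, subject, word) are hypothetical; this is an illustration of the modeling approach, not the authors' exact analysis script.

```r
# Minimal sketch: mixed-effects models of log RT with crossed random
# intercepts for subject and word, as described in the text.
# Column names (log_rt, lexicality, length, wj_word_id, ...) are hypothetical.
library(lme4)

trials <- read.csv("ldt_trials.csv")        # one row per trial
trials <- subset(trials, correct == 1)      # correct trials only

# (a) lexical status (real word vs. pseudoword)
m_lex <- lmer(log_rt ~ lexicality + (1 | subject) + (1 | word), data = trials)

# (d) word length, and (e) its interaction with reading ability (WJ-Word-ID)
m_len <- lmer(log_rt ~ length + (1 | subject) + (1 | word), data = trials)
m_int <- lmer(log_rt ~ length * wj_word_id + (1 | subject) + (1 | word),
              data = trials)

summary(m_int)   # fixed-effect estimates, SEs and t values
```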
As expected, we found: (a) RT was longer for pseudowords than for real words (β = 0.11, SE = 0.00648, t = -17.245) (similar to (45)); (b) responses to real words were faster with increasing lexical frequency (β = -0.021, SE = 0.0013, t = -15.74) (similar to (45-47)); (c) responses to pseudowords were slower with increasing bigram frequency (β = 0.035, SE = 0.0057, t = 6.13) (similar to (45, 48, 49)); (d) responses to words and pseudowords were slower with increasing stimulus length (β = 0.032, SE = 0.0038, 3.3% increase per letter, t = 8.5) (similar to (45)); and (e) there was a significant length by reading ability interaction (β = -0.008, SE = 0.00195, t = 4.05), indicating that the effect of word length on reading speed diminishes as reading ability improves (similar to (43, 45)).

Stimuli for the LDT were chosen to span a broad range of lexical and orthographic properties (Supplementary Figure 1), with the aim of providing a sufficiently large difficulty range for detecting associations between accuracy on the LDT and reading ability measured with the WJ-Word-ID. Indeed, overall accuracy, calculated as the percent of correct responses on the 500 LDT trials, was highly correlated with WJ-Word-ID (Figure 1, left panel; r = 0.91, p < 0.00001, disattenuated r = 0.94). Accuracy for pseudowords (250 trials) had a slightly higher correlation with WJ-Word-ID (r = 0.86, p < 0.00001) than did accuracy for real words (r = 0.74, p < 0.00001), and this difference in correlation approached significance based on Williams' test (p = 0.08; Figure 1).

In contrast, median RT across correct responses in the LDT was not correlated with WJ-Word-ID (r = -0.06, p = 0.69). No correlations were found when median RT was calculated separately for words and pseudowords (r = -0.13, r = -0.08, respectively), even though RTs followed the expected patterns in other analyses (e.g., the length effect described in the preceding paragraph). However, when the RT analysis was limited to the subset of subjects who performed above 70% correct (N = 28 with WJ-Word-ID scores), a significant negative correlation was found between RT and WJ-Word-ID (r = -0.53, p < 0.005). The correlation between WJ-Word-ID and median RT for real words (r = -0.53, p < 0.005) was similar to the correlation with median RT for pseudowords (r = -0.52, p < 0.005), and this difference was not significant according to Williams' test (p = 0.79). Thus, response accuracy is a much stronger predictor of reading ability than RT (Williams' test t = -5.72, p < 0.00005), and RT is only informative in relatively high-performing subjects.

The impact of participant characteristics on the LDT-WJ correlation was examined through mediation and moderation analyses. The relation between LDT accuracy and WJ-Word-ID raw scores was not moderated by WJ-Word-ID standard scores, age or number of months since testing (t = 0.71, p = 0.48; t = 0.12, p = 0.91; t = 1.24, p = 0.22). These results were further confirmed by a mediation analysis (CI [-0.001, 0.00], p = 0.69; CI [-0.001, 0.00], p = 0.87; CI [-0.0002, 0.00], p = 0.49). Together, these analyses suggest that LDT accuracy is a good approximation of standard reading measures for children with reading profiles ranging from severely impaired to exceptional, and ages between 6 and 18 years. They also suggest that the LDT accurately predicts reading ability measured within the past 12 months. Performance across different measures of reading ability is typically highly correlated.
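As a point of reference for these statistics, a disattenuated correlation corrects the observed correlation for the imperfect reliability of the two measures, and Williams' test compares two dependent correlations that share a variable. The R sketch below illustrates both computations. It assumes the psych package (the manuscript does not name the software used for these tests), and the numeric values are illustrative placeholders rather than an attempt to reproduce the reported results.

```r
# Disattenuation: r_true = r_observed / sqrt(rel_x * rel_y)
# Values below are illustrative placeholders.
library(psych)

r_obs <- 0.91    # observed correlation between two measures
rel_x <- 0.97    # reliability of measure x
rel_y <- 0.94    # reliability of measure y
r_disattenuated <- r_obs / sqrt(rel_x * rel_y)

# Williams' test for two dependent correlations that share one variable
# (e.g., WJ-Word-ID with pseudoword accuracy vs. with real-word accuracy).
# r12 and r13 are the two correlations with the shared variable; r23 is the
# correlation between the other two variables (placeholder value here).
r.test(n = 118, r12 = 0.86, r13 = 0.74, r23 = 0.65)
```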
Indeed, different tests are frequently used interchangeably in different studies. For example, WJ-Word-ID scores and TOWRE-SWE scores are typically highly correlated (r = 0.93 in our sample), even though the former is an untimed test while the latter is a timed test that combines real word reading accuracy and speed. In fact, it is common in many research studies to use a threshold on either test to group subjects into dyslexic versus control groups (21, 50-52). The average Pearson correlation between the timed (TOWRE), untimed (WJ), real word (Word-ID and SWE) and pseudoword (Word Attack and PDE) standardized reading measures was r = 0.83 (range: r = 0.72 to r = 0.93; Figure 1, right panel). Given the correlation between LDT accuracy and WJ-Word-ID, it is not surprising that LDT accuracy was also significantly correlated with all the standardized reading measures (Figure 1, right panel). The correlations between LDT accuracy and Vocabulary (r = 0.64) and Matrix Reasoning (r = 0.56) were also nearly identical to the correlations between other reading measures and verbal abilities (r = 0.61 to 0.73) and reasoning abilities (r = 0.34 to 0.60, Figure 1, right panel). Thus, LDT response accuracy seems a suitable measure of reading ability, as it behaves in a similar manner to the other established, standardized reading measures.

Next we turn to the challenge of constructing an LDT with suitable psychometric properties for applicability in a research setting. The word list used in Study 1 was constructed to broadly sample different types of words and pseudowords. In doing so, our goal was to use item response theory (IRT) to select the optimal subset of stimuli that measure reading ability reliably and efficiently across the full continuum of reading abilities. Our approach was to (a) remove items that were not correlated with overall test performance and WJ-Word-ID performance, (b) remove items that were not well fit by the Rasch model and (c) examine item discrimination based on the two-parameter logistic model (2PL). We then constructed an optimized set of three short stimulus lists, matched in terms of difficulty.

Step 1: Remove words that are not predictive of overall performance. As a first step, we computed the correlations between item responses, LDT accuracy calculated on the full 500-item test, and WJ-Word-ID scores. Performance on some stimuli, like the pseudoword "insows", was highly correlated with overall test performance (r = 0.61, p < 0.00001) and WJ-Word-ID score (r = 0.60, p = 0.000035). This means that a participant's response to that word was highly informative of their reading ability. Other stimuli, like the pseudoword "timelly" and the real words "napery" and "kind", provided no information about overall test performance, as indicated by correlations near (or below) zero. We removed 71 items that did not surpass a lenient threshold of r = 0.1. Figure 2 shows words that were rejected (red) and retained (blue) based on this criterion.

Step 2: Remove words that are not well fit by the Rasch model. The Rasch model (one-parameter logistic with a guess rate fixed at 0.5) (53) was fit to the response data for the 429 items that remained after Step 1, for all 118 subjects, using the mirt package in R (54). Items were removed if the infit or outfit statistics were outside the range of 0.6 to 1.4, a relatively lenient criterion intended to remove items with unpredictable responses (55). The model was iteratively refitted until all items met this criterion.
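A minimal R sketch of the screening logic in Steps 1 and 2 is shown below. It assumes a subjects-by-items matrix of 0/1 responses (hypothetical object name resp) and uses the mirt package named in the text; the item-total screening shown here stands in for the fuller screening against both overall performance and WJ-Word-ID, and the exact estimation options the authors used are not reproduced, so treat this as an illustration rather than the study's analysis code.

```r
# Step 1 sketch: drop items whose item-total correlation falls below 0.1.
# 'resp' is a subjects x items matrix of 0/1 responses (hypothetical name).
total_score  <- rowMeans(resp, na.rm = TRUE)
item_total_r <- apply(resp, 2, function(item)
  cor(item, total_score, use = "pairwise.complete.obs"))
resp_step1 <- resp[, item_total_r >= 0.1]

# Step 2 sketch: Rasch model with guessing fixed at 0.5 (2AFC task),
# iteratively removing items with infit/outfit outside 0.6-1.4.
library(mirt)
keep <- colnames(resp_step1)
repeat {
  mod  <- mirt(resp_step1[, keep], model = 1, itemtype = "Rasch",
               guess = 0.5, verbose = FALSE)
  fits <- itemfit(mod, fit_stats = "infit")   # infit/outfit per item
  bad  <- fits$item[fits$infit < 0.6 | fits$infit > 1.4 |
                    fits$outfit < 0.6 | fits$outfit > 1.4]
  if (length(bad) == 0) break
  keep <- setdiff(keep, bad)
}
resp_step2 <- resp_step1[, keep]
```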
The process resulted in the removal of 129 items with response data that did not fit well with the predictions of the Rasch model (Figure 2, right panel, red items).

Step 3: Examine item discrimination with the two-parameter logistic model. Next we fit the two-parameter logistic (2PL) model (guess rate fixed at 0.5) to the response data for the 300 items that were retained after Step 2. The 2PL model fits item response functions with a difficulty parameter (b) as well as a slope parameter (a); we sought to remove any remaining items that were inefficient at discriminating different levels of reading ability based on the slope of the item response function. The right panel of Figure 2 shows the item difficulty and discrimination parameters from the 2PL model. We removed 1 item with a low slope (a < 0.7). Of the remaining 299 items, 114 were real words and 185 were pseudowords.

To better understand the lexical and orthographic properties of the items that were selected for the final test, we compared the retained versus rejected items based on word length and the log frequency of their orthographic neighbors (see Supplementary Figure 2). We found a significant effect of word length (Kruskal-Wallis H(3) = 26.6, p < 0.0001): retained real words were significantly longer than rejected real words (p < 0.0001) and retained pseudowords (p = 0.050). We expected that the log frequency of the orthographic neighbors would align with the word length effect reported above (i.e., the final set of retained words would have less frequent orthographic neighbors compared to the rejected words). Our analysis confirmed this expectation (n = 359, Kruskal-Wallis H(3) = 14.5, p = 0.0023). Indeed, pairwise comparisons confirmed that retained real words had significantly less frequent orthographic neighbors compared to the rejected real words (p < 0.0004). Therefore, real words, but not pseudowords, that are longer and have less frequent orthographic neighbors were retained as a result of the IRT analysis, indicating their utility for measuring reading ability.

Finally, we sought to create an efficient and reliable test composed of three short word lists with the following properties: (a) quick to administer through the web-browser, (b) balanced in number of real words and pseudowords, (c) matched in terms of difficulty and (d) optimally informative across the range of reading abilities. Based on feedback from subjects in Study 1, we surmised that a list of about 80 items was a suitable length for maintaining focus. Thus, out of the 299 items that remained after Steps 1-3, we constructed three lists with items that were equated in difficulty based on b parameters (estimated with the Rasch model). We further ensured that (a) items spanned the full difficulty range to maximize test information for the lowest and highest performing participants and (b) real words and pseudowords were matched in terms of length on each list. Since only 114 real words were retained after the optimization procedure described above, 76-item lists were generated with 38 pseudowords and 38 real words. Figure 3 shows the test information function for each of the three lists. The composite of the three word lists (234 items; scores combined across lists) was extremely reliable, with a reliability of 0.97. The composite LDT has a reliability comparable to those of the WJ and TOWRE composite indices: WJ Basic Reading Skills, average reliability = 0.95 (37), and TOWRE Total Word Reading Efficiency, average reliability = 0.96 (56).
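Under the 2PL with the guess rate fixed at 0.5, the probability of a correct response is P(correct | theta) = 0.5 + 0.5 / (1 + exp(-a(theta - b))), so items with larger slopes a separate ability levels more sharply. The R sketch below, continuing the hypothetical objects from the previous sketch, shows one way to extract a and b with mirt and to deal items into three difficulty-matched lists. The round-robin assignment is just one plausible implementation; the authors' actual procedure also balanced word type and length, which is not reproduced here.

```r
# Step 3 sketch: 2PL with guessing fixed at 0.5, then drop low-slope items.
library(mirt)
mod_2pl <- mirt(resp_step2, model = 1, itemtype = "2PL",
                guess = 0.5, verbose = FALSE)
pars     <- coef(mod_2pl, IRTpars = TRUE, simplify = TRUE)$items  # columns a, b, g, u
retained <- rownames(pars)[pars[, "a"] >= 0.7]

# Difficulty-matched lists: sort retained items by difficulty b, then deal
# them out round-robin so each of the three lists spans the full range.
# (A full procedure would also balance real/pseudoword status and length.)
b_sorted <- sort(pars[retained, "b"])
list_id  <- rep(c("A", "B", "C"), length.out = length(b_sorted))
lists    <- split(names(b_sorted), list_id)

sapply(split(b_sorted, list_id), mean)   # check that mean difficulty is matched
```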
Performance on the individual word lists was also highly reliable, with an ICC of 0.95 (Figure 3, upper right), about the same as or higher than the individual TOWRE and WJ test forms (WJ Word Attack reliability = 0.90 and WJ Letter Word Identification reliability = 0.94 (37); TOWRE Sight Word Efficiency reliability = 0.91 and TOWRE Phonemic Decoding Efficiency reliability = 0.92 (56)). Performance on each individual word list was highly correlated with WJ-Word-ID scores (Figure 3, bottom panel; List A: r = 0.88; List B: r = 0.85; List C: r = 0.87), suggesting that administering a single 80-item lexical decision task (~2-3 minutes) would be a suitable measure of reading ability for some research purposes. Even a single short list provides an accurate and reliable estimate of WJ-Word-ID ability.

Our primary goal was to evaluate the suitability of a browser-based lexical decision task as a measure of reading ability. Lexical decision is commonly used to interrogate the mechanisms of word recognition, and previous studies have shown: (a) differences in task performance in dyslexia, (b) changes in task performance over development and (c) relationships to various measures of reading ability. Thus, lexical decision was a strong candidate for an automated measure of reading ability. Results from this first study revealed that lexical decision accuracy was a very accurate predictor of reading ability as measured by the WJ-Word-ID. Moreover, reliability analysis confirmed that the lexical decision task has excellent psychometric properties. Thus, by selecting the optimal list of words to span the continuum of reading ability, it might be possible to design a quick, valid and reliable measure of reading ability that could be administered to readers of all ages through the web-browser. We used IRT to select individual words with suitable measurement properties and created three short versions of the LDT that were matched in terms of difficulty. In the future, leveraging the IRT analysis to implement the LDT as a computer adaptive test would likely lead to even more efficient and accurate measures of reading ability. Follow-up work will validate and norm the short-form LDT as a standardized measure of reading ability (pre-registration: https://osf.io/ktu92/; Methods: Study 2).

Through the process of optimizing the items to be included in the short-form test, we made four interesting observations that are worth noting (and all the data are publicly available for other researchers who may be interested in further item-level analyses). First, accuracy is a much better predictor of reading ability than RT (at least for children learning to read in English). Previous studies using RT have found statistically significant, but relatively weak, relationships between RT and reading ability (44). Indeed, this finding is replicated in our data (see the RT analyses above). RT is likely a useful measure for participants at the high end of the reading continuum and, with a larger sample, might be useful for making more fine-grained discriminations of automaticity among skilled readers. Second, pseudowords are better at discriminating different levels of reading ability than are real words (as indicated by a higher correlation with reading ability and more pseudowords being retained in the IRT analysis). It is well established that properties of the real words included in an LDT influence responses to pseudowords and vice versa (49, 57, 58).
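For readers interested in the test information functions and in the computer-adaptive direction mentioned above, the sketch below shows how item and test information can be computed from a fitted mirt model and how a simple maximum-information item-selection rule, the core of a computer adaptive test, could be built on top of it. It continues the hypothetical objects (mod_2pl, lists, resp_step2) from the earlier sketches and is a schematic illustration, not the authors' planned adaptive algorithm.

```r
# Test information across the ability range for one hypothetical list.
library(mirt)
theta_grid <- matrix(seq(-4, 4, length.out = 81))
info_listA <- testinfo(mod_2pl, Theta = theta_grid,
                       which.items = match(lists$A, colnames(resp_step2)))

# Maximum-information item selection: given a provisional ability estimate,
# administer the not-yet-used item that is most informative at that estimate.
next_item <- function(mod, theta_hat, administered) {
  n_items    <- extract.mirt(mod, "nitems")
  candidates <- setdiff(seq_len(n_items), administered)
  info <- sapply(candidates, function(i)
    iteminfo(extract.item(mod, i), Theta = matrix(theta_hat)))
  candidates[which.max(info)]
}
next_item(mod_2pl, theta_hat = 0, administered = c(3, 17))
```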
Third, we investigated the orthographic properties of the items that were selected by the IRT analysis. Retained real words were longer and had lower-frequency orthographic neighbors than removed real words, an effect that was not observed for pseudowords (see Supplementary Figure 2). Finally, including items that span a wide range of difficulty (Figures 2, 3) is critical to a lexical-decision based measure of reading ability.

Data and analysis code are available at: https://github.com/yeatmanlab/ROAR-LDT .

A visual-presentation 2AFC lexical decision task (LDT) was implemented in the PsychoPy experiment builder (34), converted to Javascript, and uploaded to Pavlovia, an online experiment platform (59). The LDT was split into five blocks, each consisting of 100 trials (50 real words, 50 pseudowords), and each block was introduced as a quest in the magical world of Lexicality. To engage a wide range of subjects, the LDT was embedded in a fun story, an animated environment, and a game that involved collecting golden coins. Instructions were narrated by a character in the game and there was a series of 10 practice trials with feedback to ensure that each participant understood the LDT. After a practice trial with an incorrect response, the character would return, explain why the response was incorrect, and remind the participant of the rules of the LDT. This ensured that even the youngest participants understood the rules of the LDT and remained engaged through the duration of the task.

Prior to the five blocks of the LDT, subjects completed a simple response time (RT) task. This task was used as a baseline measure, in case there was too much noise in the response time data (e.g., due to different devices being used). In each trial of the simple RT task, participants saw a fixation cross, then a triangle or a square flashed briefly for 350 ms, followed by another fixation cross. Participants were instructed to use the keyboard to respond, pressing "left arrow" to indicate that a triangle was presented, or "right arrow" to indicate that a square was presented. The five blocks of the LDT followed a similar format: a word or pseudoword would flash briefly for 350 ms in the center of the screen, followed by a fixation cross. Participants pressed "right" for a real word or "left" for a pseudoword. Stimuli were presented in the center of the screen and scaled to be 3.5% of the participant's vertical screen dimension. The task was gamified in order to be more engaging for the younger participants. If a participant achieved ten consecutive correct answers, a small, one-second animation would play showing their party of adventurers defeating a group of monsters, with a small "+10 gold" icon appearing on top. After each block, participants unlocked new characters and were shown the amount of gold that they had collected during the last block.

A total of 250 real words and 250 pseudowords were selected to span a large range of lexical and orthographic properties, with the goal of finding stimuli spanning a large range of difficulty. The distribution of lexical and orthographic properties of the stimuli is shown in Supplementary Figure 1. Pseudowords were matched in orthographic properties to real words using Wuggy, a pseudoword generator (60). Wuggy generates orthographically plausible pseudowords that are matched in terms of sub-syllabic segments, word length, and letter-transition frequencies.
Matched real word/pseudoword pairs were kept within the same block, such that each block contained 50 real words and 50 pseudowords roughly matched in terms of orthographic properties. Additionally, 18 real words were paired with 18 orthographically implausible pseudowords (e.g., "wndo") to create some easier items and ensure that the LDT would not have floor effects for young children and those with dyslexia. These orthographically implausible pseudowords had low trigram frequencies and can be appreciated from the bump at the lower end of the trigram frequency distribution in the lower panel of Supplementary Figure 1.

The research participant database includes children and adults who have (1) enrolled and consented (and/or assented) to being part of a research participant pool, (2) filled out extensive questionnaires on demographics, education history, attitudes towards reading and history/diagnoses of learning disabilities, and (3) been validated through a phone screening to ensure accuracy of basic demographic details entered in the database. Many of the children and adults have participated in one or more studies at Stanford University or the University of Washington and, as part of these studies, have undergone an in-person assessment session including standardized measures of reading abilities (Woodcock-Johnson IV Tests of Achievement, Test of Word Reading Efficiency-2), verbal abilities (Wechsler Abbreviated Scale of Intelligence II (WASI-II), Vocabulary subtest) and general reasoning abilities (WASI-II, Matrix Reasoning subtest). Scores on standardized tests were analyzed if the testing had occurred within 12 months of completing the LDT task (scores were adjusted based on the expected improvement since the time of testing). Participants in the database were emailed a brief description of the study and a link to a digital consent form and the online LDT task. If participants provided consent, their LDT performance was linked to their data in the research database (questionnaires and standardized tests). Participants also had the option to complete the online experiment and remain anonymous.

A total of 120 children and adults between the ages of 6.27 and 29.15 years (M = 11.61, SD = 3.67) participated in this study. These subjects spanned the full range of reading abilities as measured by Woodcock-Johnson Basic Reading Skills (WJ-BRS) standard scores (M = 97.7, SD = 18.1, min = 57, max = 142) and TOWRE Index standard scores (M = 92.4, SD = 18.9, min = 55, max = 134). Many subjects were recruited for studies on dyslexia and, based on the WJ-BRS, 25% of the subjects met a typical criterion for dyslexia (standard score more than 1 SD below the population mean); 37.5% of subjects were below the 1 SD cutoff in terms of the TOWRE Index. We treat reading ability as a continuous measure throughout, as opposed to imposing a threshold that defines a group of participants as individuals with dyslexia. Complete demographic information for all the participants can be found at: https://github.com/yeatmanlab/ROAR-LDT/blob/master/data_allsubs/metadata_all.csv

All analysis code and data are publicly available at https://github.com/yeatmanlab/ROAR-LDT , with a README file that documents how to reproduce each figure and statistic reported in the manuscript. Correct/incorrect responses and log-transformed response times for each item were concatenated across subjects into a large table.
Two subjects were identified as outliers and removed from further analysis based on the following procedure: first, median response times (RTs) were calculated for each participant; then, participants were excluded if their median response time was more than 3 standard deviations below the sample mean. The two subjects who met this criterion performed the LDT at a near-chance accuracy level, suggesting that their extremely fast RTs were indicative of random guessing (lack of experiment compliance). These participants are displayed in the figures as open circles for reference. For the analysis of RT data, responses shorter than 0.2 seconds or longer than 5 seconds were removed. Then, quartiles and the interquartile range (IQR) of the RT distribution were calculated per participant. Responses that were longer than 3 times the IQR above the third quartile, or shorter than 3 times the IQR below the first quartile, were further excluded (see (61, 62) for similar outlier exclusion procedures). These steps resulted in the exclusion of 4.6% of total trials (a code sketch of this exclusion rule appears below). Analysis of RT data was conducted using the lme4 package in R (63).

The following sections lay out the plan for a pre-registered (https://osf.io/ktu92/), large-scale norming study for the Rapid Online Assessment of Reading (ROAR) browser-based LDT. Collection of normative data is planned to occur after receiving reviews on Study 1, and normative data will be included in the revision of this manuscript. A shorter version of the browser-based LDT (252 trials in Study 2 vs. 500 trials in Study 1) was implemented based on the three optimized 76-item lists that were designed in Study 1. Each list is presented in a block of trials; the order of the lists is randomized, as is the order of the words within each list (but words are not mixed across lists). Otherwise, all the details of the experiment were maintained from Study 1. Twelve new words were selected from grade-level text (1st and 2nd grade) in order to include easy vocabulary words for English language learners (ELLs) and young elementary school children. Pseudoword pairs were created for each new real word by rearranging the letters while maintaining orthographic regularities (as in Study 1). These new items (4 real words, 4 pseudowords) were added to each list for a final length of 84 items per list. The three stimulus lists for Study 2 are shown in Supplementary Table 1. N >= 600 children between 1st and 6th grade (n >= 100 per grade) will be recruited through partnerships with local school districts and research participant databases (http://dyslexia.stanford.edu). These data will be used to establish norms for the Rapid Online Assessment of Reading (ROAR) browser-based LDT. Out of the full sample, 150 subjects will be administered a battery of standardized tests over Zoom (same tests as Study 1). This will allow the scale and norms of the ROAR to be equated to the scale of other tests that are commonly used in research.
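The trial-level RT exclusion rule described above (0.2-5 s absolute bounds, then a per-participant fence of 3 times the IQR around the first and third quartiles) can be sketched in a few lines of R. The column names (rt, subject) and data frame are hypothetical; this illustrates the stated rule rather than reproducing the repository's code.

```r
# RT trimming sketch: absolute bounds, then per-participant IQR fences.
# 'trials' is a trial-level data frame with hypothetical columns rt and subject.
n_total_trials <- nrow(trials)

trials <- subset(trials, rt >= 0.2 & rt <= 5)

keep_by_iqr <- function(rt) {
  q   <- quantile(rt, c(0.25, 0.75), na.rm = TRUE)
  iqr <- q[2] - q[1]
  rt >= q[1] - 3 * iqr & rt <= q[2] + 3 * iqr
}
trials <- do.call(rbind, lapply(split(trials, trials$subject),
                                function(d) d[keep_by_iqr(d$rt), ]))

1 - nrow(trials) / n_total_trials   # proportion of trials excluded
```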
References
...are not present in illiterate adults. Dyslexia 0, 1-15 (2018).
Dyslexia as a phonological deficit: Evidence and implications
The Nature of Phonological Processing and Its Causal Role in the Acquisition of Reading Skills
Specific reading disability (dyslexia): What have we learned in the past four decades?
From single to multiple deficit models of developmental disorders
Individual Prediction of Dyslexia by Single vs. Multiple Deficit Models
Dyslexia: a deficit in visuo-spatial attention, not in phonological processing
Sensory theories of developmental dyslexia: three challenges for research
Developmental dyslexia: Specific phonological deficit or general sensorimotor dysfunction?
Developmental dyslexia: The visual attention span deficit hypothesis
The causal relationship between dyslexia and motion perception reconsidered
Optimizing text for an individual's visual system: The contribution of visual crowding to reading difficulties
The current status of the magnocellular theory of developmental dyslexia
To see but not to read; the magnocellular theory of dyslexia
Specific reading disability: Differences in Contrast Sensitivity as a Function of Spatial Frequency
The implausibility of low-level visual deficits as a cause of children's reading difficulties
It is the egg, not the chicken; dorsal visual deficits present in dyslexia
The magnocellular deficit theory of dyslexia: The evidence from contrast sensitivity
The need to differentiate the magnocellular system from the dorsal stream in connection with dyslexia
Temporal processing deficits of language-learning impaired children ameliorated by training
Auditory temporal perception, phonics, and reading disabilities in children
Reading ability and phoneme categorization
Categorical phoneme labeling in children with dyslexia does not depend on stimulus duration
Auditory processing in dyslexia and specific language impairment: Is there a deficit? What is its nature? Does it explain anything?
Perceptual bias reveals slow-updating in autism and fast-forgetting in dyslexia
Dyslexia and the failure to form a perceptual anchor
Dysfunction of Rapid Neural Adaptation in Dyslexia
Impaired Statistical Learning in Developmental Dyslexia
The Role of Statistical Learning in Word Reading and Spelling Development: More Questions than Answers
The generalizability crisis
Power failure: why small sample size undermines the reliability of neuroscience
A manifesto for reproducible science
jsPsych: a JavaScript library for creating behavioral experiments in a Web browser
Measuring sequences of keystrokes with jsPsych: Reliability of response times and interkeystroke intervals
Gorilla in our midst: An online behavioral experiment builder
The timing mega-study: comparing a range of experiment generators, both lab-based and online
NIH Toolbox for Assessment of Neurological and Behavioral Function
Wide range achievement test--revision 3
Wechsler individual achievement test--Second UK edition. The Psychological Corporation
Visual Word Recognition: The Journey From Features to Meaning (A Travel Update)
A distributed, developmental model of word recognition and naming
The effect of word length on lexical decision in dyslexic and normal reading children
What lexical decision and naming tell us about reading
Lexicality and stimulus length effects in Italian dyslexics: role of the overadditivity effect
Visual word recognition of single-syllable words
The British Lexicon Project: lexical decision data for 28,730 monosyllabic and disyllabic English words
Word frequency, repetition, and lexicality effects in word recognition tasks: beyond measures of Central Tendency
A Diffusion Model Account of the Lexical Decision Task
Disruption of posterior brain systems for reading in children with developmental dyslexia
Word selectivity in high-level visual cortex and reading skill
Learning disabilities: From identification to intervention
Probabilistic models for some intelligence and attainment tests
mirt: A multidimensional item response theory package for the R environment
Properties of Rasch residual fit statistics
Test of word reading efficiency
Cross-task strategic effects
How lexical decision is affected by recent experience: symmetric versus asymmetric frequency-blocking effects
PsychoPy2: Experiments in behavior made easy
Wuggy: a multilingual pseudoword generator
Morpho-orthographic segmentation without semantics
Comparing word processing times in naming, lexical decision, and progressive demasking: evidence from chronolex
Fitting Linear Mixed-Effects Models using lme4

We would like to thank the Pavlovia and PsychoPy team for their support on the browser-based experiments. This work was funded by NIH NICHD R01HD09586101, research grants from Microsoft, and a Jacobs Foundation Research Fellowship to J.D.Y.

Supplementary Figure 2. Retained real words (blue) and rejected real words (red) from Step 3 of the IRT analysis. Retained real words were longer than rejected real words (p < 0.0001 ***) and retained pseudowords (p = 0.050 *). Bottom panel (left: real words; right: pseudowords): pairwise comparisons revealed that retained real words were significantly lower in the log frequency of their orthographic neighbors compared to the rejected real words (p < 0.0004 **). These analyses show that the more unique a word is (longer and with lower-frequency neighbors), the higher its discriminative power for reading performance. Darker colors are pseudowords and lighter colors are real words.