Testing Information Literacy in Digital Environments: ETS's iSkills Assessment

Irvin R. Katz

Irvin R. Katz (ikatz@ets.org) is Senior Research Scientist in the Research and Development Division at Educational Testing Service.

Despite coming of age with the Internet and other technology, many college students lack the information and communication technology (ICT) literacy skills necessary to navigate, evaluate, and use the overabundance of information available today. This paper describes the development and early administrations of ETS's iSkills assessment, an Internet-based assessment of information literacy skills that arise in the context of technology. From the earliest stages to the present, the library community has been directly involved in the design, development, review, field trials, and administration to ensure the assessment and scores are valid, reliable, authentic, and useful.

Technology is the portal through which we interact with information, but there is growing belief that people's ability to handle information—to solve problems and think critically about information—tells us more about their future success than does their knowledge of specific hardware or software. These skills—known as information and communications technology (ICT) literacy—comprise a twenty-first-century form of literacy in which researching and communicating information via digital environments are as important as reading and writing were in earlier centuries (Partnership for 21st Century Skills 2003).

Although today's knowledge society challenges students with overabundant information of often dubious quality, higher education has recognized that the solution cannot be limited to improving technology instruction. Instead, there is an increasingly urgent need for students to have stronger information literacy skills—to "be able to recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information" (American Library Association 1989)—and to apply those skills in the context of technology. Regional accreditation agencies have integrated information literacy into their standards and requirements (for example, Middle States Commission on Higher Education 2003; Western Association of Schools and Colleges 2001), and several colleges have begun campuswide initiatives to improve the information literacy of their students (for example, The California State University 2006; University of Central Florida 2006). However, a key challenge to designing and implementing effective information literacy instruction is the development of reliable and valid assessments. Without effective assessment, it is difficult to know whether instructional programs are paying off—whether students' information literacy skills are improving.

ICT literacy skills are an issue of national and international concern as well. In January 2001, Educational Testing Service (ETS) convened an International ICT Literacy Panel to study the growing importance of existing and emerging information and communication technologies and their relationship to literacy. The results of the panel's deliberations over fifteen months highlighted the growing importance of ICT literacy in academia, the workplace, and society. The panel called for assessments that would make it possible to determine to what extent young adults have obtained the combination of technical and cognitive skills needed to be productive members of an information-rich, technology-based society (International ICT Literacy Panel 2002).
This article describes ETS's iSkills assessment (formerly the "ICT Literacy Assessment"), an Internet-based assessment of information literacy skills that arise in the context of technology. From the earliest stages to the present, the library community has been directly involved in the design, development, review, field trials, and administration to ensure the assessment and scores are valid, reliable, authentic, and useful.

■ Motivated by the library community

Although the results of the International ICT Literacy Panel provided recommendations and a framework for an assessment, the inspiration for the current iSkills assessment came more directly from the higher education and library community. For many years, faculty and administrators at the California State University (CSU) had been investigating issues of information literacy on their campuses. As part of their systemwide Information Competence Initiative that began in 1995, researchers at CSU undertook a massive ethnographic study to observe students' research skills. The results suggested a great many shortcomings in students' information literacy skills, which confirmed anecdotal reports from librarians and classroom faculty. However, such a massive data collection and analysis effort would clearly be unfeasible for documenting the information literacy skills of students throughout the CSU system (Dunn 2002). Gordon Smith and the late Ilene Rockman, both of the CSU Chancellor's office, discussed with ETS the idea of developing an assessment of ICT literacy that could support CSU's Information Competence Initiative as well as similar initiatives throughout the higher education community.

■ National higher education ICT literacy initiative

In August 2003, ETS established the National Higher Education ICT Literacy Initiative, a consortium of seven colleges and universities that recognized the need for an ICT literacy assessment targeted at higher education. Representatives of these institutions collaborated with ETS staff to design and develop the iSkills assessment. The consortium built upon the work of the International Panel to explicate the nature of ICT literacy in higher education. Over the ensuing months, representatives of consortium institutions served as subject-matter experts for the assessment design and scoring implementation.

The development of the assessment followed a process known as Evidence-Centered Design (Mislevy, Steinberg, and Almond 2003), a systematic approach to the design of assessments that focuses on the evidence (student performance and products) of proficiencies as the basis for constructing assessment tasks. Through the Evidence-Centered Design process, ETS staff (psychometricians, cognitive psychologists, and test developers) and subject-matter experts (librarians and faculty) designed the assessment by first considering the purpose of the assessment and defining the construct—the knowledge and skills to be assessed. These decisions drove discussions of the types of behaviors, or performance indicators, that would serve as evidence of student proficiency.
Finally, simulation-based tasks designed around authentic scenarios were crafted to elicit from students the critical performance indicators. Katz et al. (2004) and Brasley (2006) provide a detailed account of this design and development process, illustrating the critical role played by librarians and other faculty from higher education.

■ ICT literacy = information literacy + digital environments

Consortium members agreed with the conclusions of the International ICT Literacy Panel that ICT literacy must be defined as more than technology literacy. College students who grew up with the Internet (the "Net Generation") might be impressively technologically literate, more accepting of new technology, and more technically facile than their parents and instructors (Oblinger and Oblinger 2005). However, anecdotally and in small-scale studies, there is increasing evidence that students do not use technology effectively when they conduct research or communicate (Rockman 2004). Many educators believe that students today are less information savvy than earlier generations despite having powerful information tools at their disposal (Breivik 2005).

ICT literacy must bridge the ideas of information literacy and technology literacy. To do so, ICT literacy draws out the technology-related components of information literacy as specified in the often-cited standards of the Association of College and Research Libraries (ACRL) (American Library Association 1989), focusing on how students locate, organize, and communicate information within digital environments (Katz 2005). This confluence of information and technology directly reflects the "new illiteracy" concerns of educators: students quickly adopt new technology, but do not similarly acquire skills for being critical consumers and ethical producers of information (Rockman 2002). Students need training and practice in ICT literacy skills, whether through general education or within discipline coursework (Rockman 2004).

The definition of ICT literacy adopted by the consortium members reflects this view of ICT literacy as the information literacy needed to function in a technological society:

ICT literacy is the ability to appropriately use digital technology, communication tools, and/or networks to solve information problems in order to function in an information society. This includes having the ability to use technology as a tool to research, organize, and communicate information and having a fundamental understanding of the ethical/legal issues surrounding accessing and using information (Katz et al. 2004, 7).

Consortium members further refined this definition, identifying seven performance areas (see figure 1). These areas mirror the ACRL standards and other related standards, but focus on the elements judged most central to being sufficiently information literate to meet the challenges posed by technology.

■ ETS's iSkills Assessment

ETS's iSkills assessment is an Internet-delivered assessment that measures students' abilities to research, organize, and communicate information using technology. The assessment focuses on the cognitive problem-solving and critical-thinking skills associated with using technology to handle information. As such, the scoring algorithms target cognitive decision-making rather than technical competencies.
The assessment measures ICT literacy through the seven performance areas identified by consortium members, which represent important problem-solving and critical-thinking aspects of ICT literacy skill (see figure 1). Assessment administration takes approximately seventy-five minutes, divided into two sections lasting thirty-five and forty minutes, respectively.

Figure 1. Components of ICT literacy

Define: Understand and articulate the scope of an information problem in order to facilitate the electronic search for information, such as by:
■ distinguishing a clear, concise, and topical research question from poorly framed questions, such as ones that are overly broad or do not otherwise fulfill the information need;
■ asking questions of a "professor" that help disambiguate a vague research assignment; and
■ conducting effective preliminary information searches to help frame a research statement.

Access: Collect and/or retrieve information in digital environments. Information sources might be Web pages, databases, discussion groups, e-mail, or online descriptions of print media. Tasks include:
■ generating and combining search terms (keywords) to satisfy the requirements of a particular research task;
■ efficiently browsing one or more resources to locate pertinent information; and
■ deciding what types of resources might yield the most useful information for a particular need.

Evaluate: Judge whether information satisfies an information problem by determining authority, bias, timeliness, relevance, and other aspects of materials. Tasks include:
■ judging the relative usefulness of provided Web pages and online journal articles;
■ evaluating whether a database contains appropriately current and pertinent information; and
■ deciding the extent to which a collection of resources sufficiently covers a research area.

Manage: Organize information to help you or others find it later, such as by:
■ categorizing e-mails into appropriate folders based on a critical view of the e-mails' contents;
■ arranging personnel information into an organizational chart; and
■ sorting files, e-mails, or database returns to clarify clusters of related information.

Integrate: Interpret and represent information, such as by using digital tools to synthesize, summarize, compare, and contrast information from multiple sources. Tasks include:
■ comparing advertisements, e-mails, or Web sites from competing vendors by summarizing information into a table;
■ summarizing and synthesizing information from a variety of types of sources according to specific criteria in order to compare information and make a decision; and
■ re-representing results from an academic or sports tournament in a spreadsheet to clarify standings and decide the need for playoffs.

Create: Adapt, apply, design, or construct information in digital environments, such as by:
■ editing and formatting a document according to a set of editorial specifications;
■ creating a presentation slide to support a position on a controversial topic; and
■ creating a data display to clarify the relationship between academic and economic variables.
Communicate: Disseminate information tailored to a particular audience in an effective digital format, such as by:
■ formatting a document to make it more useful to a particular group;
■ transforming an e-mail into a succinct presentation to meet an audience's needs;
■ selecting and organizing slides for distinct presentations to different audiences; and
■ designing a flyer to advertise to a distinct group of users.

During this time, students respond to fifteen interactive, performance-based tasks. Each interactive task presents a real-world scenario, such as a class or work assignment, that frames the information problem. Students solve information-handling tasks in the context of simulated software (for example, e-mail, Web browser, library database) having the look and feel of typical applications. There are fourteen three- to five-minute tasks and one fifteen-minute task. The three- to five-minute tasks each target a single performance area, while the fifteen-minute task comprises a more complex problem-solving scenario that targets multiple performance areas. The simpler tasks contribute to the overall reliability of the assessment, while the more complex task focuses on the richer aspects of ICT literacy performance.

In the assessment, a student might encounter a scenario that requires him or her to access information from a database using a search engine (see figure 2). The results are tracked and strategies scored based on how he or she searches for information, such as the keywords chosen, how search strategies are refined, and how well the information returned meets the needs of the task.

Figure 2. In the iSkills assessment, students demonstrate their skills at handling information through interaction with simulated software. In this example task, students develop a search query as part of a research assignment on earthquakes.

The assessment tasks each contain mechanisms to keep students from pursuing unproductive actions in the simulated environment. For example, in an Internet browsing task, when the student clicks on an incorrect link, he might be told that the link is not needed for the current task. This message cues the student to try an alternative approach while still noting, for scoring purposes, that the student made a misstep. In a similar way, the student who fails to find useful (or any) journal articles in her database search might receive an instant message from a "teammate" providing her with a set of journal articles to be evaluated. These mechanisms potentially keep students from becoming frustrated (for example, by a fruitless search) while providing the opportunity for the students to demonstrate other aspects of their skills (for example, evaluation skills).

The scoring for the iSkills assessment is completely automated. Unlike a multiple-choice question, each simulation-based task provides many opportunities to collect information about a student and allows for alternative paths leading to a solution. Scored responses are produced for each part of a task, and a student's overall score on the test accumulates the individual scored responses across all assessment tasks.
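To make the accumulation step concrete, the short Python sketch below shows one way per-task scored responses could be rolled up into an overall raw score. It is a minimal illustration only; the data structure, names, and point values are hypothetical and are not drawn from ETS's actual scoring engine.

from typing import Dict, List

# Illustrative sketch only: each task yields several scored responses
# (for example, 0/1 or partial-credit points for individual decisions within a task).
TaskResponses = Dict[str, List[float]]  # task identifier -> scored responses for that task

def overall_raw_score(responses: TaskResponses) -> float:
    """Accumulate the individual scored responses across all assessment tasks."""
    return sum(sum(task_points) for task_points in responses.values())

# Hypothetical example: two short tasks and one longer task, each scored on several elements.
student = {
    "access_task": [1.0, 0.0, 1.0],          # e.g., keyword choice, query refinement, relevance of results
    "evaluate_task": [1.0, 1.0],
    "integrate_task_15min": [0.5, 1.0, 0.0, 1.0],
}

print(overall_raw_score(student))  # 6.5

In practice, such raw totals would then be scaled and equated before reporting; the sketch shows only the accumulation idea described above.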
The assessment differs from existing measures in several ways. As a large-scale measure, it was designed to be administered and scored across units of an institution or across institutions. As a simulation-based assessment, its tasks go beyond what is possible in multiple-choice format, providing students with the look and feel of interactive digital environments along with tasks that elicit higher-order critical-thinking and problem-solving skills. As a scenario-based assessment, it engages students in the world of the tasks, and the task scenarios describe the types of assignments students should be seeing in their ICT literacy instruction as well as examples of workplace and personal information problems.

■ Two levels of assessments

The iSkills assessment is offered at two levels: core and advanced. The core level was designed to assess readiness for the ICT literacy demands of college; it is targeted at high school seniors and first-year college students. The advanced level was designed to assess readiness for the ICT literacy challenges of transitioning to higher-level college coursework, such as moving from sophomore to junior year or transferring from a two-year to a four-year institution. The advanced level targets students in their second or third year of postsecondary study.

The key difference between the core and advanced levels is the difficulty of the assessment tasks. Tasks in the core level are designed to be easier: examinees are presented with fewer options, the scenarios are more straightforward, and the reasoning needed for each step in a task is simpler. An advanced task might require an individual to infer the search terms needed from a general description of an information need; the corresponding core task would state the information need more explicitly. In a task of evaluating Web sites, the core level might present a Web site with many clues that it is not authoritative (a ".com" URL, an unprofessional look, content that directly describes the authors as students). The corresponding advanced task would present fewer cues to the Web site's origin (for example, a professional look, but careful reading reveals that the Web site was created by students).

■ Score reports for individuals and institutions

Both levels of the assessment feature online delivery of score reports for individuals and for institutions. The individual score report is intended to help guide students in their learning of ICT literacy skills, aiding identification of students who might need additional ICT literacy instruction. The report includes an overall ICT literacy score, a percentile score, and individualized feedback on the student's performance (see figure 3). The percentile compares students to a reference group of students who took the test in early 2006 and who fall within the target population for the assessment level (core or advanced). As more data are collected from a greater number of institutions, these reference groups will be updated and, ideally, approach nationally representative norms. Score reports are available online to students, usually within one week.

Figure 3. First page of a sample score report for an individual. The subsequent pages contain additional performance feedback.

High schools, colleges, and universities receive score reports that aggregate results from the test-takers at their institution. The purpose of these reports is to provide an overview of the students in comparison with a reference group. The reports are available to institutions online after at least fifty students have taken either the core or advanced level test—that is, when there are sufficient numbers to allow reporting of reliable scores.
Figure 4 shows a graph from one type of institutional report. Users have the option to specify the reference group (for example, all students, or all students at a four-year institution) and the subset of test-takers to compare to that group (for example, freshmen, or students taking the test within a particular timeframe). A second report summarizes the performance feedback from the individual reports, providing the percentages of students who received the highest score on each aspect of performance (each of the fourteen short tasks is scored on two or three different elements). Finally, institutions can conduct their own analyses by downloading the data for their test-takers, which include each student's responses to the background questions, iSkills score, and responses to institution-specified questions.

Figure 4. Sample portion of an institutional score report: comparison between a user-specified reference group and data from the user's institution.

■ Testing the test

A variety of specialists contributed to the development of ETS's iSkills assessment: librarians, classroom faculty, education administrators, assessment specialists, researchers, user-interface and graphic designers, and systems developers. The team's combined goal was to produce a valid, reliable, authentic assessment of ICT literacy skills. Before the iSkills assessment produced official scores for test-takers, these specialists—both ETS and ICT literacy experts—subjected the assessment to a variety of review procedures at many stages of development. These reviews ranged from weekly teleconferences with consortium members during the initial development of assessment tasks (January–July 2004), to small-scale usability studies in which ETS staff observed individual students completing assessment tasks (or mockups of assessment tasks), to field trials that mirrored actual test delivery. The usability studies investigated students' comprehension of the tasks and testing environment as well as the ease of use of the simulated software in the assessment tasks. The field trials provided opportunities to collect performance data and test the automated scoring algorithms. In some cases, ETS staff fine-tuned the scoring algorithms (or developed alternatives) when the scores produced were not psychometrically sound, such as when one element of students' scores was inconsistent with their overall performance.

Through these reviews and field trials, the iSkills assessment evolved to its current form, targeting and reporting the performance of individuals who complete the seventy-five-minute assessment. In some cases, feedback from experts and field trial participants led to significant changes. For example, the iSkills assessment began in 2005 as a two-hour assessment (at that time called the ICT Literacy Assessment) that reported scores only to institutions, on the aggregated performance of their participating students. Some students entering higher education found the 2005 assessment excessively difficult, which led to the creation of the easier core level assessment. Table 1 outlines the participation volumes for the field trials and test administrations.
Table 1. Chronology of field trials and test administrations

Date | Administration | Approximate no. of students | Approximate no. of participating institutions
July–September 2004 | Field trials for institutional assessment | 1,000 | 40
January–April 2005 | Institutional assessment* | 5,000 | 30
May 2005 | Field trials for alternative individual assessment structures | 400 | 25
November 2005 | Field trials for advanced level individual assessment | 700 | 25
January–May 2006 | Advanced level individual assessment* | 2,000 | 25
February 2006 | Field trials for core level individual assessment | 700 | 30
April–May 2006 | Core level individual assessment* | 4,500 | 45
August–December 2006 | Core level: Continuous administration* | 2,100 | 20
August–December 2006 | Advanced level: Continuous administration* | 1,400 | 10

Note: Asterisks mark "live" test administrations in which score reports were issued to institutions, students, or both.

During each field trial, as well as during the institutional administration, feedback was collected from students on their experience with the test via a brief exit survey. Table 2 summarizes some results of the exit survey. Student reactions to the test were reasonably consistent: most students enjoyed taking the test and found the tasks realistic. In written comments, students taking the institutional assessment found the experience rewarding but exhausting, and thought the amount of reading excessive. Student feedback directly influenced the design of the core and advanced level assessments, including the shorter test-taking time and lighter reading load compared with the institutional assessment.

Table 2. Student feedback from the institutional assessment and the individual assessments' field trials (percentage of students agreeing with each statement)

Statement | Institutional assessment (N=4,898) | Advanced level field trials (N=736) | Core level field trials (N=648)
I enjoyed taking this test. | 61 | 59 | 67
This test was appropriately challenging. | 90 | 90 | 86
I have never taken a test like this one before. | 90 | 90 | 89
To perform well on this test requires thinking skills as well as technical skills. | 95 | 93 | 94
I found the overall testing interface easy to use (even if the tasks themselves might have been difficult). | 83 | 82 | 85
My performance on this test accurately reflects my ability to solve problems using computers and the Internet. | 63 | 56 | 67
I didn't take this test very seriously. | 25 | 25 | 23
The tasks reflect activities I have done at school, work, or home. | 79 | 77 | 78
The software tools were unrealistic. | N/A | 21 | 24

As shown in table 1 (starred rows), test administrations in 2005 and early 2006 occurred within set time frames. Beginning in August 2006, the core and advanced level assessments switched to continuous testing: instead of a specific testing window, institutions create testing sessions to suit the convenience of their resources and students. The tests are still administered in a proctored lab environment, however, to preserve the integrity of the scores.

■ Student performance

Almost 6,400 students at sixty-three institutions participated in the first administrations of the core and advanced level iSkills assessments between January and May 2006. (Some institutions administered both the core and advanced level assessments.) Test-takers consisted of 1,016 high school students, 753 community college students, and 4,585 four-year college and university students. Institutions selected students to participate based on their assessment goals: some chose to test students enrolled in a particular course, some recruited a random sample, and some issued an open invitation and offered gift certificates or other incentives. Because the sample of students is representative of neither all United States institutions nor all higher education students, these results do not necessarily generalize to the greater population of college-age students and should therefore be interpreted with caution. Even so, the preliminary results reveal interesting trends in the ICT literacy skills of participating students.

Overall, students performed poorly on both the core and advanced levels, achieving only about half of the possible points on the tests. Informally, the data suggest that students generally do not consider the needs of an audience when communicating information.
For example, they do not appear to recognize the value of tailoring material to an audience. Regarding the ethical use of information, students tend not to check the "fair use" policies of information on the assessment's simulated Web sites; unless the usage policy (for example, copyright information) is very obvious, students appear to assume that they may use information obtained online. On the positive side, test-takers appeared to recognize that .edu and .gov sites are less likely to contain biased material than .com sites. Eighty percent of test-takers correctly completed an organizational chart based on e-mailed personnel information. Most test-takers correctly categorized e-mails and files into folders. And when presented with an unclear assignment, 70 percent of test-takers selected the best question to help clarify the assignment.

During a task in which students evaluated a set of Web sites:
■ only 52 percent judged the objectivity of the sites correctly;
■ 65 percent judged the authority correctly;
■ 72 percent judged the timeliness correctly; and
■ overall, only 49 percent of test-takers uniquely identified the one Web site that met all criteria.

When selecting a research statement for a class assignment:
■ only 44 percent identified a statement that captured the demands of the assignment;
■ 48 percent picked a reasonable but too broad statement; and
■ 8 percent picked statements that did not address the assignment.

When asked to narrow an overly broad search:
■ only 35 percent selected the correct revision; and
■ 35 percent selected a revision that only marginally narrowed the search results.

Other results suggest that these students' ICT literacy needs further development:
■ in a Web search task, only 40 percent entered multiple search terms to narrow the results;
■ when constructing a presentation slide designed to persuade, only 12 percent used just those points directly related to the argument;
■ only a few test-takers accurately adapted existing material for a new audience; and
■ when searching a large database, only 50 percent of test-takers used a strategy that minimized irrelevant results.

■ Validity evidence

The goal of the iSkills assessment is to measure the ICT literacy skills of students—higher scores on the assessment should reflect stronger skills. Evidence for this validity argument has been gathered since the earliest stages of assessment design, beginning in August 2003.
These documentation and research efforts, conducted at ETS and at participating institutions, include the following:

■ The estimated reliability of iSkills assessment scores is .88 (Cronbach's alpha), a measure of the internal consistency of the test scores (the general formula for this coefficient is given after this list). This level of reliability is comparable to that of many other respected content-based assessments, such as the Advanced Placement exams.

■ As outlined earlier, the Evidence-Centered Design approach ensures a direct connection between experts' view of the domain (in this case, ICT literacy), evidence of student performance, the design of the tasks, and the means for scoring the assessment (Katz et al. 2004). Through the continued involvement of the library community in the form of the ICT Literacy National Advisory Committee and development committees, the assessment maintains the endorsement of its content by appropriate subject-matter experts.

■ In November 2005, a panel of experts (librarians and faculty representing high schools, community colleges, and four-year institutions from across the United States) reviewed the task content and scoring for the core level iSkills assessment. After investigating each of the thirty tasks and their scoring in detail, the panelists strongly endorsed twenty-six of the tasks. The remaining four tasks received less strong endorsement and were subsequently revised according to the committee's recommendations.

■ Students' self-assessments of their ICT literacy skills align with their scores on the iSkills assessment (Katz and Macklin 2006). The self-assessment measures were gathered via a survey administered before the 2005 assessment. Interestingly, although students' confidence in their ICT literacy skills aligned with their iSkills scores, iSkills scores did not correlate with the frequency with which students reported performing ICT literacy activities. This result supports librarians' claims that mere frequency of use does not translate to good ICT literacy skills, and it points to the need for ICT literacy instruction (Oblinger and Hawkins 2006; Rockman 2002).

■ Several other validity studies are ongoing, both at ETS and at collaborating institutions. These studies include using the iSkills assessment in pre-post evaluations of educational interventions, detailed comparisons of student performance on the assessment and on more real-world ICT literacy tasks, and comparisons of iSkills assessment scores with scores from writing portfolios.
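For readers unfamiliar with the statistic, Cronbach's alpha is the standard internal-consistency coefficient. The textbook definition is given here for reference; it is not a description of ETS's specific computation. For a test with k scored components,

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where \(\sigma^{2}_{Y_i}\) is the variance of scores on component i across test-takers and \(\sigma^{2}_{X}\) is the variance of total test scores. Values approach 1.0 as the scored components hang together more consistently.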
■ National ICT literacy standards and setting cut scores

In October 2006, the National Forum on Information Literacy, an advocacy group for information literacy policy (http://www.infolit.org/), announced the formation of the National ICT Literacy Policy Council. The policy council—composed of representatives from key policy-making, information-literacy advocacy, education, and workforce groups—has the charter to draft ICT literacy standards that outline what students should know and be able to do at different points in their academic careers. Beginning in 2007, the council will first review existing standards documents to draft descriptions of different levels of performance (for example, minimal ICT literacy, proficient ICT literacy), creating a framework for the national ICT literacy standards. Separate performance levels will be defined for the target populations of the core and advanced assessments. These performance-level descriptions will be reviewed by other groups representing key stakeholders, such as business leaders, healthcare educators, and the library community.

The council also will recruit experts in ICT literacy and information-literacy instruction to review the iSkills assessment and recommend cut scores corresponding to the performance levels for the core and advanced assessments. (A cut score represents the minimum assessment score needed to classify a student at a given performance level.) The standards-based cut scores are intended to help educators determine which students meet the ICT literacy standards and which may need additional instruction or remediation. The council will review these recommended cut scores and modify or accept them as appropriately reflecting national ICT literacy standards.

■ Conclusions

ETS's iSkills assessment is the first nationally available measure of ICT literacy that reflects the richness of that area through simulation-based assessment. Owing to the 2005 and 2006 testing of more than ten thousand students, there is now evidence consistent with anecdotal reports of students' difficulty with ICT literacy despite their technical prowess. The results reflect poor ICT literacy performance not only by students within one institution, but across the participating sixty-three high schools, community colleges, and four-year colleges and universities. The iSkills assessment answers the call of the 2001 International ICT Literacy Panel and should inform ICT literacy instruction to strengthen these critical twenty-first-century skills for college students and all members of society.

■ Acknowledgments

I thank Karen Bogan, Dan Eignor, Terry Egan, and David Williamson for their comments on earlier drafts of this article. The work described in this article represents contributions by the entire iSkills team at Educational Testing Service and the iSkills National Advisory Committee.

Works Cited

American Library Association. 1989. Presidential committee on information literacy: Final report. Chicago: ALA. Available online at http://www.ala.org/acrl/legalis.html (accessed June 13, 2007).

Brasley, S. S. 2006. Building and using a tool to assess info and tech literacy. Computers in Libraries 26, no. 5: 6–7, 43–48.

Breivik, P. S. 2005. 21st century learning and information literacy. Change 37, no. 2: 20–27.

Dunn, K. 2002. Assessing information literacy skills in the California State University: A progress report. Journal of Academic Librarianship 28, no. 1/2: 26–36.

International ICT Literacy Panel. 2002. Digital transformation: A framework for ICT literacy. Princeton, N.J.: Educational Testing Service. Available online at http://www.ets.org/Media/Tests/Information_and_Communication_Technology_Literacy/ictreport.pdf (accessed June 13, 2007).

Katz, I. R. 2005. Beyond technical competence: Literacy in information and communication technology. Educational Technology Magazine 45, no. 6: 144–47.

Katz, I. R., and A. Macklin. 2006. Information and communication technology (ICT) literacy: Integration and assessment in higher education. In Proceedings of the 4th International Conference on Education and Information Systems, Technologies, and Applications, F. Malpica, A. Tremante, and F. Welsch, eds. Caracas, Venezuela: International Institute of Informatics and Systemics.

Katz, I. R., et al. 2004. Assessing information and communications technology literacy for higher education. Paper presented at the Annual Meeting of the International Association for Educational Assessment, Philadelphia, Pa.
Middle States Commission on Higher Education. 2003. Developing research and communication skills: Guidelines for information literacy in the curriculum. Philadelphia: Middle States Commission on Higher Education.

Mislevy, R. J., L. S. Steinberg, and R. G. Almond. 2003. On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives 1: 3–67.

Oblinger, D. G., and B. L. Hawkins. 2006. The myth about student competency. EDUCAUSE Review 41, no. 2: 12–13.

Oblinger, D. G., and J. L. Oblinger, eds. 2005. Educating the Net Generation. Washington, D.C.: EDUCAUSE. http://www.educause.edu/educatingthenetgen (accessed Dec. 29, 2006).

Partnership for 21st Century Skills. 2003. Learning for the 21st century: A report and mile guide for 21st century skills. Washington, D.C.: Partnership for 21st Century Skills.

Rockman, I. F. 2002. Strengthening connections between information literacy, general education, and assessment efforts. Library Trends 51, no. 2: 185–98.

———. 2004. Introduction: The importance of information literacy. In Integrating information literacy into the higher education curriculum: Practical models for transformation, I. F. Rockman and Associates, eds. San Francisco: Jossey-Bass.

The California State University. 2006. Information competence initiative Web site. http://calstate.edu/ls/infocomp.shtml (accessed June 4, 2006).

University of Central Florida. 2006. Information fluency initiative Web site. http://www.if.ucf.edu/ (accessed June 4, 2006).

Western Association of Schools and Colleges. 2001. Handbook of accreditation. Alameda, Calif.: Western Association of Schools and Colleges. Available online at http://www.wascsenior.org/wasc/Doc_Lib/2001%20Handbook.pdf (accessed Dec. 22, 2006).