Finding foundations: A model for information literacy assessment of first-year students
In Brief
This article presents a case study in establishing an information literacy instruction and assessment program for first-year university students at the University of Colorado Denver. Rather than presenting assessment data, we document the process by which our department engaged with the student learning assessment cycle, with the intention of allowing other information literacy professionals to see how we established an instruction program for first-year English Composition. We include descriptions of in-class exercises, rubrics, and the procedures we followed to assess the foundational information literacy skills of first-year students on our campus. This assessment was not conducted to demonstrate what students learned from librarians (thereby illustrating the value of library instruction). Rather, we assessed student learning to ascertain the information literacy skills students bring with them into a first-year English Composition course.
Introduction
The longstanding model of information literacy instruction at our institution centers on a librarian-course faculty relationship. This type of relationship has been investigated by several others, including Hardesty (1995) and Badke (2005). Typically, course faculty request a one-shot instruction session delivered by a librarian. Working with the faculty member, the librarian determines the content of the session and what, if any, student learning assessment will take place. For some libraries, that assessment is connected to the Association of College & Research Libraries (ACRL) initiative calling on academic libraries to “demonstrate value” (ACRL, 2017). In instruction sessions, librarians have tried to do just that by giving students some kind of pre-test or initial assessment to determine baseline skills at the beginning of a course (Henry, Glauner, & Lefoe, 2016). Next, the librarian provides instruction in some format (in-person, through online modules, an embedded assignment, etc.). Finally, the students demonstrate their learning through a summative assessment (a post-test, a final paper, etc.). The students’ final results are compared with their initial assessment results to prove that the librarian’s efforts had a transformative impact on student learning.
This model is widespread in academic libraries, and it had long been the practice at Auraria Library as well–until fall 2016, when the Foundational Experiences Librarian and the Pedagogy & Assessment Librarian worked together to design and implement a department-wide information literacy curriculum for first-year English Composition courses. Under this new model of instruction and assessment, the entire teaching team (including full-time instruction librarians and part-time graduate student assistants) taught from one shared lesson plan, which allowed us to gather and analyze student learning assessment data as a group. This assessment was specifically not concerned with demonstrating the library’s value. Instead, we sought to assess what students had already learned before arriving on our campus.
Doucette (2016) analyzed 39 papers published in the proceedings of the Library Assessment Conference from 2006 to 2014 and found that motivations for library assessment generally fell into two categories: the motivation to improve the library or to prove or demonstrate something about the library. Doucette’s articulation of the dichotomy between prove and improve gives us something to consider about the approach to student learning assessment in information literacy instruction. This present study addresses a gap in the literature by emphasizing the value of assessment for the purpose of examining students’ experiences and improving information literacy instruction, rather than aiming to prove how effective our library instruction has been. In this article, we provide a model for creating a sustainable cycle of instruction and assessment at an academic library by emphasizing shared curriculum, department-wide assessment activities, and coordination with a department rather than individual faculty members.
Institutional Profile
Auraria Library serves three separate institutions of higher learning—Metropolitan State University of Denver (MSU Denver), the University of Colorado Denver (CU Denver), and the Community College of Denver (CCD)—all on one shared campus in downtown Denver, Colorado. FTE enrollment across the three schools exceeds 30,000, and the total student headcount exceeds 40,000 (Auraria Higher Education Center, 2017). Approximately 5,000 faculty and staff support the three institutions, including 100 library workers. All library faculty and staff are employees of CU Denver.
The Education & Outreach Services Department of Auraria Library was formed in 2015 as a sub-group of a larger public services department. The faculty and staff in the department have extensive teaching experience, and information literacy instruction has been a service provided by Auraria Library for more than twenty years. As at many academic libraries, most of Auraria’s instruction has been determined by course faculty negotiating with individual librarians. Over time, librarians developed longstanding relationships with course faculty, loosely arranged around a subject liaison model. As a result of these relationships, librarians were not in the practice of sharing lesson plans for instruction sessions with other members of the teaching team. This meant that there was little, if any, consistency between one session and another, even though librarians taught many of the same courses. In other words, the content of an information literacy instruction session depended on the librarian customizing the session to the course faculty’s request, rather than on a standard curriculum for that specific course.
The authors of this article are relatively new to Auraria Library. Kevin Seeber began his role as Foundational Experiences Librarian in July 2015; Zoe Fisher started a year later, in July 2016, as the Pedagogy & Assessment Librarian. According to internal documents and conversations with colleagues, Auraria Library had never developed department-wide shared curriculum or engaged in programmatic student learning assessment prior to the 2016/2017 academic year. Previous assessment studies were limited to one subject-specialist librarian delivering instruction in one course and using pre-/post-tests to measure the impact of that individual’s instruction (Ferrer-Vinent & Carello, 2008; Ferrer-Vinent & Carello, 2011). Another study invited librarians and English Composition course faculty across the campus to opt-in to a pre-test/post-test information literacy assessment, but it did not systematically involve the entire teaching team or specific campus departments (Sobel & Wolf, 2011). Beginning in fall 2016, our approach involved every member of our teaching team (9 instruction librarians and 5 part-time graduate student assistants) planning, teaching, and assessing the same lesson plan across dozens of sections of a first-year English Composition course, similar to the model described by Gardner Archambault (2011).
Following a review of the General Education curriculum at CU Denver, we selected English Composition II, known as ENGL 2030. By reviewing syllabi and course catalog descriptions, we found that this is the only course required of all students that also includes a research component. There is no common syllabus shared across sections, so course assignments vary from instructor to instructor. The Education & Outreach Services Department Head reached out to both the Chair of the English Department and the Coordinator of the First-Year Writing Program to explain our goals and ensure that they would be willing to partner with the library on this new approach to information literacy instruction. While many English faculty had requested information literacy instruction in the past, the session outcomes varied based on each instructor’s assignments. There had not been a coordinated attempt to integrate a librarian-facilitated session across all sections of ENGL 2030 (approximately 25-40 sections per semester).
To achieve our goal of providing the same information literacy session to all sections of ENGL 2030, we developed a lesson that would:
– Be delivered in 75 minutes;
– Focus on skills that students could reasonably demonstrate in that amount of time;
– Focus on concepts over tools;
– Assess students through open-ended questions that invite their interpretation and experiences (rather than valuing library-centric “correct” answers).
Our New Approach to Instruction
The Foundational Experiences Librarian wrote the lesson plan in consultation with the Education & Outreach Services Department Head, using an outline that he had previously used with other first-year classes. Before the adoption of this lesson, most of the instruction provided to first-year students by the teaching team had focused on selecting specific databases and developing search terms with Boolean operators. In contrast, the ENGL 2030 lesson implemented in fall 2016 does not mention databases at all, and the bulk of the session is focused on evaluating and discussing scholarly and popular articles.
We felt that it was important to focus on the concept of evaluation in this session because the ENGL 2030 curriculum requires students to find, analyze, and integrate arguments from different types of information sources, including popular and scholarly articles. To that end, we saw value in gathering student learning assessment data at the beginning of the session in order to measure and reflect on what our students know before they receive information literacy instruction. We wanted to know: what do students know about different types of information in a first-year English Composition course? Developing this baseline would help inform our future choices in other information literacy instruction sessions, including other first-year and upper-division courses. We also wanted this evaluation to reach across as many sections of ENGL 2030 as possible, which required multiple librarians to teach from a shared lesson plan.
This lesson plan represents a significant departure from the typical information literacy instruction sessions provided by our teaching team, which often focus on database selection and database search strategies. In order to help instruction librarians feel more comfortable with this new approach, we facilitated a mock teaching session where instructors participated as students in order to familiarize themselves with the lesson plan. We also regularly provide internal professional development events focused on classroom teaching skills, such as guiding discussions, managing classroom time, and lesson planning. In June 2017, we instituted a peer teaching observation program to teach instructors how to observe and give feedback on classroom instruction. Even with all of this support, some library instructors have been hesitant about using the new lesson plan. Concerns included the fact that the session is discussion-based, not demonstration-based, and that databases and library resources are not central to the session content. Some library instructors were not comfortable with facilitating large group discussion. With these concerns in mind, we continue to work on developing our teaching skills as a department.
Lesson Plan
The learning outcomes for the session are:
1. Students will be able to identify the main characteristics associated with a scholarly article.
2. Students will be able to compare scholarly articles with other types of information.
3. Students will be able to locate scholarly articles using the library website.
The 75-minute lesson plan was divided into three parts: in the first part, students were provided with two articles and a series of questions to answer about them. In the second part, the librarian facilitated a discussion about the two articles. In the third and final segment of the lesson, the librarian briefly introduced the Library’s website and discovery layer, and invited students to do some practice searches in order to highlight the different features of the tool (e.g., the ability to save and cite sources, as well as the option to access many different formats of information in full text online).
Unlike most information literacy assessment approaches, our student learning assessment was administered at the beginning of the session, following only a brief introduction of the intended outcomes. From there, students worked in pairs or small groups to skim through two articles—one from a scholarly, peer-reviewed journal (in the fall, we used The Journal of Continuing Higher Education and in the spring, we used American Economic Journal), the other from a popular press news outlet (The Atlantic magazine and NBCNews.com, respectively)—and answer a series of questions about how the articles are written. These questions asked students to consider several of the factors they might weigh when evaluating a source of information, including the article’s source, the author’s credentials, the voice of the text, and the intended audience. Many of these questions are heavily informed by the ACRL Framework for Information Literacy for Higher Education (2015), especially the frames “Information Creation as a Process” and “Authority is Constructed and Contextual.”
To encourage deep and critical thinking, we asked students open-ended questions whose answers are not necessarily evident in the text, such as, “How long do you think it took the author to research, write, and publish this article?” This question requires some analysis regarding how information is produced, and allows us to better understand students’ current perceptions of scholarly and popular information. In other cases, though, we asked more direct questions, such as, “How did the authors gather outside information/evidence for this article?” This prompt asks students to skim the articles and look for specific markers of the processes that led to the creation of each text.
Additionally, the exercise is framed as a comparison of “Scholarly and Popular” information, rather than the more typical “Scholarly versus Popular” naming convention. This is done to support the idea that these types of information are not in competition with one another, and that the popular information with which students might be more familiar is just as useful as scholarly information, though the applicability of either type depends on context (Seeber, 2016). When facilitating the large-group discussion with the whole class, the librarian asks students to consider questions like, “Which source is better?” The answer we want to cultivate is, “It depends.” Students should feel comfortable acknowledging that news can provide recent information about current events, as well as opinions and editorials, while scholarly articles allow researchers to investigate and answer questions using research methods appropriate to their fields.
One of the more provocative questions we posed to students was, “Which article contains the author’s opinions?” While we weren’t able to go into deep conversations around this idea, we did attempt to highlight that researchers, like any human beings, have opinions. Typically, scholars express their opinions through a researched claim supported by evidence–but other scholars may disagree with their conclusion, just as newspaper editorials present opposing opinions. We hoped that this brief discussion would begin to destabilize the notion of scholarly articles as “neutral” or “unbiased” information. In their classroom responses, we found that students were ready to engage in this kind of analysis, which forms the core of the English Composition curriculum.
It was a deliberate choice on our part not to focus the session on a demonstration of library interfaces. As Wallis (2015) writes in a blog post about discourse theory and database demonstrations, there are “so many layers of destruction inherent in my process of pointing, clicking, and narrating.” She goes on to write, “I am not modeling good search strategies, I am erasing myself as a teacher.” With this in mind, we felt it was important to focus on a discussion of the similarities and differences between scholarly and popular articles and why both are important, rather than focusing on the mechanics of how to access these types of articles. This broke with the tradition of the library teaching team and challenged the expectations of some course faculty, but overall response inside and outside the library was positive, with the English Department chair noting that the Library’s curriculum complemented the English curriculum.
Assessment Procedures
It is generally agreed that the purpose of the student learning assessment cycle is to use assessment results to inform and improve pedagogy, thereby improving the student learning experience (Roscoe, 2017). With this in mind, we learned a great deal from our initial assessment practices in fall 2016, which were modified and improved for spring 2017. This study is focused on our program and processes rather than our student learning results, so this section will highlight process improvements in our assessment practices (rather than showcasing student learning gains).
One of the simplest changes that yielded the most drastic improvements was changing how we collected students’ responses. In fall 2016, we distributed paper worksheets in class, which students completed using pen or pencil. The paper worksheet was divided into three columns: questions in the far-left column, with spaces for responses about Article 1 and Article 2 in the two columns to the right. After teaching 20 sections, we collected 170 worksheets, which were de-identified (students were asked not to write their names on the sheets, but we removed names if they did) and randomly numbered. The paper worksheets were then scored by hand, in a process that will be detailed shortly.
In spring 2017, we moved the questions to an online form. This proved advantageous for multiple reasons: one, students in online and hybrid sections were able to participate in a modified version of the lesson; and two, students in face-to-face sections (taught in computer labs) were able to type in their responses, which our team found much easier to read than handwritten answers. Another helpful improvement was including questions that allowed students to answer with radio buttons and checkboxes, rather than free text. For example, when we asked students to consider how long it took to research, write, and publish each article, their options were a series of radio buttons (Hours, Days, Weeks, Months, Years), which resulted in quick and easy scoring. In another question, we asked students how the authors researched the articles (their research methods), and their options were a series of checkboxes (so they could select multiple answers). Their options included reading other articles, interviewing people, conducting an experiment, administering a survey to a group of people, and so on. Both questions are shown in Figure 1.
How long do you think it took the author to research, write, and publish this article?
- Hours
- Days
- Weeks
- Months
- Years

How did the authors gather outside information/evidence for this article? (Check all that apply)
- □ They talked to a few people (Interviews)
- □ They gave a survey to a lot of people (Polling)
- □ They gathered and analyzed numbers (Statistics)
- □ They read other articles (Citations)
- □ They wrote about what they think (Opinions)
- □ They did an experiment (Scientific Study)
- □ Other:
Figure 1. Questions from the spring 2017 online form.
How did we score the responses students typed into the online form? We found our answer in a relatively low-tech (but creative) solution: a Microsoft Word mail merge. Student responses from the form were exported to an Excel spreadsheet, where data was cleaned to remove duplicate and blank submissions. Cells were labeled accordingly, and a Microsoft Word template was created that mimicked the paper worksheet students used in the fall. The worksheet data from the Excel spreadsheet was merged into the Microsoft Word template, creating 232 paper “worksheets” with typed answers. Worksheets were randomly numbered.
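For instruction teams that would rather script this step than run a manual mail merge, the same clean-and-merge idea can be sketched in a few lines of Python. The sketch below is purely illustrative (our actual workflow used Excel and Microsoft Word); the file name responses.csv and the column names are hypothetical stand-ins for an export from the online form.

```python
# Illustrative sketch only: our actual workflow used Excel and a Word mail merge.
# "responses.csv" and the column names below are hypothetical stand-ins for an
# export from the online form.
import pandas as pd

responses = pd.read_csv("responses.csv")

# Clean the data: drop blank submissions and exact duplicates.
responses = responses.dropna(how="all").drop_duplicates()

# Shuffle and renumber the de-identified work samples.
responses = responses.sample(frac=1).reset_index(drop=True)

# "Merge" each response into a plain-text worksheet that mimics the paper template.
template = (
    "Worksheet #{n}\n"
    "Q1, Article 1: {q1_article1}\n"
    "Q1, Article 2: {q1_article2}\n"
)
for n, row in responses.iterrows():
    with open(f"worksheet_{n}.txt", "w", encoding="utf-8") as f:
        f.write(template.format(n=n, **row.to_dict()))
```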
In both fall and spring, student responses were scored with a rubric created by the Pedagogy & Assessment Librarian and the Foundational Experiences Librarian. The rubric provided scores for each question using a scale of Exemplary (2), Satisfactory (1), and Unsatisfactory (0). When scoring the worksheets, librarians used a simple online form to enter their scores. The scores were attached to the worksheet number, and each worksheet was scored at least three times. The Pedagogy & Assessment Librarian then analyzed the scoring data and determined a consensus score for each work sample. For example, if two librarians gave a score of 1 and one librarian gave a score of 0, then the consensus score would be 1.
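For readers who want to automate the consensus step, the rule we describe can be expressed compactly. The sketch below is our illustration, not the spreadsheet process the Pedagogy & Assessment Librarian actually used; it treats the consensus as the low median of at least three rubric scores, which matches the majority example above.

```python
# Illustrative sketch of the consensus rule described above. With three or more
# scorers on a 0-2 rubric scale, the low median matches the majority score and
# is always a score that was actually awarded.
from statistics import median_low

def consensus_score(scores):
    """Return the consensus of rubric scores (each 0, 1, or 2)."""
    if len(scores) < 3:
        raise ValueError("Each work sample was scored at least three times.")
    return median_low(scores)

# Example from the text: two librarians scored 1 and one scored 0 -> consensus 1.
print(consensus_score([1, 1, 0]))  # prints 1
```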
What We Did with Our Assessment Results
The Pedagogy & Assessment Librarian compiled assessment results into brief reports (approximately 2-3 pages) explaining our methods and providing an overview of scores. These reports were shared with the CU Denver English Composition department chair, the Education & Outreach Services Department, the Library as a whole, Library Administration, and the CU Denver University Assessment Committee. In general, the response to our approach was very positive. The University Director of Assessment complimented the Library’s assessment process, specifically highlighting our use of a shared rubric, multiple evaluators for each work sample, and evidence of using results to improve pedagogy (K. Wolf, personal communication, May 30, 2017).
While our assessment process yielded several benefits, the most valuable outcome was not the student learning results themselves, but rather the departmental conversations that they sparked. The process of collectively teaching the same lesson, gathering the same student work samples, and scoring student work together helped our team connect in new ways. Following the scoring sessions, our colleagues opened up about pedagogy in ways they had not before, reflecting on student responses and discussing implications for their future practice. An inspiring example of librarians working together to review student work to improve pedagogy can be found in Holliday, Dance, Davis, Fagerheim, Hedrich, Lundstrom, & Martin (2015).
Conclusion
We are not sharing this process with the expectation that it will be, or even should be, duplicated at other institutions. The purpose of this study was not to establish an empirical truth; we were not testing a hypothesis, and we do not seek for other instruction programs to replicate our findings. Instead, our aim in presenting this study is to provide an example of how libraries can conduct assessment focused on valuing students and their experiences, rather than demonstrating the value of libraries. Subsequent information literacy instruction can then build on those foundational experiences and adequately recognize the knowledge and skills students already possess, rather than viewing first-year students as entering higher education with an inherent deficit that we must address. We also believe very strongly in institutional context: the students enrolled at one school are not the same students enrolled at another. To that end, we feel it is valuable for librarians to gather their own student learning assessment data.
In conclusion, we recommend that information literacy instruction programs, regardless of size, consider the following when developing a sustainable approach to first-year information literacy instruction and student learning assessment:
– Emphasize hands-on activities in lesson plans that develop discussions around information literacy concepts (such as “Information Creation as a Process” and “Authority is Constructed and Contextual”), rather than skills (such as database demonstrations);
– Coordinate instruction with departments (for example, all sections of the same course, especially required courses with a research component), rather than with individual faculty;
– Develop and use shared lesson plans to ensure consistency, rather than individualizing lesson plans for different faculty who teach the same class;
– Collect the same assessment data across sections to gather a meaningful number of student work samples, rather than creating different assessments for each section;
– Assess all student work samples as a team of instruction librarians, using a shared rubric;
– Focus on learning your own institutional context, rather than attempting to prove that the one-shot “moved the needle” of student learning.
We’d like to thank our peer-reviewer, Lauren Wallis, as well as Ian Beilin, Annie Pho, Ryan Randall, and the entire Lead Pipe team.
References
Association of College & Research Libraries. (2015). Framework for information literacy for higher education. Retrieved from http://www.ala.org/acrl/standards/ilframework
Association of College & Research Libraries. (2017). ACRL Value of academic libraries. Retrieved from http://www.acrl.ala.org/value/
Auraria Higher Education Center. (2017). About Auraria campus. Retrieved from https://www.ahec.edu/about-auraria-campus/about-the-auraria-campus/
Badke, W. B. (2005). Can’t get no respect: Helping faculty to understand the educational power of information literacy. Reference Librarian, 43(89/90), 63-80.
Doucette, L. (2016). Acknowledging the political, economic, and values-based motivators of assessment work: An analysis of publications on academic library assessment. Retrieved from http://libraryassessment.org/bm~doc/48-doucette-2016.pdf
Ferrer-Vinent, I. J., & Carello, C. A. (2008). Embedded library instruction in a first-year biology laboratory course. Science & Technology Libraries, 28(4), 325-351. doi: 10.1080/01942620802202352
Ferrer-Vinent, I. J., & Carello, C. A. (2011). The lasting value of an embedded, first-year, biology library instruction program. Science & Technology Libraries, 30(3), 254-266. doi: 10.1080/0194262X.2011.592789
Gardner Archambault, S. (2011). Library instruction for freshman English: A multi-year assessment of student learning. Evidence Based Library & Information Practice, 6(4), 88-106. Retrieved from https://journals.library.ualberta.ca/eblip/index.php/EBLIP/article/view/10562/9379
Hardesty, L. (1995). Faculty culture and bibliographic instruction: An exploratory analysis. Library Trends, 44(2), 339-367.
Henry, J., Glauner, D., & Lefoe, G. (2016). A double shot of information literacy instruction at a community college. Community & Junior College Libraries, 21(1-2), 27-36. doi: 10.1080/02763915.2015.1120623
Holliday, W., Dance, B., Davis, E., Fagerheim, B., Hedrich, A., Lundstrom, K., & Martin, P. (2015). An information literacy snapshot: Authentic assessment across the curriculum. College & Research Libraries, 76(2), 170-187. doi: 10.5860/crl.76.2.170
Roscoe, D.D. (2017). Toward an improvement paradigm for academic quality. Liberal Education, 103(1). Retrieved from https://www.aacu.org/liberaleducation/2017/winter/roscoe
Seeber, K. P. (2016). It’s not a competition: Questioning the rhetoric of “scholarly versus popular” in library instruction. Retrieved from http://hdl.handle.net/10150/607784
Sobel, K., & Wolf, K. (2011). Updating your tool belt: Redesigning assessments of learning in the library. Reference & User Services Quarterly, 50(3), 245-258.
Wallis, L. (2015). Smashing the gates of academic discourse: Part 1. Retrieved from https://laurenwallis.wordpress.com/2015/03/23/smashing-the-gates-of-academic-discourse-part-1/
Appendix
Scholarly and Popular Articles Worksheet
Comparing Arguments in Scholarly and Popular Articles
Start by Comparing the Articles
Briefly skim the two articles to answer these questions. Don’t worry about what the articles say—instead, these questions are about how the articles are written.
| | First Article | Second Article |
|---|---|---|
| 1. What is the title of the article? What is the title of the journal, magazine, or website that published it? | | |
| 2. Do the authors have any credentials listed, such as a degree or professional experience? Why does knowing this matter? | | |
| 3. What kind of research went into the article? How can you tell? | | |
| 4. What style of language is used? Please provide an example. | | |
| 5. Identify the intended audience of the article. Who would read this? How can you tell? | | |
Now, Compare their Arguments
Next, review the sheet with summaries of both articles to answer these questions.
6. Do the two articles seem to agree or disagree? In what ways are their main ideas similar or different?
7. What topics do they focus on the most? Are there any areas that you think they ignored?
Scholarly and Popular Articles Rubric
ENGL 2030
Fall 2016
| | Exemplary (2) | Satisfactory (1) | Unsatisfactory (0) |
|---|---|---|---|
| 1. What is the title of the article? What is the title of the journal, magazine, or website that published it? | Identifies the correct article and periodical titles for BOTH examples (“How to Graduate from Starbucks” from The Atlantic and “Understanding Sources of Financial Support for Adult Learners” from The Journal of Continuing Higher Education). | Fully identifies the article and periodical for ONE example and partially identifies the other (e.g., gives one periodical title but not the other, or “The Money Report” instead of The Atlantic). | Only provides the article titles for BOTH examples (no periodical titles), or misidentifies the article/periodical titles entirely. |
| 2. Do the authors have any credentials listed, such as a degree or professional experience? Why does knowing this matter? | Identifies existence of credentials for BOTH examples. Provides rationale for how authors’ credentials impact authority/credibility (e.g., “Yes, shows their credibility on the topic”). | Identifies existence of credentials for BOTH examples. Does NOT provide rationale for how authors’ credentials impact authority/credibility. | Does NOT identify existence of credentials for BOTH examples (e.g., indicates The Atlantic article author has no credentials). |
| 3. What kind of research went into the article? How can you tell? | Qualifies types of research for BOTH examples. Provides reasoning based on evidence in the text (e.g., interviews, references). | Qualifies types of research for BOTH examples. Does NOT provide reasoning based on evidence in the text. | Does NOT qualify BOTH types of research (e.g., gives an amount of research rather than a type: “a lot,” “not much,” etc.). |
| 4. What style of language is used? Please provide an example. | Differentiates the style of language for BOTH examples and provides examples from the text. | Differentiates the style of language for BOTH examples and does NOT provide examples from the text. | Does NOT differentiate the style of language for BOTH examples (indicates the same style of language for both articles), or blank. |
| 5. Identify the intended audience of the article. Who would read this? How can you tell? | Identifies researchers within the academic field (e.g., “continuing educators”) for the scholarly source AND “the public” or “casual readers” for the popular source. | Identifies generic audiences for BOTH articles (e.g., “students over age 24” or “people on financial aid”). | Does NOT differentiate between audiences (indicates the same audience for both articles), or blank. |