Commercial Database Design vs. Library Terminology Comprehension: Why Do Students Print Abstracts Instead of Full-Text Articles?

Bonnie Imler and Michelle Eichelberger

Bonnie Imler is Library Director at Penn State Altoona; e-mail: bbi1@psu.edu. Michelle Eichelberger is Systems and Electronic Services Librarian at Genesee Community College; e-mail: maeichelberger@genesee.edu. © 2014 Bonnie Imler and Michelle Eichelberger, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/3.0/) CC BY-NC

When asked to print the full text of an article, many undergraduate college students print the abstract instead of the full text. This study seeks to determine the underlying cause(s) of this confusion. In this quantitative study, participants (n=40) performed five usability tasks to assess the ease of use and usefulness of five commercial library databases and were surveyed on their understanding of library terminology. The study revealed that more than half of the students correctly defined the term “Abstract” and over 75 percent understood “full text.” However, only 25 percent of the students were able to successfully complete all five database tasks.

Over the past 10 to 15 years, while working at the reference desk, the authors have come across hundreds of printouts of journal article abstracts lying unclaimed next to the library’s printers. These orphan abstracts raise the question, “Do students know the difference between an abstract and the full text of an article before they hit the print button?” For all of the abstracts that were left unclaimed, the authors knew that many more were being given to professors as examples of journal articles and being used as sources for research papers. In a previous research study, the authors found that, of 39 students who had been exposed to the concepts of “abstract” and “full text” through library instruction, only 62 percent were able to find and print the full text of five articles related to their research assignment. The remaining 38 percent of students printed at least one abstract in place of the full text.1 The authors were interested in studying the underlying cause or causes of this disconnect. Did our students not know the difference between an abstract and the full text of an article, even when they had received instruction on these concepts? Or were the database results pages designed so that the full text of the article was too difficult to find? Or was there a combination of factors leading to the students’ obvious confusion?

The authors selected Proquest as the test database for their previous study because it was the most popular database at Penn State University, and it was the only aggregate database listed on the “Try These First” research help page. While Proquest includes a high percentage of full-text articles, the full text of some articles can only be retrieved by using SFX citation linking software to connect to more specialized databases, which are less familiar to undergraduates. Any student who uses SFX to retrieve the full text of a Proquest citation will run into databases whose format and design are very different from that of Proquest.
During the first study, the authors repeatedly watched students struggle to find the full text of articles in these databases and noted that participants had an especially difficult time finding the full text of articles in the following five databases: JAMA, SpringerLink, Oxford Journals, Cambridge, and Pediatrics (Official Journal of the American Academy of Pediatrics).2 The authors prepared a follow-up study to determine if the failure to find the full-text articles on these citation/abstract pages was due to students’ failure to comprehend the difference between “abstract” and “full text,” or if it was due to the design of the databases’ web pages. This follow-up study is relevant because, even though most Penn State students do not start their research process with these specialized databases, they are likely to encounter these and other more obscure resources when they use the Penn State citation linking service to retrieve the full text of a citation-only article from Proquest or another general aggregate database.

The authors designed a test scenario to determine if database design was an influence on the full-text discovery success rate. The authors followed the online testing activity with a survey on the terms “abstract,” “full text,” and “pdf” to discover how well the participating students understood the database terminology. Analysis of the terminology survey and the screen capture video from the student research sessions showed that, in the case of the five databases selected, database design was more of a deterrent to task completion than student misunderstanding of library terminology.

Literature Review

Library jargon has been a problem since long before the development of online resources. According to Vaughn and Callicott, “Library terminology has consistently been a ‘sticky wicket’ for librarians and library users…”3 Naismith and Stein’s 1989 article about library jargon reported that patrons misunderstood library terminology approximately half of the time.4 John Kupersmith keeps a running list of terms reported by libraries as being misunderstood on his website.5 Spivey’s study from the year 2000 showed that potentially unclear library terminology appeared on all 60 of the college and university library home pages included in the study.6 A review of the literature shows that many of the problems that occur during library website usability testing have more to do with patron failure to comprehend library terms such as “database,” “periodical,” or “catalog” than with poor web design.7 Krueger et al. point out that “The majority of studies agree that users are frustrated by confusing library terminology and an overwhelming amount of information.”8 Understanding that library jargon could affect student ability to find the full text of articles from the five selected databases, the authors designed this study to include a survey to test student comprehension of the three words most vital to article retrieval: “abstract,” “full text,” and “pdf.”

The usability study is a common and well-respected tool for the assessment of academic library web pages, and the literature contains dozens of articles on the practices and issues of usability testing, most of them “how-to” studies and descriptions.
Usability studies may be implemented in response to results from library evaluation tools like LibQUAL, or in preparation for a major site upgrade.9 They are also used as ongoing assessment tools for libraries.10 Usability study methodology can include reviewing website and database usage logs, focus groups, direct observation, card sort protocol, and think-aloud protocol.11 Most academic library usability studies encompass the entire library website, covering usage of the library catalog, its informational pages, and information related to periodicals and databases. Many of these studies included one or two tasks designed to encourage participants to use the library’s databases,12 and several studies focused specifically on the library page portion of the article retrieval process,13 but no studies have focused on the usefulness of the web design of the databases themselves.

A note should be made here about the difference between “ease of use” and “usefulness” in terms of usability studies. Vaughn and Callicott point out the risks involved in usability studies in their article “Broccoli Librarianship and Google-Bred Patrons.”14 Library users may be more attracted to web pages that are as easy to use as a Google search box, but focusing on the ease of use of a library website may cause the researcher to lose sight of the “usefulness,” or the added value, of library material over a simple Google search. Paraphrasing researcher Stanley Dicks,15 Vaughn and Callicott note that “usefulness refers to the overall usefulness of the product. Does it do what it is supposed to do? Is it usable at all? Does it work? [as opposed to simply being easy to use]”16 Tsakonas and Papatheodorou had a slightly different perspective on the “ease of use” vs. “usefulness” debate and considered ease of use to be a crucial part of usability, which they defined as focusing on “the effective, efficient and satisfactory task accomplishment and aims to support a normal and uninterrupted interaction between the user and the system.”17 The authors of this study were interested in both ease of use and usefulness, but they designed the study to focus primarily on whether or not the five selected databases “worked,” meaning that they were useful in leading the student researcher to the full text of an article.

Methodology

Participants

After receiving approval from the Penn State University Institutional Review Board, the authors began recruiting students for the study. In the previous study on usage of the SFX citation linking software, the authors recruited students who had already received library instruction, with the hope that most of them would have heard of the SFX service during their instruction class and that they would be able to complete the tasks assigned to them by using SFX. For the follow-up study, the authors were curious to see if native understanding of the terms “abstract” and “full text” made any difference in the ability of students to successfully find the full text of an article in the five selected databases. To test if the five selected databases were usable without any special training, the researchers recruited first-year students who had never received library instruction during their university experience. Jannik and Whang et al.
have noted that library instruction can affect usability testing results, and the authors wanted to remove that variable from the study.18 After contacting faculty members who do not regularly request library instruction, the lead author was invited into eleven 100-level courses to present her research topic and to recruit participants. The author stressed that participation was voluntary and in no way influenced students’ course grades. While planning the study, the authors knew that, according to Jakob Nielsen, only five students were needed for a qualitative usability study,19 but they were interested in completing a quantitative study to generalize broader user behavior, which, according to Nielsen, required twenty students.20 Hoping to gather even more quantitative data, the authors capped the student participation number at forty.

Forty undergraduate first-year students over the age of seventeen self-selected to participate in the study. Each participant was given an appointment card marked with the time and date on which to arrive at the lead author’s office. No additional demographic information was captured for the study. Students were given an implied informed consent form that described the study and any risks involved. A statement on the form indicated that completion of the computer-based research session implied that the student consented to take part in the study.

Materials

A basic Dell computer loaded with the screen capture software TechSmith Morae was used to collect and analyze the data in this study. This computer was similar to the desktop models available to students in the library, and the default Internet browser, Internet Explorer, was also familiar to the participants from their use of campus computers. The authors chose Morae as their screen capture tool because it gave them the ability to set up discrete tasks/assignments for the participants, allowed the participants to leave the canned assignment to perform live interaction with preloaded websites, and could also be set up to capture survey data. Like other screen capture software such as SnapzPro and Camtasia, Morae can be set up to record audio, video, on-screen activity, and keyboard/mouse strokes during a defined period of time.

The study included an electronic survey, activated by the Morae software after each participant completed the five research tasks. The survey asked the participants to define, in their own words, the terms “abstract,” “full text,” and “pdf.” The authors chose to administer the survey after the completion of the research tasks so that the students would not be alerted to the importance of those terms in their research tasks, which focused on finding the location of the full text of an article on a database citation page.

Research Design and Procedure

Battleson et al. note that “in formal usability testing, users are observed using a site, or prototype, to perform given tasks or achieve a set of defined goals.”21 For this usability study, the authors set up one task to be completed in five different commercial databases. Study participants were asked to find and print the full text of five preselected articles (one per database) from the citation/abstract page in the following five databases subscribed to by Penn State University: JAMA, SpringerLink, Oxford Journals, Cambridge, and Pediatrics (Official Journal of the American Academy of Pediatrics).
These five databases had proved difficult for students to navigate in the authors’ previous study on SFX usage in Proquest, and the authors were curious to see if student participants in the follow-up study would face similar usability challenges.22 While more obscure than Proquest or other popular aggregate databases, these five databases are commonly accessed via the aggregate databases by students who use the SFX citation linking software. Although SFX enables students to link to the full text of articles in other database platforms, it does not always open the full text of the article for the student researcher. Instead, the link resolver often leads the researcher to the citation/abstract page rather than to the full text of the article. It was observed in the previous study that students who successfully followed the SFX links from Proquest to the citation/abstract page of a different database were then unsuccessful in identifying the location of the full text of the article, often marked by the words “Full Text” or a PDF symbol.23 The authors designed this study to test student ability to locate the full text of the article from the citation/abstract page (that is to say, the SFX final landing page) in the five databases listed above.

The authors designed a welcome screen in Morae that provided the participants with instructions for navigating through a series of five tasks. This welcome screen appeared as a small gray box centered at the top of the computer screen. Instructions for each subsequent task were presented in a similar gray box. The task box contained a clickable URL designed to take the participant directly to the citation page of each database. (See figure 1.)

Figure 1. Task Box

The tasks were set up to be self-initiated and were not time-limited. When the participant started a task, the gray box would shrink so as not to be a distraction. A small “show instructions” button in the shrunken gray box allowed the participants to review the instructions if necessary. An “Exit Session” button gave the participant the opportunity to exit the study at any time.

Because the focus of the study was participant interaction with database search results pages, and not their database search strategies, the authors decided to create direct links to each database citation/abstract page. This saved the participants’ time and made the data analysis more efficient because the authors did not have to ignore or delete the extra screen capture time that it would have taken the participants to search for an article in each database. It also made the data more consistent because it negated the chance of participant typos and poor navigation, and it made it easy for every participant to interact with the same screens in the same databases. The authors chose the citation/abstract database page as their link destination because it is the page most often linked to when using citation linking software to find the full text of an article.

Participants were asked to print the articles so that the researchers could identify the moment when the students thought that they had completed the task and found the full text of the article. Without forcing the students to print, the researchers would have had to guess when each student thought that he or she had completed the task and found the full text.
To avoid potential printer technology problems and to save paper, the authors used a “faux printing” process developed in a previous study:24 the computer was set to send the articles to a nonexistent printer. On the welcome screen, participants were given the following instruction: “The articles you print will be sent to a printer outside of the room. After selecting print, you may move on to the next task.”

All forty of the participants used the same computer for this study. To avoid having previous research session activities affect the following sessions, the authors set the “visited link” color in the Internet browser to the same color as the “unvisited link” color so that students would not be alerted to links that had been tried by other participants. The authors also deleted the print cache after each session so that the next participant could not compare his or her printing choices with those of the previous participant.

Data Collection

On his or her scheduled research day, the study participant met with the lead author in the library and was taken to a quiet study room with a single personal computer on a desk. The participant was given a copy of the implied informed consent form to read and was encouraged to keep it in the event that he or she needed to contact the investigators in the future. The research consent form informed the student that screen capture software was in use and that it would be recording movements on the screen.

The student was seated at the computer, where the lead author activated the welcome screen for the study. The lead author then left the room, and the participant began the five assigned tasks. Screen capture began when the participant clicked the Start button on the welcome screen. Upon completion of the five tasks, the library terminology survey appeared on the computer screen. After the survey questions were completed, a thank-you message appeared, followed by a button that prompted the participant to open the study room door and seek out the researcher. The researcher then assigned a number to the file and saved it to the hard drive. When the student completed the session, he or she was given a $5.00 gift certificate to a popular convenience store near the campus.

Data Analysis

After all of the student research sessions were completed, the authors used Morae to auto-compile task completion times and responses to the vocabulary definition section. The lead author then examined each participant recording and coded it to note when the participant completed each task and whether he or she printed the abstract of the article or the full-text article (html or pdf format). As well as indicating that a particular action had occurred, each marker also noted the moment in time when that action happened, so that the researchers could look at how slowly or how quickly each participant completed each task and at which point during the task the action occurred. As the lead author reviewed the files, she created ad hoc markers to note unexpected actions taken by the participants, such as printing articles outside the scope of the task. The authors used Morae to compile the marker data and exported it into Excel spreadsheets for review.
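To make this compilation step concrete, here is a minimal sketch of how such an export might be summarized, assuming a simple CSV layout; the file name, column names, and success criterion below are illustrative assumptions rather than the authors’ actual Morae export or analysis workflow.

```python
# Hypothetical sketch: summarizes an assumed marker export, not the authors' actual data.
# Assumed CSV columns: participant, task, printed ("full_text" or "abstract"), seconds
import csv
from collections import defaultdict

full_text_tasks = defaultdict(int)    # tasks per participant that ended with a full-text printout
session_seconds = defaultdict(float)  # total task time per participant

with open("morae_markers.csv", newline="") as f:
    for row in csv.DictReader(f):
        pid = row["participant"]
        session_seconds[pid] += float(row["seconds"])
        if row["printed"] == "full_text":
            full_text_tasks[pid] += 1

n = len(session_seconds)
all_five = sum(1 for pid in session_seconds if full_text_tasks[pid] == 5)
avg_minutes = sum(session_seconds.values()) / n / 60

print(f"Printed full text for all five tasks: {all_five}/{n} ({100 * all_five / n:.0f}%)")
print(f"Average session length: {avg_minutes:.2f} minutes")
```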
Findings

Database Tasks

Out of the forty students, only ten (25%) successfully printed the full text of all five articles from the five different databases. Forty percent of the students printed all abstracts, and 35 percent printed 1–4 abstracts. Of the sixteen students who printed only abstracts, six clicked print immediately after the URL loaded for all five articles and did not spend any time scrolling or looking for a full-text indicator. The average time to complete all five tasks and the survey questions was 7.63 minutes. The shortest session lasted 4.27 minutes and the longest 12.95 minutes. (See figure 2.)

Figure 2. Student Completion of Research Tasks

Survey Results

Twenty-one of the forty participants, or slightly more than half, were able to successfully define the term “Abstract” as related to library research. Correct student definitions included “Preview of text or synopsis,” “a summary of a research article,” and “a cut-down version of the full document.” An even higher percentage of the participants (31 of 40) understood the concept of full text, defining it as “to view the entire text,” “the whole article not just the abstract and conclusion,” and “the entire reading.” The term “pdf” was more difficult for the students to define in their own words. Many students guessed at the acronym (“published document file?” “preferred document format,” “print document file”), while others admitted that they did not know what the acronym stood for but understood that it was an electronic copy of the original print article. A generous interpretation of the responses, including vague responses such as “the file that the article is saved to,” resulted in twenty-two correct answers from the forty participants. Accurate responses included “the original article,” “a type of file that represents the article exactly how it looks,” and “it is opened with Adobe and contains the full article.” (See figure 3.)

Figure 3. Number of Students Who Correctly Defined Library Terms

Discussion

Database Tasks

Like researchers Whang and Ring, the authors focused on task completion “as the primary evaluation method for measuring success.”25 According to this measure, only 25 percent of the participants in this study were successful. Sixty percent of the students found the full text of at least one of the five articles, but that is a much lower success rate than the authors would hope for when Penn State Altoona students use the college’s databases to do research. Six of the sixteen participants who did not find any full-text articles appeared to rush through the assignments and did not spend any time looking for a link to the full text. It is possible that they knew the difference between the full text and the abstract of each article but were more interested in receiving their $5.00 gift card than in spending time to thoroughly complete the assignments.

Comparison of the citation/abstract pages of the five databases revealed the following web design flaws and inconsistencies that could have contributed to student failure to retrieve the full-text articles.

Inconsistent Terminology

The five databases used various means to identify the link to the full text of the article, often using several different methods on the same page, including the words “full text,” “html,” and “pdf.” JAMA, Oxford, and Pediatrics had very similar interfaces but used slightly different terminology for their links.
JAMA used the terms “Full Text” and “Full Text (PDF),” Oxford used “Full Text (HTML)” and “Full Text (PDF),” and Pediatrics used “Full Text Free” and “Full Text (PDF) Free.” SpringerLink employed “Download PDF,” while Cambridge used the Adobe PDF icon next to the words “View PDF” and a globe icon next to “View HTML.” From analysis of the mouse movements of the study participants, none of these terms appeared to trigger immediate recognition. Out of the eight students who completed the tasks correctly, only one went directly to the full-text link purposefully, without scrolling the entire length of the web page to locate the link.

The JAMA database did offer one additional point of access to the full text that the other databases did not. The text of the abstract of the article was followed by a plain text link reading “Full text of this article.” (See figure 4.) Participants seemed to respond better to this wording and location on the page, as eight students found and used this link.

Figure 4. JAMA Screen

Location, Location, Location

JAMA, Oxford, and Pediatrics all located their full-text links to the right of the article abstract. SpringerLink and Cambridge placed their links above the abstract. All five databases positioned the links so that they would be visible to a PC or Mac user in the initial screen load, regardless of the window size. However, 88 percent of the study participants did not see these links at first glance and scrolled down the page for at least one of the five tasks. Fifteen percent of the participants did not see or identify any of the full-text links at first glance, scrolling down the page in all five databases. It is interesting that these results run contrary to those of Cockrell and Jayne, who noted that the students in their study were loath to scroll down the page to look for additional information.26 The researchers noted that, after scrolling down the pages, several participants hovered the mouse over the social bookmarking links, as if they were looking for the full-text link within the Facebook, Twitter, and Del.icio.us icons. It is possible that the participants were drawn to the colors of the social media icons, which stood out in comparison to the more neutral color scheme of the rest of the citation/abstract result pages. (See figure 5.)

Figure 5. Social Bookmarking Links

Font Size and Link Placement

When designing content for web pages, common practice is to put the most important information in the largest font, the next most important content in a slightly smaller font, and so on. The authors found that, in all five databases, the largest font on the page was used for the title of the journal article, which makes sense because the title is the best identifier for the article. However, since the main goal of the database vendor should be to provide access to the database content, the authors believe that the link to the full text of the journal article should have equal, or only slightly less, importance than the title and therefore should be displayed in a large font. In all five databases, the font of the links to the full text of the articles was smaller than that of the article title and was closer in size to that of the abstract text.
The other problem with the full-text links on all of the database results pages was that the links were intermingled with other links and text and did not stand out clearly on the page, as seen in the screenshot from the Oxford Journals database. (See figure 6.)

Figure 6. Oxford Journals Screen

The Tease

The SpringerLink database offered an additional feature that the other databases did not, and this “feature” led to confusion and task failure for seven of the forty participants, or 17.5 percent of the students. An image of the first page of the article appeared directly under the abstract, with the words “Fulltext Preview” above it. (See figure 7.) The image was formatted as a link, which seemed unnecessary, as the whole page was already visible on the citation/abstract page. Eight of the research participants clicked on the image, probably hoping to find the full text of the article, which was a logical assumption given that the word “fulltext” appeared in the image title. Clicking on the link caused a new window to pop up, but it did not provide the user with any additional content beyond the image that the user had already seen in the full-text preview window. Seven of the study participants were misled by this window and printed it out, thinking that it was the full-text view of the article. This database feature could be improved by redirecting the link to the full text of the article rather than to the first page of the article.

Figure 7. SpringerLink Screen

Survey Discussion

The authors were pleased with the results of the survey and were impressed with the native terminology understanding of first-year undergraduate students who had not received library instruction. More than half of the students were able to define all three research terms, and more than 75 percent understood the meaning of “full text.” This study’s 52.5 percent rate of comprehension for the term “abstract” was significantly higher than the rate reported by Hutcherson in his 2004 study of library jargon, which was 36.20 percent of 297 first- and second-year undergraduates at California State University.27

A few of the answers given by the participants were more polished than the researchers would have expected, leading to the suspicion that several participants may have used the PC’s Internet access to help them define the terms. However, even if these participants did use the Internet to help them define the terms, they chose the definitions that were most apt for the context, and the authors were satisfied that the students who listed the correct answers did have a good understanding of the concepts as related to library research.

After reviewing the results for the survey question about the definition of “pdf,” the authors realized that they should have designed the survey to ask the participants to define the word in relation to library research. Several of the respondents simply tried to spell out the acronym. About half of the students seemed to have a general idea of what a pdf would give them in terms of research, but the authors may have gotten a higher percentage of correct answers if the question had been reworded and expanded.

Correlation between Database Tasks and Survey

The survey showed that more than half of the participants understood all three of the terms that they encountered in their preceding series of research tasks. More than 75 percent of the students understood what full text meant, which implies to the authors that, if the participants had been able to find the link to the full text on each database citation/abstract page, they would have had a much higher success rate for the series of tasks than 25 percent. This is a significant finding, and it speaks to the inadequacy of the web page design of the five databases that were reviewed in this study.

Limitations of Study and Suggestions for Future Research

The literature reveals few examinations of databases in terms of research usability and ease of use, though there have been several studies of database usability in terms of adaptive software and compatibility with screen-reading programs.28 Librarians and library staff have very little control over the appearance and ease of use of commercial databases and have focused their studies on web pages and websites where they can easily make improvements. Another reason to avoid focusing on commercial database usability is that, as soon as a researcher identifies a problem in a particular database, it may be fixed in the next database update. Database vendors do perform their own usability studies, and some of them even employ UX (User Experience) Managers,29 but their studies may focus on their primary audience, often career professionals, rather than on the needs of undergraduate students. While there are limits and complications inherent in testing commercial databases for usability, the authors decided that the study results could be generalized to point out trends in commercial database development.

Allowing students to self-select for this study introduces the possibility of self-selection bias in the results. Students who agreed to participate could have been more interested in, or more familiar with, library procedures than the students who did not choose to participate, which would skew the data toward a higher level of library terminology understanding than that of the general population. However, the high level of library term comprehension did not help these students succeed at finding the full text of articles in the five databases, so a case can still be made that both library terminology incomprehension and poor web page design can lead to article retrieval failure. On the other hand, students who self-selected only to receive the survey incentive, and who did not take their tasks seriously, could have skewed the results in a negative direction. However, the results still show a higher than expected comprehension of the library terms “abstract,” “full text,” and “pdf,” and the six students who seemed to rush through the database tasks would have made only a 15 percent difference in the database task results if they had taken more time and been successful, shifting the results from a 25 percent success rate to a 40 percent success rate, which is still a failing rate.

Screen capture video analysis is not an exact science. The authors have made some assumptions about why study participants may have been confused by the less familiar, more obscure database citation/abstract screens.
A follow-up study could include focus groups in which students would be asked to provide web page design feedback, such as where they thought they would find the full text of the article on the database results list, or what they would change about the pages to make them more user-friendly. It is possible that this research could be approved and underwritten by the database vendors in a collaborative effort that could provide valuable data for their design teams.

Conclusions

The authors were pleasantly surprised to see that more than half of the study participants were able to define all three terms listed in the post-task survey, even without having attended a library instruction class at Penn State Altoona. More than three-quarters of the participants (31 of 40) were able to correctly define the term “full text,” which does not correspond with the fact that only 25 percent of the participants were able to find the full text of an article in all five databases. The authors’ conclusion is that the participants understood what they were looking for but could not find it on the databases’ citation/abstract screens, or were unable to correctly identify the pages that they found as full text or not full text.

What can be done about this? Librarians can spend more time teaching college students how to find the full text of articles in the library’s databases and focus on giving students the tools to identify the parts of a full-text article in any database that they might encounter during their research. Students may require more extensive instruction on how to identify an abstract and how it differs from the full text of an article. They may understand the theory of how these two views of an article are different, but they may have trouble putting that theory into practice. As we move farther and farther from the world of print periodicals, today’s students have little sense of what an article should look like, and thus they may not be equipped to recognize that an abstract is not the full text of the article.

More important, librarians need to do a better job of communicating with database vendors, working with them and providing feedback to make the database web pages more user-friendly for students at all levels of college, even beginning freshmen. According to usability expert Steve Krug, “The first immutable law of usability is, ‘Don’t make me think!’”30 Translated to the library environment, this means that librarians should not have to teach users to navigate their sites per se; rather, sites should be intuitive, and the location of needed resources should be easily identified by end users.31 Database vendors need to do a better job of making their websites intuitive, with standardized full-text article terminology, clear and obvious links to the full text of the articles, and site designs that are clean and accessible for all user levels, from first-year freshmen to graduate students. Librarians need to improve their communication with vendors to help them achieve the goal of making their sites more user-friendly for all academic researchers.

Notes

1. Bonnie Imler and Michelle Eichelberger, “Do They ‘Get It’? Student Usage of SFX Citation Linking Software,” College & Research Libraries 72, no. 5 (Sept. 2011): 461–62.

2. Ibid.

3. Debbie Vaughn and Burton Callicott, “Broccoli Librarianship and Google-Bred Patrons, or What’s Wrong with Usability Testing?” College & Undergraduate Libraries 10, no. 2 (Dec. 2003): 13.
4. Rachael Naismith and Joan Stein, “Library Jargon: Student Comprehension of Technical Language Used by Librarians,” College & Research Libraries 50, no. 5 (1989): 543.

5. John Kupersmith, “Library Terms Evaluated in Usability Tests and Other Studies,” October 20, 2011, available online at www.jkup.net/terms-studies.html [accessed 26 January 2012].

6. Mark A. Spivey, “The Vocabulary of Library Home Pages: An Influence on Diverse and Remote End-Users,” Information Technology and Libraries 19, no. 3 (Sept. 2000): 155.

7. Susan Augustine and Courtney Greene, “Discovering How Students Search a Library Web Site: A Usability Case Study,” College & Research Libraries 63, no. 4 (July 2002): 354–65.

8. Janice Krueger, Ron L. Ray, and Lorrie Knight, “Applying Web Usability Techniques to Assess Student Awareness of Library Web Resources,” Journal of Academic Librarianship 30, no. 4 (July 2004): 287.

9. Michael Whang and Donna M. Ring, “A Student-Focused Usability Study of the Western Michigan University Libraries Home Page,” Journal of Web Librarianship 1, no. 3 (July 2007): 69–70; Marie T. Ascher, Haldor Lougee-Heimer, and Diana J. Cunningham, “Approaching Usability: A Study of an Academic Health Sciences Library Web Site,” Medical Reference Services Quarterly 26, no. 2 (Summer 2007): 37–53; Kirstin Dougan and Camilla Fulton, “Side by Side: What a Comparative Usability Study Told Us About a Web Site Redesign,” Journal of Web Librarianship 3, no. 3 (July 2009): 217–37; Leanne M. VandeCreek, “Usability Analysis of Northern Illinois University Libraries’ Website: A Case Study,” OCLC Systems and Services 21, no. 3 (2005): 181–92; Janet Chisman, Karen Diller, and Sharon Walbridge, “Usability Testing: A Case Study,” College & Research Libraries 60, no. 6 (Nov. 1999): 552–69.

10. Tom Ipri, Michael Yunkin, and Jeanne M. Brown, “Usability as a Method for Assessing Discovery,” Information Technology & Libraries 28, no. 4 (Dec. 2009): 181–83.

11. Don Zimmerman and Dawn Bastian Paschal, “An Exploratory Usability Evaluation of Colorado State University Libraries’ Digital Collections and the Western Waters Digital Library Web Sites,” Journal of Academic Librarianship 35, no. 3 (May 2009): 228; Gwyneth H. Crowley et al., “User Perceptions of the Library’s Web Pages: A Focus Group Study at Texas A&M University,” Journal of Academic Librarianship 28, no. 4 (July 2002): 205; Dominique Turnbow et al., “Usability Testing for Web Redesign: A UCLA Case Study,” OCLC Systems and Services 21, no. 3 (2005): 226–34.

12. Whang and Ring, “A Student-Focused Usability Study”; Elizabeth Stephan, Daisy T. Cheng, and Lauren M. Young, “A Usability Survey at the University of Mississippi Libraries for the Improvement of the Library Home Page,” Journal of Academic Librarianship 32, no. 1 (Jan. 2006): 35–51.

13. Lydia Dixon et al., “Finding Articles and Journals via Google Scholar, Journal Portals, and Link Resolvers: Usability Study Results,” Reference & User Services Quarterly 50, no. 2 (2010): 170–81; Barbara J. Cockrell and Elaine Anderson Jayne, “How Do I Find an Article? Insights from a Web Usability Study,” Journal of Academic Librarianship 28, no. 3 (May 2002): 122.

14. Vaughn and Callicott, “Broccoli Librarianship and Google-Bred Patrons.”

15. R.
Stanley Dicks, “Mis-usability: On the Uses and Misuses of Usability Testing,” in Proceedings of the 20th Annual International Conference on Computer Documentation (Toronto, Ontario: Association for Computing Machinery, 2002).

16. Vaughn and Callicott, “Broccoli Librarianship and Google-Bred Patrons,” 6.

17. Giannis Tsakonas and Christos Papatheodorou, “Exploring Usefulness and Usability in the Evaluation of Open Access Digital Libraries,” Information Processing & Management 44, no. 3 (May 2008): 1238.

18. Heather Jeffcoat King and Catherine M. Jannik, “Redesigning for Usability: Information Architecture and Usability Testing for Georgia Tech Library’s Website,” OCLC Systems & Services 21, no. 3 (2005): 235–43; Whang and Ring, “A Student-Focused Usability Study,” 82.

19. Jakob Nielsen, “Why You Only Need to Test with 5 Users (Jakob Nielsen’s Alertbox),” March 19, 2000, available online at www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/ [accessed 26 January 2014].

20. Nielsen, “Quantitative Studies: How Many Users to Test? (Jakob Nielsen’s Alertbox),” June 26, 2006, available online at www.nngroup.com/articles/quantitative-studies-how-many-users/ [accessed 26 January 2012].

21. Brenda Battleson, Austin Booth, and Jane Weintrop, “Usability Testing of an Academic Library Web Site,” Journal of Academic Librarianship 27, no. 3 (2001): 189.

22. Imler and Eichelberger, “Do They ‘Get It’?” 462.

23. Ibid.

24. Ibid.

25. Whang and Ring, “A Student-Focused Usability Study,” 82.

26. Cockrell and Jayne, “How Do I Find an Article?”

27. Norman B. Hutcherson, “Library Jargon: Student Recognition of Terms and Concepts Commonly Used by Librarians in the Classroom,” College & Research Libraries 65, no. 4 (July 2004): 352.

28. Ron Stewart, Vivek Narendra, and Axel Schmetzke, “Accessibility and Usability of Online Library Databases,” Library Hi Tech 23, no. 2 (2005): 265–86; Suzanne L. Byerley and Mary Beth Chambers, “Accessibility and Usability of Web-Based Library Databases for Non-Visual Users,” Library Hi Tech 20, no. 2 (2002): 169.

29. Alexa Mantell and Amanda Mulvihill, “Building a Better User Experience,” Information Today 29, no. 1 (Jan. 2012): 1–36.

30. Steve Krug, Don’t Make Me Think: A Common Sense Approach to Web Usability, 2nd ed. (Berkeley, Calif.: New Riders, 2006).

31. Ascher, Lougee-Heimer, and Cunningham, “Approaching Usability,” 39.