College & Research Libraries vol. 79, no. 4 (May 2018)

The Boolean Is Dead, Long Live the Boolean! Natural Language versus Boolean Searching in Introductory Undergraduate Instruction

M. Sara Lowe, Bronwen K. Maxson, Sean M. Stone, Willie Miller, Eric Snajdr, and Kathleen Hanna*

Boolean logic can be a difficult concept for first-year, introductory students to grasp. This paper compares the results of Boolean and natural language searching across several databases, with searches created from student research questions. Performance differences between databases varied. Overall, natural language searching is at least as good as Boolean searching. With evidence that students struggle to grasp Boolean searching, and may not use it even after instruction, it could be left out of first-year instruction, freeing up valuable class time to focus on concepts such as question development and source evaluation. As the Framework for Information Literacy does not specifically address Boolean operators, the authors suggest Boolean should have less prominence in first-year Information Literacy instruction.

* M. Sara Lowe is Educational Development Librarian in the University Library at Indiana University-Purdue University Indianapolis; e-mail: mlowe@iupui.edu. Bronwen K. Maxson is Romance Languages Librarian in Norlin Library at University of Colorado Boulder; e-mail: bronwen.maxson@colorado.edu. Sean M. Stone is the Librarian at the Indiana University School of Dentistry; e-mail: smstone@iu.edu. Willie Miller is Informatics & Journalism Librarian in the University Library at Indiana University-Purdue University Indianapolis; e-mail: wmmiller@iupui.edu. Eric Snajdr is Science Librarian in the University Library at Indiana University-Purdue University Indianapolis; e-mail: esnajdr@iupui.edu. Kathleen Hanna is Physical Education/Tourism Management Librarian in the University Library at Indiana University-Purdue University Indianapolis; e-mail: kgreatba@iupui.edu. ©2018 M. Sara Lowe, Bronwen K. Maxson, Sean M. Stone, Willie Miller, Eric Snajdr, and Kathleen Hanna, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC. doi:10.5860/crl.79.4.517

Introduction

Conventional wisdom considers knowledge of Boolean logic a basic information retrieval and Information Literacy (IL) skill. Librarians and other information professionals are taught the value of Boolean searching (referred to throughout this article interchangeably as “Boolean”) in professional education, and it is seen in instruction, reference interactions, and database interfaces. However, the concept can be difficult for first-year (introductory) students to grasp, and it can take multiple sessions before a student demonstrates effective use of Boolean logic.

A student’s ability to use Boolean operators is a performance indicator within the Association of College and Research Libraries (ACRL) Information Literacy Competency Standards for Higher Education.1 These standards, though rescinded in 2016, influenced IL education across the United States and beyond for more than fifteen years. In studies of first-year students’ IL skills, demonstrated knowledge of Boolean logic is frequently evaluated as a determinant of information retrieval proficiency.
This has led to many librarians teaching Boolean logic in one-shot instruction sessions, first-year IL modules and tutorials, and reference interactions. Anecdotally, the authors have worked at multiple institutions and taught Boolean regularly, generally introducing the concept at the first-year (introductory) level, and building on that in upper-level classes. Interestingly, while the ACRL Standards specifically mention Boolean, the new Framework for Information Literacy for Higher Education2 only refers to searching (controlled vocabulary, keywords, natural language). For students, it is not always clear that using Boolean logic is better than a natural language or phrase search in the style of Google. Anecdotally, librarians have seen stu- dents’ natural language searches yield relevant results explaining why students often find Boolean searching superfluous. Librarians and faculty could use time dedicated to teaching Boolean logic to teach other IL concepts (such as question development and source evaluation) in courses or disciplines in which Boolean searching is not es- sential. This is particularly the case in introductory courses. Moreover, the instruction of technical Boolean logic is out of sync with the Framework and its less mechanical, more conceptual approach to IL. Natural language searching has various definitions. For the purposes of this article, we define natural language as searching in phrases or sentences instead of a structured search query using operators and/or punctuation. Since their creation in the 1990s, natural language search algorithms have improved dramatically. However, there have not been any recent studies evaluating the efficacy of natural language searching com- pared to Boolean searching, and many of the older studies were from an information retrieval, rather than a pedagogical, perspective. If using Boolean logic correctly is challenging for students, is poorly used, and has no clear advantage in using it to retrieve relevant results, why are we trying to teach it to first-year, introductory students? This study sought to answer the question: Was there any advantage, based on the relevance of search results, to teaching Boolean to first-year students? This article investigates the efficacy of retrieval of Boolean searching compared to Google-style natural language searching for simple research questions of the kind used in research projects for many introductory undergraduate courses. Literature Review There are three areas most relevant to the current study: 1) how students search for information; 2) the effectiveness and limitations of Boolean and natural language searching; and 3) how librarians teach Boolean in information literacy instruction. There is extensive research on how students search for and find information, pri- marily the “principle of least effort” noted famously in library literature by Mann.3 Convenience is a major factor;4 multiple studies found that students prefer to search for information in the easiest possible way to complete their research quickly.5 This generally translates into students spending little time evaluating search results, often not venturing past the first article and rarely moving beyond the first page of results.6 Students are effectively turning over the evaluation process to the search algorithm. 
Specifically regarding Boolean, Dempsey and Valenti note that students in an intro- ductory English Composition course used odd combinations of Boolean (for instance, “NOT kids AND kids”, accidental use of OR, confusing AND and OR) demonstrating unfamiliarity with how to effectively use the connectors.7 Boolean use in OPACs, li- brary databases, and the web has been copiously studied, most concluding that users The Boolean Is Dead, Long Live the Boolean! 519 have trouble understanding how to use Boolean to get relevant results and often use Boolean incorrectly.8 Multiple studies have found only a small percentage of students use Boolean in their searches, and that is usually limited to the AND operator.9 In addition, when breaking their research question or thesis into keywords (Boolean- type searches), students tend to search a limited number of terms, usually around two.10 But, when they can pose their information need as a question (natural language, phrase searching), they use more terms, which may actually help improve retrieval of relevant results11 and, most important, increase students’ self-efficacy in the search process.12 Indeed, as Taylor noted in 1962, asking users to translate a complex question into keywords is an oversimplification of their information need.13 Boolean logic for information retrieval has been used since the 1960s, and is just one of many retrieval models (such as vector space or fuzzy set).14 Decades of information retrieval literature note the limitations of Boolean searching. Of particular importance to the current study is that Boolean searching can be difficult for novice users and often requires a trained intermediary (such as a librarian) to help craft an effective search.15 The literature indicates that even people trained in the mathematical concept of Boolean logic have trouble applying it to information retrieval.16 The 1990s heralded a new retrieval model, commonly called natural language, be- cause users do not need to enter searches as Boolean statements.17 Multiple studies in the ’90s found a slight edge to Boolean over natural language in retrieval of relevant results but not to the degree that one would be preferable over the other.18 A 1995 ar- ticle evaluating Boolean versus natural language (phrase) searching in full-text online medical textbooks found there was no statistically significant difference in recall or precision between the types of searches.19 Conversely, Turtle, in 1994, found natural language queries were more effective (for instance, they retrieved more relevant re- sults) than Boolean, even for expert searchers comfortable with Boolean.20 Ford, Miller, and Moss, in an analysis of AltaVista web search logs, found more relevant results were retrieved with best-match searching (any algorithm for ranking results according to their relevance to a search query), not with Boolean.21 Research has shown that systems can be developed to produce more accurate natural language searches.22 Beyond understanding student use of search strategies, it is also important to under- stand how librarians are teaching Boolean. There is surprisingly little in the literature about exactly how Boolean is taught to students, at what student level, and to what effect. The literature offers more from a theoretical perspective. 
Although discussing Boolean in contrast to a discovery system, which is outside the scope of the present article, Cmor and Li in 2012 noted that not having to teach Boolean would free up instruction time that could be used to delve more deeply into evaluating and engaging with sources rather than dealing with the mechanics of a search.23 Similar conclusions were drawn by Buck and Mellinger in 2011 who, in a survey of librarians using Summon, found respondents were spending more time teaching threshold concepts such as the research process, evaluating results, and peer review, and less time on Boolean and database choice.24 It is interesting that both of these studies predate, but address issues presented in, the Framework. This speaks to a disconnect in the profession between the Standards and the Framework as well as the reasons for, and the efficacy of, teaching Boolean. In the Dempsey and Valenti study mentioned above, the authors describe their teach- ing methodology, which involves selecting keywords and evaluating sources but does not specifically include Boolean.25 The curriculum Burns discusses in a nursing first- year seminar does note that students must use Boolean terminology in their searches but does not address how students used Boolean nor does it provide any assessment of students’ skills.26 Quarton explains how faculty can develop an assignment so that 520 College & Research Libraries May 2018 students can effectively write a research paper, including developing keywords and search statements using the Boolean operator AND, but with no assessment of whether students learn and retain this knowledge.27 While the present study does not include assessment of student work, there are studies that support our findings that do. Multiple studies examine assessment of library instruction of Boolean search techniques. Lacy and Chen discuss a curriculum for an introductory composition course in which they taught and assessed student use of Boolean terms. Highlighting the complexity of Boolean for novice users, the authors note that, after instruction all students structured a search using Boolean at least once, “even if they eventually reverted to natural syntax (single keywords or phrases)” [emphasis added].28 Novotny studied student learning after an instruction session on searching the OPAC and found limited evidence that students effectively applied Boolean search techniques, with a few using advanced search strategies and the Boolean OR.29 But this was not a longitudinal study, so it is unclear if students continued to use these strategies after the instruction session. Vine bluntly stated: “And then they will leave the class, go home, and… they will revert to the ‘plug-in-the-keyword’ approach.”30 Theoretically, the move from the ACRL Standards for Information Literacy31 to the Framework for Information Literacy for Higher Education32 would tend to indicate either a lessening of the importance of Boolean or, perhaps more accurately, an under- standing of broadening the focus to search strategies rather than a narrow emphasis on Boolean. As mentioned, while the ACRL Standards specifically reference Boolean, the Framework only refers to searching (controlled vocabulary, keywords, natural language). In other words, while the Standards presented more of a rote, mechanical model in specifically mentioning Boolean, the Framework allows librarians to take a conceptual approach in teaching search strategies. 
Multiple authors have addressed the Framework and how it offers a different and promising approach to Information Literacy instruction for librarians, faculty, and students.33 A two-part study by Scott indicates undergraduate students can grasp these concepts.34 Foasberg highlights how the origins of the present study are exactly represented in the Framework: “The Framework’s embrace of constructivist philosophy—which holds that knowledge is constructed and reconstructed through social interactions— makes it less reductive and more inclusive than the Standards’ positivist approach, which assumes that information is objective and measureable.”35 Why do we teach Boolean? Questioning that seemingly fundamental IL skill is, in effect, the function of the Framework. This research compares the efficacy of Boolean versus natural language searching in light of current database functionality to determine the usefulness of including Boolean instruction, including limiters, search persistence (like going beyond the first page), and variations within and between databases, in introductory undergraduate courses. We sought to answer several questions. Do Boolean searches retrieve more relevant results than natural language searches? Is there any advantage to using filters such as the scholarly article limiter? How accurate are database relevance ranking algorithms (on and beyond the first page) and should librarians be concerned if students do not go beyond the first page? Methodology We began by doing an initial search of Academic Search Premier, with Boolean and natural language queries based on a sample research topic. For simplicity’s sake, and because most students rely solely on it, we only used the Boolean operator AND. The authors performed each query twice from the basic search screen, first with no limiters and then using the “scholarly (peer reviewed) journals” filter. Four searches The Boolean Is Dead, Long Live the Boolean! 521 were performed: unfiltered Boolean; filtered Boolean; unfiltered natural language; and filtered natural language. We evaluated the first twenty-five search results (n = 25) for relevance using a rubric (see table 1 for rubric and table 2 for norming question). The rubric was normalized during this test process and led to additional descriptive language for each category including the “or” statements, which widened the inclu- sion of articles in various categories on the scale of 0 = not relevant to 3 = very relevant. After the rubric was normed, we created three research projects and query sets based on actual student research questions similar to those seen in introductory, undergradu- ate courses. These were designed to reflect diversity in subject material and database scope and content. From these questions, we created standardized queries as were used in the testing phase. Authors formed three interrater pairs and searched each of the three project queries in two or three of eight databases (see table 3). Searches were completed in 2016 between July 5 and August 3. Authors chose databases based on the perceived likelihood of an undergraduate encountering and using them to find resources for an introductory research project or paper. In an effort to cover as many disciplines as possible, the authors included: Academic Search Premier (EBSCO); Google Scholar;36 JSTOR; LexisNexis Academic; ProQuest Central; PubMed; Scopus; and Web of Science. 
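To make the contrast between the two query styles concrete, the short Python sketch below shows how one of the study's sample topics (see table 3) can be expressed both ways: key concepts chosen by the searcher and joined with AND for the Boolean version, and the information need phrased as a sentence for the natural language version. The function names and the keyword-selection step are illustrative assumptions; in the study itself, the standardized queries were constructed by the authors by hand.

# Illustrative sketch only: the two query styles compared in this study.
# Helper names are hypothetical; the study's queries were built manually.

def boolean_query(concepts):
    """Join manually chosen key concepts with the AND operator."""
    return " AND ".join(concepts)

def natural_language_query(phrase):
    """Use the information need, phrased as a sentence, as the search string."""
    return phrase.strip()

concepts = ["Television advertising", "children"]  # chosen by the searcher
phrase = "Effects of television advertising on children"

print(boolean_query(concepts))          # Television advertising AND children
print(natural_language_query(phrase))   # Effects of television advertising on children

The point of the contrast is that the Boolean version requires an extra translation step (selecting and connecting keywords), while the natural language version simply states the question.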
There has been quite a bit of previous research comparing various databases and Google Scholar,37 often focused on STEM disciplines38 and primarily comparing Scholar to Web of Science, Scopus, and PubMed.39

Each database was searched in the same manner as Academic Search Premier (EBSCO) was in the initial test phase, with four searches when possible: unfiltered Boolean; filtered Boolean; unfiltered natural language; and filtered natural language. Filtered searches included limits for scholarly/peer-reviewed articles or the closest equivalent when available: Google Scholar could not be limited in this way; JSTOR was limited to “Articles”; LexisNexis Academic, to “Law Reviews”; ProQuest Central, to “Peer reviewed”; PubMed, to “Journal Article”; Scopus and Web of Science, to “Article.”

TABLE 1. Article Relevance Evaluation Rubric
Not Relevant (0): 0 of total concepts represented, OR false hits (terms are there but used in different ways, e.g., social work instead of social rejection)
Less Relevant (1): Less than half of concepts represented, OR concepts are there but not relevant to research question
Relevant (2): Majority or all of concepts represented either in title or abstract, but when looking at abstract, may be tangential to research question
Very Relevant (3): All concepts represented in title or abstract, and abstract is relevant

TABLE 2. Norming Research Topic & Search Query
Research topic: What are the effects of social rejection on lesbians?
Boolean query: Social rejection and lesbians
Natural language query: Effects of social rejection on lesbians

The authors captured the first twenty-five search results with screenshots or other export tools for later comparison (n = 25 for all databases and all searches, except for the filtered natural language search in Academic Search Premier, for which only 20 results were returned). We recorded rubric scores for relevance as well as overlap between filtered and unfiltered searches. Interrater pairs met to discuss and normalize scores. During norming, the authors determined reading the full text of all articles was infeasible and, more important, not representative of the way students would quickly scan and evaluate articles. Because of this, rubric scores were based solely on the title and abstract.

Results

Overall Results

As might be expected, general searches such as these returned a rather high number of overall results. Natural language searches retrieved fewer total results than their Boolean counterparts (for example, unfiltered Boolean versus unfiltered natural language) in all databases except JSTOR and LexisNexis Academic, where the trend was reversed (see table 4). As mentioned above, this study only evaluated the first 25 results from each search using the rubric.

Average rubric scores for all four searches (unfiltered Boolean, filtered Boolean, unfiltered natural language, and filtered natural language) ranged from 1.98 to 2.08 (out of a high of 3), solidly in rubric level 2 = relevant (see figure 1). The unfiltered natural language search query received the highest average score of 2.08, and the filtered Boolean search query received the lowest average score of 1.98. Relevance means the majority of all concepts in the search results, in either the title or the abstract, were present, even if they may be tangential to the research question. In other words, a first-year student will likely find information about a topic using either search method, with or without the filters.
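As a minimal illustration of how the relevance figures just described can be summarized, the Python sketch below averages per-result rubric scores (0–3, per the rubric in table 1) for one search, both over the full 25 evaluated results and over a first page of results. The scores and the 20-result page size are hypothetical; the real data are the raters' recorded scores behind tables 4 and 5.

# Minimal sketch (hypothetical data): summarizing rubric scores for one search.
# Each of the top 25 results was scored 0-3 against the relevance rubric.

scores = [3, 2, 2, 3, 1, 2, 2, 0, 3, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 3, 0, 2, 2, 1, 2]
assert len(scores) == 25

average_all = sum(scores) / len(scores)              # average over the first 25 results
first_page = scores[:20]                             # e.g., a database showing 20 results per page
average_first_page = sum(first_page) / len(first_page)

print(round(average_all, 2), round(average_first_page, 2))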
When looking at only the first page of the search results, the range shifted up to 2.03–2.11, with the unfiltered natural language search again having the highest rubric scores (see figure 1). (N for the first page varied by database from n = 10 to n = 50; see the database results section for details.) This indicates the results on the first page are slightly more relevant than the full 25 results. Since research shows students do not usually go beyond the first page of search results,40 it is encouraging that database algorithms return more-relevant articles first. Results suggest the database’s internal relevance algorithm is slightly outperforming the librarian’s Boolean search. A one-way between subjects ANOVA was conducted, and there was not a significant effect between mean rubric scores at the P < .05 level.

TABLE 3. Search Queries and Databases Searched
Sample topic: What are the effects of television advertising on children?
  Boolean query: Television advertising AND children
  Natural language query: Effects of television advertising on children
  Databases searched: Academic Search Premier; Google Scholar; JSTOR
Sample topic: How can the U.S. tourism industry combat human trafficking?
  Boolean query: Tourism industry AND human trafficking
  Natural language query: U.S. tourism industry combat human trafficking
  Databases searched: LexisNexis; ProQuest Central
Sample topic: What is the effect of stress on women in the workplace?
  Boolean query: Stress AND women AND workplace
  Natural language query: Effect of stress on women in the workplace
  Databases searched: PubMed; Scopus; Web of Science

To get a more granular understanding of the relevance of search results, we divided them into thirds. The top third is interesting because, just as research shows students rarely go past the first page of results, “the farther down the first page a result appears, the less critically it is evaluated.”41 Were there really relevant results further down the page? There is not a wide difference between thirds, but results do get slightly less relevant further down the page (see figure 2). The top third of results (results 1–8) were the most relevant, with the middle and bottom third dropping slightly. On average, these databases are sending more relevant references to the top of the search results, although a one-way between subjects ANOVA was conducted and there was not a significant effect between any of the thirds at the P < .05 level.

Results by Database

When looking at individual, rather than aggregate, database results, a slightly different story emerges, with some databases clearly outperforming others in returning more relevant results (see figure 3). (Unless otherwise mentioned, n = 25, with the exception of the filtered natural language search in Academic Search Premier, where n = 20.)

TABLE 4. Number of Total Results per Search, by Database, for Each Rater
(Columns: Unfiltered Boolean | Filtered Boolean | Unfiltered Natural Language | Filtered Natural Language)
Academic Search Premier — Rater 1: 895 | 286 | 36 | 20
Academic Search Premier — Rater 2: 891 | 282 | 36 | 20
Google Scholar — Rater 1: 618,000 | same as unfiltered | 387,000 | same as unfiltered
Google Scholar — Rater 2: 569,000 | same as unfiltered | 383,000 | same as unfiltered
JSTOR — Rater 1: 19,528 | 13,928 | 91,177 | 68,446
JSTOR — Rater 2: 19,528 | 19,528 | 91,177 | 91,176
LexisNexis Academic — Rater 1: 535 | 108 | 997 | 611
LexisNexis Academic — Rater 2: 539 | 108 | 997 | 614
ProQuest Central — Rater 1: 7,893 | 1,063 | 1,761 | 233
ProQuest Central — Rater 2: 7,882 | 1,061 | 1,753 | 233
PubMed — Rater 1: 601 | 600 | 98 | 98
PubMed — Rater 2: 601 | 600 | 98 | 98
Scopus — Rater 1: 992 | 834 | 301 | 253
Scopus — Rater 2: 988 | 829 | 299 | 251
Web of Science — Rater 1: 699 | 645 | 180 | 168
Web of Science — Rater 2: 698 | 644 | 180 | 168
Note: Only the first 25 results were evaluated for each search.
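The comparisons of mean rubric scores reported above, and the within-database comparisons that follow, rely on one-way between-subjects ANOVA. The Python sketch below shows the shape of such a test across the four search conditions using scipy.stats.f_oneway; the per-result scores are hypothetical and the sketch is illustrative only, not a reproduction of the authors' analysis.

# Minimal sketch (hypothetical data): one-way between-subjects ANOVA comparing
# rubric scores across the four search conditions.
from scipy.stats import f_oneway

unfiltered_boolean = [2, 3, 2, 1, 2, 3, 2, 2, 1, 2]
filtered_boolean   = [2, 2, 1, 2, 3, 2, 2, 1, 2, 2]
unfiltered_natural = [3, 2, 2, 3, 2, 2, 1, 3, 2, 2]
filtered_natural   = [2, 2, 3, 2, 2, 1, 2, 3, 2, 2]

f_stat, p_value = f_oneway(unfiltered_boolean, filtered_boolean,
                           unfiltered_natural, filtered_natural)
print(f_stat, p_value)  # no significant difference between conditions if p >= .05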
Overall, Academic Search Premier performed the best among all four searches with an average rubric score of 2.56. Google Scholar was a close second, at 2.50. That puts these two databases close to rubric level 3 = very relevant. On the low end were ProQuest Central at 1.25 and LexisNexis Academic at 1.39, both closer to rubric level 1 = less relevant. In the middle, with rubric scores hovering around rubric level 2 = relevant, were JSTOR (2.34), Web of Science (2.18), Scopus (2.12), and PubMed (1.94). When looking at consistency, or similarity of rubric scores, between the four searches within a database, Academic Search Premier and Google Scholar also performed the best. ProQuest Central had the widest score range, 1.92 for unfiltered Boolean but only 0.52 for filtered natural language.

FIGURE 1. Average Rubric Scores for All Data
FIGURE 2. Rubric Results by Top, Middle, and Bottom Third of First 25 Results (Average of Both Raters, Average of All Databases)

Because this research focused on first-year or introductory students, it is interesting to note that, of the typical “first-year” or general subject databases, Academic Search Premier and Google Scholar greatly outperformed ProQuest Central. A one-way between subjects ANOVA comparing the differences between scores within each database confirms this consistency (or lack of consistency in the case of ProQuest Central), with no significant difference at the P < .05 level except in the case of ProQuest.42

Returning to rubric scores from the first page of results, but this time by database, there is some fluctuation in relevance (see figure 4). This is due to the default number of results displayed per page by each database: Academic Search Premier, n = 50; JSTOR and LexisNexis Academic, n = 25; ProQuest Central, PubMed, and Scopus, n = 20; Google Scholar and Web of Science, n = 10. As mentioned in the methodology, we only reviewed the first 25 results. Therefore, average scores did not change for Academic Search, JSTOR, or LexisNexis Academic. All but one of the other databases saw their average scores increase, most of them modestly, when only examining average scores from the first page. The largest increase was Google Scholar, with a 2.5 average that increased to 2.7 with only first-page results. PubMed was the only database whose average fell, 1.94 for all 25 results to 1.91 with only first-page results. A one-way between subjects ANOVA again confirms the changes were not significant.

FIGURE 3. Average of Interrater Pairs’ Rubric Scores by Database and Search Query Type
FIGURE 4. What’s on the First Page (by Database). (Numbers after database names indicate the default number of results per page.)

When examining the mean and standard deviation of each rater for each database, standard deviation was lowest on either end of the spectrum (see table 5). Academic Search Premier had a lower standard deviation and some of the highest overall rubric scores, while ProQuest Central had a lower standard deviation but the lowest overall rubric scores. Table 5 also highlights which database had the widest spread of rubric scores by rater. While almost all were consistent between raters, PubMed search results varied dramatically between Rater 1 and Rater 2, leading to disparate rubric scores between raters.
Scopus searches, conversely, also had a significant lack of overlap in results between raters; however, this was not expressed as a difference in rubric scores, implying the raters retrieved different articles of similar relevance.

TABLE 5. Mean and Standard Deviation by Database, by Rater, for Each Search
(Columns: Unfiltered Boolean mean (SD) | Filtered Boolean mean (SD) | Unfiltered Natural Language mean (SD) | Filtered Natural Language mean (SD))
Academic Search Premier — Rater 1: 2.48 (0.510) | 2.48 (0.510) | 2.64 (0.569) | 2.65 (0.489)
Academic Search Premier — Rater 2: 2.48 (0.510) | 2.44 (0.507) | 2.64 (0.569) | 2.65 (0.489)
Google Scholar — Rater 1: 2.44 (0.768) | 2.44 (0.768) | 2.56 (0.712) | 2.56 (0.712)
Google Scholar — Rater 2: 2.44 (0.768) | 2.44 (0.768) | 2.56 (0.712) | 2.56 (0.712)
JSTOR — Rater 1: 2.16 (1.281) | 2.52 (1.005) | 2.08 (1.222) | 2.32 (0.988)
JSTOR — Rater 2: 2.40 (1.041) | 2.40 (1.041) | 2.40 (1.041) | 2.40 (1.041)
LexisNexis Academic — Rater 1: 1.48 (0.714) | 1.20 (0.707) | 1.32 (0.945) | 1.48 (0.714)
LexisNexis Academic — Rater 2: 1.48 (0.714) | 1.48 (0.714) | 1.16 (0.850) | 1.48 (0.714)
ProQuest Central — Rater 1: 1.92 (0.862) | 1.08 (0.812) | 1.48 (0.770) | 0.52 (0.586)
ProQuest Central — Rater 2: 1.92 (0.862) | 1.08 (0.812) | 1.48 (0.770) | 0.52 (0.586)
PubMed — Rater 1: 2.52 (0.714) | 2.52 (0.714) | 2.48 (0.963) | 2.48 (0.963)
PubMed — Rater 2: 1.28 (0.737) | 1.24 (0.723) | 1.48 (0.963) | 1.48 (0.963)
Scopus — Rater 1: 2.04 (1.020) | 2.20 (1.000) | 2.32 (0.900) | 2.48 (0.918)
Scopus — Rater 2: 2.04 (0.935) | 1.88 (0.881) | 2.00 (0.957) | 2.00 (1.000)
Web of Science — Rater 1: 2.00 (1.225) | 2.08 (1.115) | 2.28 (0.980) | 2.28 (0.980)
Web of Science — Rater 2: 2.04 (1.207) | 2.12 (1.092) | 2.32 (0.945) | 2.32 (0.945)

While database averages tell us how well the database performed overall, the authors wanted to better understand the nuance of how each database searched (obviously without knowing the proprietary algorithms). To do this, we analyzed the overlap percentage by database, the overlap percentage of filtered versus unfiltered results, and database precision.

Overlap percentage by database gives a sense of how different results are between searches, as well as a sense of changes in database content (see figure 5). When each rater’s results were compared for their unfiltered Boolean versus unfiltered natural language searches, what was the overlap rate?43 A low overlap percentage means results are closer to being unique; a user would find more relevant results by doing both searches. The 12 percent overlap rate for Academic Search Premier is quite low; a user might benefit from doing both a Boolean and a natural language search. However, the overlap rate was similar for both raters, indicating database content was stable and raters found similar results. This is also true for Google Scholar and Web of Science. The wide difference in overlap rates for rater 1 versus rater 2 in both PubMed and Scopus is interesting and indicates frequently updated content. The timing of a search explains some of the discrepancies in overlap between raters and within a rater’s searches. In general, overlap analysis indicates that natural language and Boolean searches are finding different (unique) results, suggesting researchers looking for all articles on a topic should do both searches to increase recall.

FIGURE 5. Overlap Percentage by Database (Unfiltered Boolean vs. Unfiltered Natural Language)
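A minimal sketch of the overlap calculation follows, on hypothetical result identifiers. It assumes overlap is expressed as the share of the 25 evaluated results that appear in both searches, which is consistent with the percentages discussed here, though the authors' exact bookkeeping relied on their recorded screenshots and exports.

# Minimal sketch (hypothetical data): overlap between the top 25 results of an
# unfiltered Boolean search and an unfiltered natural language search.
# Overlap is assumed here to be the share of the 25 results common to both lists.

boolean_results = {f"article_{i}" for i in range(1, 26)}   # 25 result IDs
natural_results = {f"article_{i}" for i in range(23, 48)}  # 25 result IDs, 3 shared

shared = boolean_results & natural_results
overlap_percentage = len(shared) / 25 * 100
print(overlap_percentage)  # 12.0 -- a low overlap, like Academic Search Premier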
Something interesting happens when we examine only results that scored 3 = very relevant on the rubric. Again comparing the overlap of the unfiltered searches between Boolean and natural language syntax, and remembering that a low overlap percentage means results are unique, most databases are above 50 percent, with Academic Search Premier at more than 90 percent (see figure 6).44 This means that, for very relevant results, the search type does not matter (unfiltered Boolean or unfiltered natural language): the same very relevant results were found. First-year students who struggle with constructing Boolean searches will retrieve the same highly relevant results with a natural language search. In other words, it would not be necessary to use Boolean to find relevant results in Academic Search Premier.

FIGURE 6. Overlap Percentage by Database: Results Scoring 3 on the Rubric (Unfiltered Boolean vs. Unfiltered Natural Language)

So, if highly relevant results have a lot of overlap, thus negating the necessity of introductory students using Boolean, is there a reason to use filters in a search? When examining overlap percentages, do filters really make a difference in the uniqueness of the results? Here, a high overlap percentage indicates high duplication between searches and would mean the filter is unnecessary (see figure 7).45 Note that Google Scholar overlap is not available because filtered results were identical to unfiltered results. PubMed, Scopus, and Web of Science have very high overlap percentages, indicating filtering has little to no effect on the diversity of the results. A low overlap percentage means the filter is causing more unique and relevant results to be included in the search results. In practice, the utility of filters is most apparent in ProQuest Central and Academic Search Premier, two of the most commonly taught databases for first-year students. Filters also had some effect in JSTOR and LexisNexis Academic. Interestingly, the use of filters in Academic Search Premier had more effect with Boolean searches than natural language searches. Because of the disproportionate percentages between search query styles, these findings suggest a divergent approach to instruction with that database in particular.

FIGURE 7. Overlap Percentage by Database (Unfiltered vs. Filtered Boolean and Unfiltered vs. Filtered Natural Language)

We next looked at the unfiltered searches to compare their precision. Precision is the number of relevant or very relevant articles (scoring either a 2 or 3 on the rubric) divided by the total number of citations retrieved (n = 25). Overall, results were similar between unfiltered Boolean and unfiltered natural language, with the exception of ProQuest Central, which had an almost 20-point advantage of Boolean over natural language (see figure 8). In half of the databases, natural language searches were slightly more precise than Boolean; in the remaining half, Boolean searches were more precise in three, and there was a tie in the case of PubMed. This indicates that either style of searching retrieves relevant results.

FIGURE 8. Precision (Ratio of Relevant to Nonrelevant): A Rubric Score of 2 or 3 among the Top 25 Results
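Precision as defined above reduces to a simple proportion; the sketch below computes it from a hypothetical list of 25 rubric scores for one search.

# Minimal sketch (hypothetical scores): precision = number of results scoring 2 or 3
# on the rubric, divided by the 25 citations evaluated for the search.

def precision(rubric_scores, n=25):
    relevant = sum(1 for score in rubric_scores if score >= 2)
    return relevant / n

scores = [3, 2, 2, 0, 1, 2, 3, 2, 2, 1, 2, 2, 3, 0, 2, 2, 1, 2, 3, 2, 2, 1, 0, 2, 2]
print(precision(scores))  # 0.72, i.e., 18 of 25 results were relevant or very relevant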
Discussion

Do Boolean searches retrieve more relevant results than natural language searches? For the majority of databases included in this study, both Boolean and natural language searching delivered results of highly comparable relevance. For both types of searches, the average relevance as well as the precision ratio of search results within each database (with the exception of ProQuest Central) was strikingly similar. “First-year” databases Academic Search Premier and Google Scholar performed exceptionally well. The variability in both the average relevance and the precision ratio of search results in ProQuest Central suggests that use of Boolean has some advantage over natural language searching for this particular database. For all other databases, there was no benefit to using Boolean over natural language searching in retrieving relevant results. This was the case when considering both the first twenty-five search results and the first page of results.

There is variation within databases. In general, Boolean and natural language searches yielded different (unique) results within a given database. Among the top 25 search results within each database, the overlap of Boolean and natural language search results was quite small regardless of whether or not the results were relevant. A student performing a Boolean search would find different results from a student using natural language even though average relevance would be similar. The overlap percentage of highly relevant results (those scoring a 3 out of 3 on the rubric) varied widely by database. This suggests that, for Google Scholar, Scopus, and Web of Science, it would benefit the searcher to do both a Boolean and a natural language search if he or she wanted to get all of the most relevant results. However, for other databases, like Academic Search Premier, ProQuest Central, and PubMed, a searcher would be able to locate the same highly relevant results by using either of the two search strategies.

How accurate are database relevance ranking algorithms (on and beyond the first page) and should librarians be concerned if students do not go beyond the first page? First-year and introductory research assignments might typically ask students to locate only a handful of relevant articles on a given topic. Based on this study, one page of results, and certainly the first 25, may be enough to satisfy the needs of a first-year research assignment. We found little difference in relevance further down the page, with the top third of results slightly outperforming the bottom third. This is helpful for teaching librarians to know, especially since, as mentioned, studies have found the majority of students do not look past the first page of search results46 nor do they critically evaluate results further down that first page.47 This is not surprising given that convenience is a major factor in the way students search for information.48 The relevance of the first third of the first page of results provides evidence that librarians may not need to be too concerned about first-year students stopping after examining the first page (or first half of the first page) of results.

Is there any advantage to using filters such as the scholarly article limiter? The high overlap percentages in PubMed, Scopus, and Web of Science indicate filtering has little effect on the diversity of the results. JSTOR and LexisNexis Academic were in the middle, indicating filters may have some effect.
For teaching librarians, the low overlap percent- age in first-year databases ProQuest Central and Academic Search Premier, Boolean search, indicate filters had a greater effect than in other databases tested. However, as stated previously, the high overlap for highly relevant results (rubric level 3) in Academic Search Premier indicate that, regardless of search type, first-year students would not necessarily notice that nuance. Long live the Boolean? In what circumstances might Boolean be beneficial for un- dergraduate students? While outside the scope of this paper, Boolean may be more important for upper-level students or students with more complex research needs. For example, a student conducting an extensive literature review aiming for high recall (such as locating all possible relevant articles on a topic) or a student in a specialized discipline such as business where there may be many interrelated factors to consider (for example: industry, stakeholders, NAICS codes49). The Framework for Information Literacy for Higher Education does not explicitly address teaching Boolean operators, databases, or other technologies. In the past, the Information Literacy Competency Standards for Higher Education included Boolean operators as a performance indicator of a student who constructs and implements ef- fectively designed search strategies. The Framework only implies the use of Boolean in the context of a learner using different types of searching language, a knowledge practice of the Frame “Searching as Strategical Exploration.” Regardless of whether the omission of the term Boolean operator in the Framework is significant, its absence allows librarians to decide their own approaches. Just as the Framework asks learners to be more critical in their thinking, it also asks librarians to be flexible and creative. It may also be worth noting that database technology and search engine optimization through algorithms and other means has changed significantly since ACRL published the Standards in 2000. As technology continues to change, librarians must continually evaluate how to use it to be most effective in their instruction. One limitation of this study is that librarians did it without the direct involvement of student searchers. Although the search questions came from actual student research questions received by the authors, there is the possibility of benevolent bias. A student- centered study would add to this topic and is something the authors are exploring. Another limitation is that the present study only explored the Boolean operator AND. Additional research on the performance of the operators NOT and OR versus natural language is warranted. An additional area of study is to examine in greater depth exactly when, and under what circumstances, Boolean should be taught especially within the context of the Framework for Information Literacy. The Boolean Is Dead, Long Live the Boolean! 531 Conclusion While there are differences between and within databases, overall, a first-year, intro- ductory student using virtually any database may not need instruction on Boolean search logic, as there is no clear advantage to Boolean in terms of retrieving relevant results on a topic. This is especially true for “first-year” databases such as Academic Search Premier and Google Scholar. 
As discussed, it can take a considerable amount of class time to teach Boolean logic,50 and it is difficult for first-year, introductory students to understand and implement correctly.51 This study found no clear advantage in relevance of results between natural language and Boolean searching, suggesting that, for introductory courses, librarians can spend less time covering the mechanical “how to” aspect of searching and more time on other, more substantial information literacy concepts such as topic and ques- tion development (including search terms and terminology) and source evaluation. This approach supports the literature and the Framework, where teaching librarians express a desire to have more class time to get at those meatier concepts52 and research on the Framework indicates lower-level students can grasp these concepts.53 Topic development and source evaluation in particular stand out, as 84 percent of students in a 2010 Project Information Literacy study stated that getting started was the hardest part of the research process,54 and a 2016 Stanford study found students had difficulty evaluating information they found on the web.55 Searching is important, and teaching students to search is important, but these results demonstrate that teaching librarians can transition to focusing on more complex issues related to searching. For example, the thought process behind choosing search terms rather than the intricacies of how to link them together in a database. Notes 1. Association of College and Research Libraries, “Information Literacy Competency Stan- dards for Higher Education” (2000), available online at www.ala.org/acrl/standards/information- literacycompetency [accessed 25 April 2017]. 2. Association of College and Research Libraries, “Framework for Information Literacy for Higher Education” (Jan. 11, 2016), available online at http://www.ala.org/acrl/standards/ilframe- work [accessed 25 April 2017]. 3. Thomas Mann, A Guide to Library Research Methods (New York, N.Y.: Oxford University Press: 1987). 4. Lynn Sillipigni Connaway, Timothy J. Dickey, and Marie L. Radford, “‘If It Is Too Inconve- nient I’m Not Going after It’: Convenience as a Critical Factor in Information-Seeking Behaviors,” Library & Information Science Research 33 (2011): 179–90. 5. Barbara Valentine, ‘‘Undergraduate Research Behavior: Using Focus Groups to Generate Theory,’’ Journal of Academic Librarianship 19, no. 5 (1993): 300–04; Gloria J. Leckie, ‘‘Desperately Seeking Citations: Uncovering Faculty Assumptions about the Undergraduate Research Process,’’ Journal of Academic Librarianship 22, no. 3 (1996): 201–08; Lisa M. Given, ‘‘The Academic and the Everyday: Investigating the Overlap in Mature Undergraduates’ Information-Seeking Behaviours,’’ Library & Information Science Research 24, no. 1 (2002): 17–29; Christine Urquhart and Jennifer Rowley, ‘‘Understanding Student Information Behavior in Relation to Electronic Information Services: Lessons from Longitudinal Monitoring and Evaluation, Part 2,’’ Journal of the American Society for Information Science and Technology 58, no. 8 (2007): 1188–97; Claire Warwick, Jon Rim- mer, Ann Blandford, Jeremy Gow, and George Buchann, ‘‘Cognitive Economy and Satisficing in Information Seeking: A Longitudinal Study of Undergraduate Information Behavior,’’ Journal of the American Society for Information Science and Technology 60, no. 12 (2009): 2402–15. 6. Laura A. 
Granka, Thorsten Joachims, and Geri Gay, “Eye-Tracking Analysis of User Behavior in WWW Search,” in Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM, 2004), 479; Panos Balatsoukas and Ian Ruthven, “An Eye-Tracking Approach to the Analysis of Relevance Judgments on the Web: The Case of Google Search Engine,” Journal of the American Society for Information Science and Technology 63, no. 9 (2012): 1728–46; Andrew D. Asher, Lynda M. Duke, and Suzanne Wilson, “Paths of Discovery: Comparing the Search Effectiveness of EBSCO Discovery Service, Summon, Google Scholar, http://www.ala.org/acrl/standards/informationliteracycompetency http://www.ala.org/acrl/standards/informationliteracycompetency http://www.ala.org/acrl/standards/ilframework http://www.ala.org/acrl/standards/ilframework 532 College & Research Libraries May 2018 and Conventional Library Resources,” College & Research Libraries 74, no. 5 (2013): 464–88; Helen Georgas, “Google vs. the Library (Part II): Student Search Patterns and Behaviors When Using Google and a Federated Search Tool,” portal: Libraries and the Academy 14, no. 4 (2014): 503–32; Craig Silverstein, Hannes Marais, Monika Henzinger, and Michael Moricz, “Analysis of a Very Large Web Search Engine Query Log,” ACM SIGIR Forum 33, no. 1 (1999): 6–12. 7. Megan Dempsey and Alyssa M. Valenti, “Student Use of Keywords and Limiters in Web- scale Discovery Searching,” Journal of Academic Librarianship 42 (2016): 200–06, 204. 8. Micheline Hancock-Beaulieu, “User Friendliness and Human-Computer interaction in Online Library Catalogues,” Program 26, no. 1 (1992): 29–37; Charles R. Hildreth, “To Boolean or Not to Boolean,” Information Technology and Libraries 2, no. 3 (1983): 235–37; Charles R. Hildreth, “The Use and Understanding of Keyword Searching in a University Online Catalog,” Information Technology and Libraries 16, no. 2 (June 1997): 52–62; Bernard J. Jansen, Amanda Spink, and Tefko Saracevic, “Real Life, Real Users, and Real Needs: A Study and Analysis of User Queries on the Web,” Information Processing & Management 36, no. 2 (2000): 207–27; Gabriel K. Rousseau, Brian A. Jamieson, Wendy A. Rogers, Sherry E. Mead, and Richard A. Sit, “Assessing the Usability of On-Line Library Systems,” Behaviour & Information Technology 17, no. 5 (1998): 274–81. 9. Beth Bloom and Marta Mestrovic Deyrup, “The SHU Research Logs: Student Online Search Behaviors Trans-scripted,” Journal of Academic Librarianship 41, no. 5 (2015): 593–601; Jerome Dinet, Monik Favart, and Jean-Michel Passerault, “Searching for Information in an Online Public Access Catalogue (OPAC): The Impacts of Information Search Expertise on the Use of Boolean Opera- tors,” Journal of Computer Assisted Learning 20 (2004): 338–46; Georgas, “Google vs. the Library (Part II)”; Eng Pwey Lau and Dion Hoe-Lian Goh, “In Search of Query Patterns: A Case Study of a University OPAC,” Information Processing & Management 42, no. 5 (2006): 1316–29; Aphrodite Malliari and Daphne Kyriaki-Manessi. “Users’ Behaviour Patterns in Academic Libraries’ OPACs: A Multivariate Statistical Analysis,” New Library World 108, no. 3/4 (2007): 107–22; Eric Novotny and Ellysa Stern Cahoy, “If We Teach, Do They Learn? The Impact of Instruction on Online Catalog Search Strategies,” portal: Libraries and the Academy 6, no. 2 (2006): 155–67. 10. 
Jansen, Spink, and Saracevic, “Real Life, Real Users, and Real Needs”; Lau and Goh, “In Search of Query Patterns”; Silverstein, Marais, Henzinger, and Moricz, “Analysis of a Very Large Web Search Engine Query Log.” 11. Nicholas J. Belkin et al., “Rutgers’ TREC 2001 Interactive Track Experience,” in TREC-2001, Proceedings of the Tenth Text Retrieval Conference, eds. D. Harman and E. Voorhees (Washington, D.C.: Government Printing Office, 2001); Diane Kelly, Vijay Deepak Dollu, and Xin Fu, “The Lo- quacious User: A Document-Independent Source of Terms for Query Expansion,” in Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM, 2005): 457–64. 12. Qian Ying Wang, Clifford Nass, and Jiang Hu, “Natural Language Query vs. Keyword Search: Effects of Task Complexity on Search Performance, Participant Perceptions, and Prefer- ences,” in IFIP Conference on Human-Computer Interaction (Springer Berlin Heidelberg, 2005), 106–16. 13. Robert S. Taylor, “The Process of Asking Questions,” American Documentation 13, no. 4 (1962): 391–96. 14. Robert M. Losee and Lee Anne H. Paris, “Measuring Search-Engine Quality and Query Difficulty: Ranking with Target and Freestyle,” Journal of the Association for Information Science and Technology 50, no. 10 (1999): 882–89. 15. Abraham Bookstein, “Probability and Fuzzy-Set Applications to Information Retrieval,” Annual Review of Information Science & Technology 20 (1985): 117–51; William S. Cooper, “Getting Beyond Boole,” Information Processing & Management 24, no. 3 (1988): 243–48; Ricardo Baeza-Yates and Berthier Ribeiro-Neto, Modern Information Retrieval (New York, N.Y.: ACM Press, 1999), 279; Heting Chu, Information Representation and Retrieval in the Digital Age, 2nd ed. (Medford, N.J.: Information Today, Inc, 2010), 71; G.G. Chowdhury, Introduction to Modern Information Retrieval, 3rd ed. (New York, N.Y.: Neal-Schuman Publishers, 2010), 206; Ray R. Larson, “Information Retrieval Systems,” in Encyclopedia of Library and Information Sciences, 3rd ed., ed. Marcia J. Bates (Boca Raton, Fla.: CRC Press, 2010), 2557; Birger Hjorland, “Classical Databases and Knowledge Organization: A Case for Boolean Retrieval and Human Decision-Making During Searches,” Journal of the Association for Information Science and Technology 66, no. 8 (2015): 1559–75. 16. Christine L. Borgman, Donald O. Case, and Charles T. Meadow, “Evaluation of a System to Provide Online Instruction and Assistance in the Use of Energy Databases: The DOE/OAK Project,” Proceedings of the 49th ASIS Annual Meeting 23 (1986): 32–38; Christine L. Borgman, Donald O. Case, and Charles T. Meadow, “The Design and Evaluation of a Front-End User Interface for Energy Researchers,” Journal of the American Society for Information Science 40, no. 2 (1989): 99–109. 17. Losee and Paris, “Measuring Search-Engine Quality and Query Difficulty,” as started in legal databases in the ‘90s. See, for example, Steve Arnold and Lawrence Rosen, “Bye, Boolean: The Boolean Is Dead, Long Live the Boolean! 533 Natural Language and Electronic Information Retrieval,” Searcher 1, no. 5 (Oct. 1993): 30; Maxine Hattery, “Hot Topics from Online ‘93: The Internet & Beyond and Natural Language,” Information Retrieval & Library Automation 29, no. 6 (1993): 1–3. 18. Losee and Paris, “Measuring Search-Engine Quality and Query Difficulty”; Lee Anne H. Paris and Helen R. Tibbo, “Freestyle vs. 
Boolean: A Comparison of Partial and Exact Match Retrieval Systems,” Information Processing & Management 34, no. 2 (1998): 175–90. 19. William R. Hersh and David H. Hickam, “An Evaluation of Interactive Boolean and Natural Language Searching with an Online Medical Textbook,” Journal of the American Society for Informa- tion Science 46, no. 7 (1995): 478–89. 20. Howard Turtle, “Natural Language vs. Boolean Query Evaluation: A Comparison of Retrieval Performance,” in Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (New York, N.Y.: Springer-Verlag, 1994): 212–20. 21. Nigel Ford, David Miller, and Nicola Moss, “Web Search Strategies and Retrieval Effective- ness: An Empirical Study,” Journal of Documentation 58, no. 1 (2002): 30–48. 22. Vanessa Murdock, Diane Kelly, W. Bruce Croft, Nicholas J. Belkin, and Xiaojun Yuan, “Iden- tifying and Improving Retrieval for Procedural Questions,” Information Processing & Management 43, no. 1 (2007): 181–203. 23. Dianne Cmor and Xin Li, “Beyond Boolean, Towards Thinking: Discovery Systems and Information Literacy,” Library Management 33, no. 8/9 (2012): 450–57. 24. Stefanic Buck and Margaret Mellinger, “The Impact of Serial Solutions’ Summon™ on Information Literacy Instruction: Librarian Perceptions,” Internet Reference Services Quarterly 16, no. 4 (2011): 159–81. 25. Dempsey and Valenti, “Student Use of Keywords and Limiters.” 26. Helen K. Burns and Susan M. Foley, “Building a Foundation for an Evidence-Based Ap- proach to Practice: Teaching Basic Concepts to Undergraduate Freshman Students,” Journal of Professional Nursing 21, no. 6 (Nov.–Dec. 2005): 351–57. 27. Barbara Quarton, “Research Skills and the New Undergraduate,” Journal of Instructional Psychology 30, no. 2 (2003): 120–24. 28. Meagan Lacy and Hsin-liang Chen, “Rethinking Library Instruction: Using Learning- Outcome Based Design to Teach Online Search Strategies,” Journal of Information Literacy 7, no. 2 (2013): 126–48, 136. 29. Eric Novotny and Ellysa Stern Cahoy, “If We Teach, Do They Learn? The Impact of Instruc- tion on Online Catalog Search Strategies,” portal: Libraries and the Academy 6, no. 2 (2006): 155–67. 30. Rita Vine, “Real People Don’t Do Boolean: How to Teach End Users to Find High-Quality Information on the Internet,” Information Outlook 5 (2001): 17–23. 31. Association of College and Research Libraries, “Information Literacy Competency Stan- dards.” 32. Association of College and Research Libraries, “Framework for Information Literacy.” 33. Colleen Burgess, “Teaching Students, Not Standards: The New ACRL Information Literacy Framework and Threshold Crossings for Instructors,” Partnership: The Canadian Journal of Library and Information Practice and Research 10, no. 1 (2015): 1–6; Trudi E. Jacobson and Craig Gibson, “First Thoughts on Implementing the Framework for Information Literacy,” Communications in Information Literacy 9, no. 2 (2015): 102–10. 34. Rachel E. Scott, “Part 1. If We Frame It, They Will Respond: Undergraduate Student Re- sponses to the Framework for Information Literacy for Higher Education,” Reference Librarian 58, no. 1 (2017): 1–18; Rachel E. Scott, “Part 2. If We Frame It, They Will Respond: Student Responses to the Framework for Information Literacy for Higher Education,” Reference Librarian 58, no. 1 (2017): 19–32. 35. Nancy M. Foasberg, “From Standards to Frameworks for IL: How the ACRL Framework Addresses Critiques of the Standards,” portal: Libraries and the Academy 15, no. 
4 (2015): 699–717. 36. While Google Scholar is a search engine, for the purposes of this study we refer to it as a database. 37. William H. Walters, “Google Scholar Search Performance: Comparative Recall and Preci- sion,” portal: Libraries and the Academy 9, no. 1 (2009): 5–24; William H. Walters, “Comparative Recall and Precision of Simple and Expert Searches in Google Scholar and Eight Other Databases,” portal: Libraries and the Academy 11, no. 4 (2011): 971–1006. 38. Simona Ştirbu, Paul Thirion, Serge Schmitz, Gentiane Haesbroeck, and Ninfa Greco, “The Utility of Google Scholar When Searching Geographical Literature: Comparison with Three Com- mercial Bibliographic Databases,” Journal of Academic Librarianship 41, no. 3 (2015): 322–29. 39. Leslie S. Adriaanse and Chris Rensleigh, “Comparing Web of Science, Scopus and Google Scholar from an Environmental Sciences Perspective,” South African Journal of Libraries & Information Science 77, no. 2 (2011): 169–78; Michal E. Anders and Dennis P. Evans, “Comparison of PubMed and Google Scholar Literature Searches,” Respiratory Care 55, no. 5 (2010): 578–83; Matthew E. 534 College & Research Libraries May 2018 Falagas, Eleni I. Pitsouni, George A. Malietzis, and Georgios Pappas, “Comparison of PubMed, Scopus, Web of Science, and Google Scholar: Strengths and Weaknesses,” FASEB Journal 22, no. 2 (2008): 338–42; Eva Nourbakhsh, Rebecca Nugent, Helen Wang, Cihan Cevik, and Kenneth Nugent, “Medical Literature Searches: A Comparison of PubMed and Google Scholar,” Health Information & Libraries Journal 29, no. 3 (2012): 214–22; Salimah Z. Shariff et al., “Retrieving Clinical Evidence: A Comparison of PubMed and Google Scholar for Quick Clinical Searches,” Journal of Medical Internet Research 15, no. 8 (2013): e164; Mary Shultz, “Comparing Test Searches in PubMed and Google Scholar,” Journal of the Medical Library Association: JMLA 95, no. 4 (2007): 442–53. 40. Asher, Duke, and Wilson, “Paths of Discovery”; Georgas, “Google vs. the Library (Part II).” 41. Georgas, “Google v the Library (Part II),” 524. 42. There was a significant effect of unfiltered versus filtered (Boolean and natural language) at the P < .05 level for the conditions [F(2,22) = 5.613, P = .011]. A follow-up independent-samples t-test was conducted to compare ProQuest filtered versus unfiltered Boolean and filtered versus unfiltered natural language. Both were significant at P < .05. 43. Note: due to differences in how results were recorded by interrater pairs, the overlap analysis was not able to be calculated for one reviewer for JSTOR, LexisNexis Academic, and ProQuest Central. Because content in some databases changes daily, it was impossible to recreate the searches. However, because the overlap analysis examined overlap between a single reviewer’s results, and not results between reviewers, it still provides evidence of the regularly changing database results. 44. Ibid. 45. Ibid. 46. Georgas, “Google v the Library (Part II)”; Asher, Duke, and Wilson, “Paths of Discovery.” 47. Georgas, “Google v the Library (Part II).” 48. Connaway, Dickey, and Radford, “‘If It Is Too Inconvenient’.” 49. North American Industry Classification System (NAICS). 50. For example, Cmor and Li, “Beyond Boolean.” 51. For example, Dempsey and Valenti, “Student Use of Keywords and Limiters.” 52. Cmor and Li, “Beyond Boolean”; Buck and Mellinger, “The Impact of Serial Solutions’ Summon.” 53. Scott, “Part 1. If We Frame It”; Scott, “Part 2. If We Frame It.” 54. Alison J. Head and Michael B. 
Eisenberg, “Truth Be Told: How College Students Evaluate and Use Information in the Digital Age,” Project Information Literacy Progress Report, University of Washington Information School, 2010. 55. Stanford History Education Group, “Evaluating Information: The Cornerstone of Civic Online Reasoning” (2016), available online at https://sheg.stanford.edu/upload/V3LessonPlans/Executive%20Summary%2011.21.16.pdf [accessed 25 April 2017].