Evidence Summary

 

Implementation of Proactive Chat Increases Number and Complexity of Reference Questions

 

A Review of:

Maloney, K., & Kemp, J. H. (2015). Changes in reference question complexity following the implementation of a proactive chat system: Implications for practice. College & Research Libraries, 76(7), 959-974. http://dx.doi.org/10.5860/crl.76.7.959

 

Reviewed by:

Sue F. Phelps

Health Sciences and Outreach Services Librarian

Washington State University Vancouver Library

Vancouver, Washington, United States of America

Email: asphelps@vancouver.wsu.edu

 

Received: 2 Mar. 2017    Accepted: 17 Apr. 2017

 

 

2017 Phelps. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike 4.0 International License (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.

 

Abstract

 

Objective – To determine whether the complexity of reference questions has changed over time; whether chat reference questions are more complex than those at the reference desk; and whether proactive chat increases the number and complexity of questions.

 

Design – Literature review and library data analysis.

 

Setting – Library of a doctoral degree-granting university in the United States of America.

 

Methods – The study was carried out in two parts. The first was a meta-analysis of published data with empirical findings about the complexity of questions received at library service points in relation to staffing levels. The authors used seven studies published between 1977 and 2012, drawn from their literature review, to create a matrix comparing reference questions based on the staffing level required to answer them (e.g., by a nonprofessional, a generalist, or a librarian). They present these articles in chronological order to illustrate how questions have changed over time, and they sorted questions by the service point at which they were asked, either through a chat service or at a reference desk.

 

In the second part of the study, the authors used the READ (Reference Effort Assessment Data) scale to categorize the complexity of questions asked at the reference desk and via proactive chat reference. They collected chat reference data for six one-week periods over the course of eight months to provide a representative sample, and they recorded reference desk questions for three of those same weeks. Both evaluators scored the data from a single week to norm their results, while the remaining data were coded independently.
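
The article does not detail the norming step beyond noting that both evaluators scored a shared week of data, so the short Python sketch below is only an illustration of one common way two coders might check their consistency on a shared sample before coding independently. The READ levels in it are hypothetical and are not the study's data.

# Illustrative sketch only: simple percent agreement between two coders
# on a shared week of questions. The READ levels below are hypothetical;
# the article does not specify how the authors normed their scoring.

coder_a = [2, 3, 3, 1, 4, 2, 3, 5, 2, 4]  # READ levels assigned by evaluator A
coder_b = [2, 3, 4, 1, 4, 2, 3, 5, 2, 3]  # READ levels assigned by evaluator B

matches = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = matches / len(coder_a) * 100

print(f"Percent agreement on the shared week: {percent_agreement:.0f}%")  # 80% here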

 

Main Results – The complexity of questions in the seven articles studied indicated change over time, shown in tables for desk and chat reference. One outlier, a study published in 1977 before reference tools and resources moved online, reported that 62% of questions asked could be answered by nonprofessionals, 32% by a trained generalist, and only 6% required a librarian. The six other studies were published after 2001, when most resources had moved online. Of the reference desk questions from these six, the authors found that 74-90% could be answered by a nonprofessional, 12-16% by a generalist, and 0-11% required a librarian. Once chat reference was added there was more variation reported between studies, with generalist questions at 30-47% of those reported and 10-23% requiring a librarian.

 

Though the underlying differences in the study designs do not allow for formal analysis, the seven studies indicate that more complex questions are asked via chat services than at the reference desk. Questions at each staffing level were grouped and averaged for comparison. The 1977 study shows nonprofessional questions at 62%, generalist questions at 32%, and librarian questions at 6%. Reference desk questions in the post-2001 articles indicated 81% nonprofessional, 13% generalist, and 5% librarian questions. Post-2001 chat questions were at 49% nonprofessional, 36% generalist, and 15% librarian.
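
To make the grouping-and-averaging step concrete, the short Python sketch below computes an unweighted mean for each staffing level across a set of studies. The per-study percentages are hypothetical placeholders chosen only to fall within the post-2001 reference desk ranges reported above; the article's own tables hold the actual figures, and the summary does not state whether the authors weighted their averages by question volume.

from statistics import mean

# Hypothetical per-study breakdowns (percent of questions per staffing level),
# chosen only to fall within the post-2001 reference desk ranges reported above.
desk_studies = [
    {"nonprofessional": 74, "generalist": 15, "librarian": 11},
    {"nonprofessional": 85, "generalist": 14, "librarian": 1},
    {"nonprofessional": 88, "generalist": 12, "librarian": 0},
]

# Group by staffing level and take an unweighted mean across the studies.
levels = ["nonprofessional", "generalist", "librarian"]
averages = {level: mean(study[level] for study in desk_studies) for level in levels}

for level, avg in averages.items():
    print(f"{level}: {avg:.1f}%")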

 

In the second part of the study, the data coded using the READ scale and collected from the proactive chat system showed an increase in both the number and the complexity of questions. The authors found that 4% of questions were rated at level 1 (e.g., directional questions, library hours), 30% at level 2 (e.g., known item searching), 39% at level 3 (e.g., reference questions), and 27% at level 4, which requires advanced expertise (e.g., using specialized databases or data sets). The authors combined questions at levels 5 and 6 due to low numbers and did not describe these when reporting their study. In comparison, 15% of reference desk questions were at level 3 on the READ scale, and 1% were at level 4.

 

Conclusion – Proactive chat reference service increased both the number and the complexity of questions over those received at the reference desk. The frequency of complex questions was too high for nonprofessional staff to simply refer them to librarians, prompting a reevaluation of the tiered service model. Further, this study demonstrates that users still have questions about research, but for users to access services for these questions, “reference service must be proactive, convenient, and expert to meet user expectations and research needs” (p. 972).

 

Commentary

 

The authors have made excellent use of the library literature to create a matrix for evaluating the complexity of questions at different service points, based on the expertise needed to answer them. Though much has already been published about online reference services, proactive chat reference, in which the chat software automatically invites users to chat based on their activity on library web pages rather than waiting for them to initiate contact, is only just appearing in the literature, so a more thorough explanation of the service would have been useful. For example, Zhang and Mayer’s (2014) description of proactive chat provides appropriate context.

 

When evaluated using Glynn’s (2006) critical appraisal tool, this study is valid, with scores above 75% in each section: data collection (83%), study design (80%), and results (83%). The overall validity score was 82%.

 

The study was conducted in two parts: the meta-analysis of the literature and an analysis of data collected at the authors’ library. The authors do not provide much detail on how they conducted the meta-analysis to address the first two research questions, though they do report their rationale for selecting the seven studies analyzed. They included the matrix created from those seven articles, with outcomes clearly described in tables, graphs, and narrative form.

 

Their data collection for the third research question is clearly described, making the study easier to replicate. To determine the complexity of reference questions, they conducted their analysis using the READ scale, a validated instrument, and they gathered data from chat transcripts and from the reference desk over representative periods. The two researchers coded the reference questions after going through a norming process. However, the chat transcripts may yield more objective data than the records made at the reference desk, where different librarians could interpret the level of a question differently.

 

A significant finding for academic librarians is that patrons still have complex research questions that they are willing to ask through a proactive chat service. This study gives librarians “the opportunity to once again provide individual reference service at the point of need” (p. 972). It also raises the practical issue of increased staffing to manage increased chat activity. Since the questions that arrive via proactive chat tend to be more complex, it is possible that more librarians, rather than nonprofessional staff, will be required, adding further strain to already tight library budgets.

 

References

 

Glynn, L. (2006). A critical appraisal tool for library and information research. Library Hi Tech, 24(3), 387-399. http://dx.doi.org/10.1108/07378830610692154

 

Zhang, J., & Mayer, N. (2014). Proactive chat reference. College & Research Libraries News, 75(4), 202-205.