Article

 

Iterative Chat Transcript Analysis: Making Meaning from Existing Data

 

Steven Baumgart

Head of Memorial Library Public Services

University of Wisconsin-Madison Libraries

Madison, Wisconsin, United States of America

Email: steven.baumgart@wisc.edu

 

Erin Carrillo

Information Services Librarian

University of Wisconsin-Madison Libraries

Madison, Wisconsin, United States of America

Email: erin.carrillo@wisc.edu 

 

Laura Schmidli

Information Services Librarian

University of Wisconsin-Madison Libraries

Madison, Wisconsin, United States of America

Email: laura.schmidli@wisc.edu

 

Received: 12 Feb. 2016   Accepted: 11 Mar. 2016 

 

 

© 2016 Baumgart, Carrillo, and Schmidli. This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.

 

 

Abstract

 

Objective – In order to better contextualize library data about patron satisfaction with reference services, we analyzed an existing corpus of chat transcripts. Having conducted a similar analysis in 2010, we also compared librarian behaviors over time.

 

Methods – Drawing from the library literature, we identified a set of librarian behaviors closely associated with patron satisfaction. These behaviors include listening to and understanding patrons’ needs, inviting patrons to use the service again, and providing instruction or completing a search for patrons. Analysis of the chat transcripts included establishing a coding schema, applying these codes to individual chat transcripts, and analyzing these codes across the corpus of transcripts for frequency and correlation with other codes. The present analysis used chat transcripts from fall 2013 and sought changes in librarian behavior over time in order to gauge the success of establishing best practices and standardizing training over the intervening three years.

 

Results – The analysis shows that librarian behaviors have changed over time, pointing to what campus librarians are doing well, and suggests that the implementation of best practices at the campus level after the 2010 analysis may have increased these positive behaviors. The analysis also shows opportunities for further standardization and reinforcement of best practices.

 

Conclusion – Qualitative analysis of already-collected data serves as a model for other units and suggests areas for process improvement, including enhanced coder training and code schema design. Further analysis of chat patrons’ questions is also warranted, including investigation of the relationship between subject- and location-specific questions and referrals.

 


 

Introduction

 

Twice each year, University of Wisconsin-Madison campus libraries participate in a public service data gathering week, during which each library is encouraged to record all public service interactions. These sweeps weeks occur during the tenth week of the fall semester and the seventh week of the spring semester. They generate a corpus of chat interactions that are recorded and retained. In 2010, the Library’s Reference Assessment Working Group decided to analyze this data set to assess the quality of our campus reference service.

 

The Reference Assessment Working Group is composed of three to six librarians from different libraries on campus and is charged with coordinating each sweeps week and reporting about this data twice per year. This group decided to analyze chat transcripts in order to better contextualize and add qualitative data to this report. For the analysis, the group used chat transcripts from the general campus queue, which is the main point of entry into chat for UW-Madison users. The main goal of this analysis was to discover patterns of librarian and patron behavior, particularly as our chat reference service had become increasingly busy over the previous years.

 

While this first analysis using 2010 chat transcripts included 28 codes, indicating a variety of behaviors, the main focus was to identify and measure librarian behaviors associated with patron satisfaction as identified at the University of South Florida (Kwon & Gregory, 2007). This focus was retained even as the coding schema was simplified for the 2013 iteration.

 

Methods

 

Text transcripts of chat interactions from the general campus library chat queue that occurred in the tenth week of the fall semester between November 4 and 10, 2013, were used in this analysis. A similar analysis was conducted in 2010 that also used general queue chat transcripts from the same week of the fall semester, from November 7 through November 13, 2010 (Reference Assessment Working Group, 2010).

 

Preparing transcripts for analysis involved downloading them from our chat software, converting them to text files, and stripping them of any identifying information. The transcripts were then individually imported into RQDA, an open-source qualitative data analysis package for R, which was pre-loaded with all codes to be used in the analysis.
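As an illustration of the anonymization step, here is a minimal sketch in base R; the directory names and redaction patterns are hypothetical, and real transcripts would likely require additional patterns (e.g., for patron names):

# Minimal sketch of stripping identifying information from transcript
# text files (hypothetical paths and patterns).
raw_files <- list.files("transcripts_raw", pattern = "\\.txt$",
                        full.names = TRUE)
for (f in raw_files) {
  txt <- readLines(f, warn = FALSE)
  # Redact email addresses and phone numbers before import and coding.
  txt <- gsub("[[:alnum:]._%+-]+@[[:alnum:].-]+", "[email omitted]", txt)
  txt <- gsub("\\b\\d{3}[-.]\\d{4}\\b", "[phone omitted]", txt)
  writeLines(txt, file.path("transcripts_clean", basename(f)))
}

The cleaned files can then be imported into the RQDA project for coding.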

 

The analysis was conducted by four graduate students in the School of Library and Information Studies who worked at three different campus library locations. Before applying codes, these students participated in a one-hour group training and calibration session with the three librarians leading the analysis. Student coders also had access to a screencast tutorial and were oriented to the software and process at their individual libraries.

 

In order to establish inter-rater reliability scores for each code, one of the principal investigators separately coded 10% of the transcripts, which were compared to those coded by students. Cohen’s kappa (Landis & Koch, 1977; Banerjee et al., 1999) was used to establish levels of reliability at the file and code levels in both the 2010 and 2013 analyses. The file-level Cohen’s kappa values ignore the frequency of codes and view a transcript as either tagged or not tagged with a specific code. The code-level Cohen’s kappa values take the frequency of codes into account, but they were not used in this analysis.
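For a binary, file-level comparison, Cohen’s kappa can be computed directly. The following is a minimal sketch in base R, with hypothetical vectors recording whether the principal investigator and a student coder applied a given code to each transcript in the 10% comparison sample:

# File-level Cohen's kappa for one code: each element records whether a
# coder tagged a transcript with that code (hypothetical data).
cohens_kappa <- function(r1, r2) {
  po <- mean(r1 == r2)                  # observed proportion of agreement
  pe <- mean(r1) * mean(r2) +           # agreement expected by chance
        mean(!r1) * mean(!r2)
  (po - pe) / (1 - pe)
}
pi_coded      <- c(TRUE, TRUE, FALSE, TRUE, FALSE, TRUE, TRUE, FALSE)
student_coded <- c(TRUE, FALSE, FALSE, TRUE, FALSE, TRUE, TRUE, TRUE)
cohens_kappa(pi_coded, student_coded)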

 

As in our previous analysis, we used common thresholds for Cohen’s kappa to interpret the magnitude of agreement, establishing a four-part scale: poor (Cohen’s kappa < 0.40), moderate (between 0.41 and 0.60), good (between 0.61 and 0.80), and very good (> 0.80). These values are represented in Figure 1 using orange dots, where dots higher on the y-axis represent a higher level of agreement.
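This four-part scale is straightforward to apply programmatically; a small sketch follows (the treatment of values falling exactly on a boundary is our assumption, since the scale as stated leaves a gap between 0.40 and 0.41):

# Map kappa values onto the four-part scale used in both analyses.
kappa_label <- function(k) {
  cut(k, breaks = c(-Inf, 0.40, 0.60, 0.80, Inf),
      labels = c("poor", "moderate", "good", "very good"))
}
kappa_label(c(0.35, 0.55, 0.72, 0.91))  # poor, moderate, good, very good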

 

Codes applied in the analysis were based upon those used in our previous 2010 analysis. Twenty-eight codes were used in the previous iteration, a schema that proved overly complicated, as indicated by relatively low levels of inter-rater reliability. For the 2013 analysis, the principal investigators decided to simplify the coding schema. First, codes that correlated strongly with user satisfaction, according to both the RUSA guidelines (Reference and User Services Association, 2004) and the Kwon and Gregory study (2007), were retained. Remaining codes with the lowest levels of reliability in the 2010 analysis were then examined, and either their scope notes were improved or the codes were combined into larger, simplified categories. Finally, codes that were no longer relevant were eliminated. This process resulted in 14 codes that were applied to our 2013 chat transcripts. The codes are outlined in detail in Appendix A.

 

 

Figure 1

All codes by percent occurrence for 2013.

 

 

Results

 

In total, 403 chat transcripts were analyzed, a sample size that provides a confidence level of 95%. Fourteen codes were applied to these transcripts in the 2013 iteration. All codes are shown in Figure 1.

 

Codes were organized into four categories, based on their inter-rater reliability scores: very good, good, moderate, and poor. Codes classified within the poor category, with Cohen’s kappa scoring of less than 0.40, were not considered usable in this study.

 

Codes with very good reliability, shown in Figure 2, indicated that librarians greeted the patron (greeting), gave their name (name_librarian), gave the name of their library (name_library), and asked patrons to use the chat service again (comeback_again). The code that identified problem transcripts also had very good reliability between coders. This included transcripts that indicated technical difficulties, were incomplete, or included inappropriate patron behavior.

 

Codes with good reliability, shown in Figure 3, indicated that librarians listened to patrons, asked clarifying questions and generally checked to make sure they understood the patron question (listening_and_questioning), and referred the patron to a different service point (referral_services). The code initial_question also had good reliability. This code was used to mark the patron’s initiating question or problem that prompted the chat interaction.

 

 

Figure 2

Percent occurrence of codes with very good inter-rater reliability in 2013.

 

Figure 3

Percent occurrence of codes with good inter-rater reliability in 2013.

 

 

Codes with moderate reliability, seen in Figure 4, indicated that librarians provided instruction to patrons on how to complete a task (instruction) and searched for patrons (searching_for_patron). The code library_specific also had moderate reliability and was used to mark patron questions requiring specific knowledge from a subject specialist or specific library.

 

 

Figure 4

Percent occurrence of codes with moderate inter-rater reliability in 2013.

 

 

Codes with poor reliability cannot be used to draw meaningful conclusions and are shown in Figure 5. These codes were applied inconsistently between coders and include those that designate that librarians checked on a patron’s progress or acknowledged their own progress toward answering a question (maintain_contact) or referred patrons to another mode of reference service, such as email or in-person services (referral_mode). The code explicit_compliment also had poor reliability, though this is of less concern as it was primarily intended to flag patron comments to be used in marketing.

 

Figure 5

Percent occurrence of codes with poor inter-rater reliability in 2013.

 

 

Codes that are highly correlated with user satisfaction and have acceptable levels of reliability were also separated out and are shown in Figure 6. These include codes that indicate that librarians listened to patrons, asked clarifying questions and generally checked to make sure they understood the patrons’ question (listening_and_questioning), asked patrons to use the chat service again (comeback_again), provided instruction on how to complete a task (instruction), and searched for patrons (searching_for_patron).

 

 

Figure 6

Percent occurrence of codes with acceptable inter-rater reliability that influence user satisfaction in 2013.

 

 

Finally, only one code associated with user satisfaction, maintain_contact, had poor reliability and could not be included in this analysis. This code indicates that a librarian checked on a patron’s progress or acknowledged their own progress toward answering a question. This code will need to be improved in order to be used in future analyses.

 

Discussion

 

The purpose of this analysis was to build upon the previous analysis, examining how the 2010 analysis and accompanying report may have changed librarian behaviors. We are specifically interested in charting those behaviors over time that correlate with user satisfaction, examining how often subject-specific questions occur over chat, and discovering how often chat questions are referred to other service points and modes of contact. Our focus in identifying these behaviors is to improve training and update best practices, as needed, to ensure user satisfaction in the future. Finally, we also had an interest in improving our coding process in terms of efficiency and inter-rater reliability, possibly serving as an example to other groups on campus interested in qualitative analysis.

 

Codes Eliminated for the 2013 Analysis

 

In 2010, we analyzed both how often patrons gave their names and how often librarians used patrons’ names. We chose not to track this in the current analysis, as the behavior is relatively rare and not correlated with increased user satisfaction.

 

The prior analysis also coded transcripts that contained questions of a general nature that could be answered by a majority of librarians, in order to identify questions that were appropriate for our general chat queue. In 2010, over 83% of transcripts received this tag. For the 2013 analysis, we decided it was more important to mark only questions that, conversely, required specific subject-area knowledge or knowledge specific to a library location. Our main interest lay in charting how often these questions requiring specialized knowledge occur and how often they are referred from our main service point. In the 2013 iteration, this was indicated by the code library_specific.

 

The 2010 analysis also recorded transcripts in which the librarian was polite or encouraging, the librarian ended the chat with a closing other than inviting the patron to chat again, the patron thanked the librarian, and the patron was dissatisfied or the patron’s question was unanswered. These four codes all had relatively low inter-rater reliability in the 2010 analysis and were not correlated with patron satisfaction. All four were eliminated from the 2013 iteration.

 

Codes Added for the 2013 Analysis

 

Only one entirely new code was added for this analysis. The code initial_question was added to the schema in order to mark patrons’ initial questions or the problems that prompted them to contact the chat reference service. We anticipate doing further analysis on these initial questions separately to identify common problems and questions, or pain points. Knowledge of the specific issues for which patrons contact us may help to improve services in other areas, for example, improving instructions available on our website.

 

Analysis of Code Frequency

 

For each code applied to the transcripts, we calculated inter-rater reliability scores as well as the frequency with which it was applied. Within the subset of codes with acceptable levels of reliability (Cohen’s kappa > 0.40), five codes were applied to more than half of the transcripts, as seen in Figure 7. These represent the five most common desirable behaviors exhibited by librarians via chat. Librarians greeted patrons in 87% of interactions, searched on behalf of patrons in 72% of interactions, engaged in listening and questioning behaviors in 64% of interactions, and stated their name and their library’s name in 59% of interactions. Three out of five of these behaviors occurred more often than in our previous analysis. The remaining two codes are unfortunately not directly comparable with the 2010 results, as each consolidates several codes used in the 2010 analysis. The 2013 code listening_and_questioning combined the 2010 codes check_on_success, open_ended_questions, rephrasing, and clarifying_or_closed_questions. The 2013 code searching_for_patron combined the 2010 code of the same name with url_other. For a full list of codes used and a comparison to codes used in 2010, see Appendix A.
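As a sketch of how such frequencies can be pulled from RQDA, assuming an open project and RQDA’s getCodingTable() (the project file name is hypothetical, and the column names reflect our reading of the package’s documentation, so treat them as assumptions):

library(RQDA)
openProject("sweeps2013.rqda")      # hypothetical project file
ct <- getCodingTable()              # one row per coding instance
# Count each code at most once per transcript, then convert to percent.
per_file <- unique(ct[, c("codename", "filename")])
n_files  <- length(unique(ct$filename))
round(100 * sort(table(per_file$codename), decreasing = TRUE) / n_files)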

 

Most notable in these commonly applied codes from 2013 is that librarians identified both themselves and their library far more frequently than in 2010. This is also one of the four librarian behaviors highly correlated with patron satisfaction. The increase suggests that the emphasis placed on this best practice through training and documentation after the 2010 analysis has had a positive impact on librarian behaviors. However, as best practices, these behaviors should ideally occur in more than 59% of interactions. There is still room for improvement.

 

 

Figure 7

Percent occurrence of codes applied to more than half of transcripts in 2013.

 

 

The code instruction in the 2013 analysis combined two codes from the 2010 analysis: instruction and url_jing. As Jing is inherently instructional in nature, these two codes were combined for the 2013 analysis. Similarly, librarians’ use of non-Jing URLs was no longer explicitly tracked, but it often occurred in conjunction with searching for a patron (coded searching_for_patron), which includes librarians providing information directly to patrons. The latter still happened in a majority (72%) of interactions. In contrast, instruction occurred within 36% of interactions. Similar to the 2010 results, this indicates that librarians are still more likely to provide patrons with information directly over chat rather than teaching patrons how to obtain that information, which is likely a result of the chat medium. This relationship can be seen in Figure 8, which shows the breakdown between these two codes. Though a significant number of interactions were coded with both codes (35%), an additional 42% of interactions were coded with searching_for_patron and not with instruction. While both of these librarian behaviors are correlated with user satisfaction, they represent different philosophies of reference service. This may be an area for future analysis.
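The breakdown between the two codes can be computed from a per-transcript incidence table; a minimal sketch with hypothetical data:

# Logical incidence table: one row per transcript, one column per code
# (in practice built from the coding table; values here are hypothetical).
inc <- data.frame(
  searching_for_patron = c(TRUE,  TRUE,  TRUE,  FALSE, TRUE),
  instruction          = c(TRUE,  FALSE, FALSE, TRUE,  TRUE)
)
both        <- mean(inc$searching_for_patron & inc$instruction)
search_only <- mean(inc$searching_for_patron & !inc$instruction)
round(100 * c(both = both, searching_only = search_only))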

 

In 2013, there was an increase of over 8 percentage points in librarians encouraging patrons to use the service again, denoted by the code comeback_again, as seen in Figure 9. However, this code was present in only 21% of all transcripts. As this librarian behavior is highly correlated with patron satisfaction, there is room for improvement. While there are specific situations where this is difficult, for example if a patron leaves the conversation abruptly, in many chat interactions it can be added to librarians’ typical chat closing.

 

 

Figure 8

Breakdown of percent occurrence of instruction and searching_for_patron in 2013.

 

Figure 9

Percent occurrence of comeback_again in 2010 and 2013.

 

 

Finally, in the 2013 analysis as compared to 2010, approximately the same percentage of referrals to other service points was recorded, while approximately 5% fewer transcripts were coded as best answered by specific libraries or subject specialists. This data is shown in Figure 10. In other words, there was a decrease of over 5% in questions marked as library_specific and no decrease in referrals. The decrease in library-specific chats may be related to the establishment of additional subject-specific chat queues between the 2010 and 2013 analyses. The fact that referrals remained constant despite a decrease in library-specific questions may indicate an increase in collaborative work among librarians at different libraries. Refining our coding in the future may be needed to more accurately analyze these behaviors.

 

As seen in Figure 11, only 15 transcripts were coded as library_specific (3%), with only five (1%) of those also coded as referral_services. Though these numbers are relatively small, they raise the question of how many subject- or library-specific questions are being referred appropriately. We reviewed these individual transcripts a second time to look for situations where a referral was appropriate but not made. In almost all cases, the specific question was adequately answered by the librarian on chat and thus not referred. In a few cases, the chat was incorrectly tagged. While we did not uncover missed opportunities for referrals, we did find some ways to refine our coding schema in the future. Namely, we need to explicitly determine appropriate coding for the following situations: a patron asks for a librarian by name, a librarian refers a patron to an entity outside of campus, and a librarian is testing or demonstrating chat services.
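Pulling the transcripts coded with both codes for this kind of re-review is simple once the coding table is in hand; a sketch, reusing the hypothetical coding table from the frequency example above:

# List transcripts tagged with both codes for manual re-review
# (assumes ct from getCodingTable(), as sketched earlier).
tagged <- function(code) unique(ct$filename[ct$codename == code])
intersect(tagged("library_specific"), tagged("referral_services"))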

 

Finally, it is important to recognize that while our total sample provides a confidence level of 95%, both of these codes have only moderate inter-rater reliability. By improving our coding definitions, we hope to improve the reliability of these codes in future analyses.
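For context, a sample of this size implies a margin of error of roughly five percentage points at the 95% confidence level, under the usual simple-random-sampling assumptions (worst case p = 0.5); a quick sketch:

# Margin of error for a proportion at 95% confidence (z = 1.96).
moe <- function(n, p = 0.5, z = 1.96) z * sqrt(p * (1 - p) / n)
round(100 * moe(403), 1)  # about 4.9 percentage points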

 

 

Figure 10

Percent occurrence of referral and subject-specific codes in 2010 and 2013.

 

Figure 11

Breakdown of percent occurrence of referral and subject-specific codes in 2013.

 

 

Analysis of Inter-Rater Reliability

 

Overall, inter-rater reliability for the 2013 analysis improved from 2010. Five out of 14 codes (36%) exhibited very good agreement, three out of 14 (21%) exhibited good agreement, and three out of 14 (21%) exhibited moderate agreement. In total, 11 out of 14 codes (79%) in the 2013 analysis were of moderate, good, or very good reliability, compared with only 75% of codes in the 2010 analysis, as seen in Figure 12.

 

 

Figure 12

Inter-rater reliability seen as Cohen's kappa values over time.

 

 

Only one code that correlated with user satisfaction had poor agreement and was unusable. Two additional codes exhibited poor agreement but are not correlated with user satisfaction and thus are not considered critical codes. This code comparison can be seen in Figure 13.

 

We attribute the overall improvement in inter-rater reliability of the 2013 codes to several factors. First, we conducted more comprehensive training and held a group session with all student coders in order to ensure everyone understood and was able to apply our codes. This session resulted in some minor adjustment of coding scope notes in order to make them more sensible for students to apply. We also used fewer individual student coders for the 2013 analysis, and we chose graduate students from the SLIS program in paid library positions with the rationale that these students would have an improved work ethic and commitment to the analysis. Finally, we drastically simplified the codes used by combining, simplifying, and eliminating codes from the 2010 analysis.

 

However, even with these improvements, three codes out of 14 had low levels of reliability. The code explicit_compliment is intended to mark patron comments that may be useful in future marketing or promotional materials, so reliability is not critical for this code.

 

The code referral_mode is not correlated with patron satisfaction, but it is important for knowing how frequently our librarians refer patrons to alternate forms of communication with a librarian (e.g., phone, email, and in person). During our one-hour training session, our coding group discussed this code and decided that it should be used both to identify situations where chat does not suffice to answer a question and in cases where supplementary material is provided through another mode (e.g., when an article is delivered via email in conjunction with chat instruction). One coder noted that “most librarians were able to use Jing or guided instruction, giving the students lots of time and patience even when questions were more challenging. I felt that this reflected that librarians are more comfortable with online interfaces and are able to give quality reference via chat.” This is a positive observation, but further analysis should be done to determine why the inter-rater reliability is so low for this code.

 

 

Figure 13

Inter-rater reliability of directly comparable codes in 2010 and 2013.

 

 

The code maintain_contact is correlated with patron satisfaction yet exhibited poor levels of inter-rater reliability. One factor noted by coders that made this code difficult to apply is that timestamps were not included in the chat transcripts, which reduced the context coders had when deciding whether to apply this code. One possible solution would be to include timestamps in future analyses. Another is to separate the two parts of this code, using one code to indicate when librarians check on patron progress and a second code to indicate when librarians update patrons on their own progress. However, this code was also problematic in our 2010 analysis, when it solely indicated that librarians had updated patrons on their own progress.

 

These latter two codes, referral_mode and maintain_contact, should be improved upon in the future. We intend to work further with student coders to re-examine our scope notes and training examples.

 

Finally, we intend to have a principal investigator code a larger portion of the transcripts in future analyses in order to more accurately gauge inter-rater reliability. With our small sample size, we found that reliability was easily skewed by our current practice of coding only 10% of transcripts for comparison.

 

Conclusions

 

The 2013 analysis again focused on evaluating the frequency of best practices in providing chat reference services. Librarian behaviors have improved, likely in response to improved training and awareness as a result of the 2010 analysis. However, there is still room for improvement, specifically regarding librarians providing their name and the name of their library, providing instruction in conjunction with searching for patrons, and inviting patrons to come back to use the service again.

 

Additionally, the investigators have improved upon the analysis process and have identified further areas for improvement, including coder training and coding schema design. The methods outlined in this report may serve as an example to other units interested in conducting qualitative analysis in the future.

 

Finally, we plan to conduct a further analysis in the future based on the initial_question code, as outlined in the discussion section of this report. This will identify difficulties that most commonly prompt patrons to initiate chat interactions. We also plan to further investigate the correlation between codes related to subject- and library-specific questions and referrals.

 

References

 

Banerjee, M., Capozzoli, M., McSweeney, L., & Sinha, D. (1999). Beyond kappa: A review of interrater agreement measures. The Canadian Journal of Statistics/La Revue Canadienne de Statistique, 27(1), 3–23. http://doi.org/10.2307/3315487

 

Kwon, N., & Gregory, V. L. (2007). The effects of librarians’ behavioral performance on user satisfaction in chat reference services. Reference & User Services Quarterly, 47(2), 137–148. Retrieved from http://www.jstor.org/stable/20864841

 

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. http://doi.org/10.2307/2529310

 

Reference Assessment Working Group. (2010). Fall 2010 qualitative analysis of chat transcripts. Unpublished manuscript, University of Wisconsin, Madison.

 

Reference and User Services Association. (2004). Guidelines for behavioral performance of reference and information service providers. Retrieved from http://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral

 

 

Appendix A

Coding Schema

Each entry below gives the code name, its scope and use (with examples), any changes from the 2010 schema, and related notes or comments.

comeback_again

Scope: librarian

Use: Times when the librarian invites the patron to return.

Example:
If you have any further questions, please let us know.

Notes: Correlates with RUSA guideline 5, Follow-up. Influences patron satisfaction (Kwon & Gregory, 2007).

explicit_compliment

Scope: patron

Use: When the patron provides a compliment on the service after they have received a response from the librarian. This goes beyond the normal politeness that may occur during transactions.

Examples:
You rock!
Great service!

Changes from 2010: Changed name (was compliment).

Notes: Tracked for marketing.

 

 

 

greeting

Scope: librarian

Use: When a librarian greets the guest at the start of a chat interaction.

Examples:
Hi
Hello
How can I help?

Notes: Correlates with RUSA guideline 1, Approachability.

 

 

 

initial_question

Scope: patron

Use: Mark the patron’s initial question that prompted the chat interaction.

Examples:
I’m having trouble finding this journal article
What are your hours today?

Changes from 2010: New for 2013.

Notes: For later coding, looking for pain points.

 

 

 

instruction

Scope: librarian

Use: When the librarian gives the guest information on how to do a task. If more than one direction is given in sequence, highlight the entire sequence and count it as a single instance, even if the sequence spans more than one line or response. This includes when the librarian supplies a video or screenshot for a guest, or indicates that they are walking the guest through the steps of searching while simultaneously searching with the guest; in that case, also use searching_for_patron.

Examples:
Librarian: Click on the FindIt button.

Librarian: I'm going to the database tab and search for Academic Search.

Guest: Let me go where you are.
Librarian: Once you are there click on the database name and then search for: clowns and noses
Guest: Great I'm there.
Librarian: Do you see the 3rd article down
Guest: Yes!

Changes from 2010: Combines instruction, searching_with_patron, and url_jing.

Notes: Correlates with RUSA guideline 4, Searching. Influences patron satisfaction (Kwon & Gregory, 2007).

library_specific

Scope: patron

Use: When a question asked by a patron requires specific knowledge likely better answered by a subject specialist or a specific library. These will be highly technical questions or involve specialized literature types or software (e.g., laboratory protocols, patents, standards).

Examples:
Do you have ASCME standard 1234?
Someone is making too much noise on the second floor of Steenbock.
I have to find articles related to marketing data for these new widgets.

 

 

 

listening_and_questioning

Scope: librarian

Use: Librarian checks on whether they have sufficiently helped the patron, asks clarifying questions, or rephrases the question or request and asks for confirmation to ensure that it is understood.

Examples:
Did this answer your question?
What type of information do you need (books, articles, etc.)?
So you are looking for articles on the gestation period of Tibetan yaks?

Changes from 2010: Combines check_on_success, clarifying_or_closed_question, open_ended_questions, and rephrasing.

Notes: Correlates with RUSA guideline 3, Listening/inquiring. Influences patron satisfaction (Kwon & Gregory, 2007).

maintain_contact

Scope: librarian

Use: When the librarian leaves for a time and then returns, acknowledging that they are working on the question or are back, or when the librarian indicates to the guest that they are still working on or thinking about the question. This may also be used when the librarian checks on the patron’s progress. This differs from listening_and_questioning, which is used when the librarian is trying to clarify the patron’s needs.

Examples:
I'm back.
I'm still working on it.
I'll be back in a second.
How are you doing?

Changes from 2010: Combines focus_on_patron and maintain_contact.

 

name_librarian

Scope: librarian

Use: When the librarian gives their name. Usually this will be indicated in the chat as [name omitted].

Example:
Hi this is [name omitted] at [library omitted] library

 

 

 

name_library

Scope: librarian

Use: When the librarian gives the name of their library. Usually this will be indicated in the chat as [library omitted].

Example:
Hi this is [name omitted] at [library omitted] library

 

 

 

problem

Scope: applies to the entire transaction

Use: If the transaction ended abruptly, indicating technical difficulties. Tag the last word in the document.

OR

Scope: patron

Use: When the patron asks an inappropriate question or makes a crude or rude remark.

Example:
Will you go out with me later?

Changes from 2010: Combines abrupt and inappropriate.

 

referral_mode

Scope: librarian

Use: When the librarian refers the patron to another mode of communication in order to better serve them.

Examples:
I think that you should come into the library where we can better serve you.
It would be better if you call us at xxx-xxxx.
I can reply by email more easily.

Notes: Correlates with RUSA guideline 5, Follow-up.

 

 

referral_services

Scope: librarian

Use: When the librarian refers the guest to another service point in order to better serve them. Don’t use if the patron directly asks about a particular library; in that case, use searching_for_patron.

Examples:
I think that you will do better if you contact the Business library directly.
Wendt Library will be able to better help. Here is their contact information.
Please call the Circulation Office at XXX-XXXX.
ILL is on chat, I will transfer you to them now.

Notes: Correlates with RUSA guideline 5, Follow-up.

searching_for_patron

Scope: librarian

Use: Librarian gives the answer to the patron or indicates they are searching for them. This may be used in conjunction with instruction if instruction is given before or afterwards. Also use with instruction if the patron indicates they are following along.

Examples:
Hang on. Let me check on that.
I found this: http://someurl.com.

Changes from 2010: Combines searching_for_patron and url_other.

Notes: Correlates with RUSA guideline 4, Searching. Influences patron satisfaction (Kwon & Gregory, 2007).