Conference Paper
LibQUAL+® and the Information Commons Initiative at
Buffalo State College: 2003 to 2009
Eugene J. Harvey
Assessment Librarian
SUNY Buffalo State
Buffalo, New York, United
States of America
Email: harveyej@buffalostate.edu
Maureen Lindstrom
Associate Director for
Information Commons
SUNY Buffalo State
Buffalo, New York, United
States of America
Email: lindstma@buffalostate.edu
© 2013 Harvey and Lindstrom. This is an
Open Access article distributed under the terms of the Creative Commons‐Attribution‐Noncommercial‐Share Alike License 2.5 Canada (http://creativecommons.org/licenses/by-nc-sa/2.5/ca/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
Abstract
Objective – To examine the effect of a transition to an
information commons model of service organization on perceptions of library
service quality. In 2003, the E. H. Butler Library at Buffalo State College
began development of an Information Commons, which included moving the
computing help desk to the library, reorganizing the physical units in the
library around functional service areas, and moving the reference desk to the
lobby.
Methods – In 2003, 2006, and 2009, the library administered the LibQUAL+® survey, which
measures the relationship between perceived library service delivery and
library user satisfaction. The 2003 survey was conducted before the
implementation of the Information Commons Initiative. Analyses of variance were
conducted to compare the effect of the service changes on users’ perceptions of
library service quality between the three data collection points, as well as to
explore differences between undergraduate and graduate students.
Results – The analyses revealed significant differences between the three data points, with
significantly more positive perceptions of library service quality in 2006 and
2009 than in 2003. Comparisons between 2006 and 2009 were not statistically
significant. In 2003, no significant differences were found between
undergraduate and graduate students’ perceptions. However, in 2006, undergraduate
students perceived higher levels of service quality after the development of
the Information Commons than graduate students. This difference was maintained
in 2009.
Conclusion – The Information Commons has become a popular place for
new programming, exhibits, workshops, and cultural events on campus. The
library staff and administration have regained the respect of the campus
community, as well as an appreciation for user-driven input and feedback and
for ongoing assessment and evaluation.
Introduction
Across numerous types of service businesses and organizations, libraries among
them, the evaluation and measurement of service methodologies and outcomes have
become a common, multifaceted necessity. The era of accountability has arrived,
and libraries are no longer viewed simply as separate entities providing
“inputs” into larger systems. Rather, libraries are integral parts of those
larger systems and must be defined and evaluated accordingly, including their
processes, outputs, and outcomes in relation to the systemic structures that
contain them.
Library measurement
and evaluation evolved significantly throughout the 20th century and
especially into the 21st century. Several key contributors, as individuals
and as members of larger library associations, enriched the field of library
measurement and evaluation, and their contributions will be discussed briefly
to provide a chronological context to undergird a portion of the literature
review, particularly as it relates to the selection of the LibQUAL+® survey
instrument. More importantly, though, their contributions led to the
recognition and acceptance of the need for library evaluation, which helped
spur attempts to strengthen library evaluation research. One such attempt
stemmed from an initiative from the Association of Research Libraries (ARL): a
pilot project designed to examine and assess service quality among academic and
research libraries. This project led to the development of LibQUAL+®, a
psychometric survey instrument designed to measure the relationship between
perceived library service delivery and library user satisfaction. Successive
attempts to strengthen and expand the research base in this field continue
today.
Throughout the past
six years, LibQUAL+® played a special evaluative role at Butler Library at
Buffalo State College. In 2003, the E. H. Butler Library engaged in an
extensive physical and virtual reorganization of service provision and
delivery. Specifically, the library initiated the development of and transition
to an Information Commons model of service organization. Prior to this transition, however, Butler
Library collected LibQUAL+® data from its user groups for two primary reasons:
1) to establish a baseline (i.e., pre-test) for measurement of changes to
users’ perception of library service quality over time, and 2) to receive
concrete feedback from its constituencies to help guide the direction of
development of the Information Commons. After completion of the Information
Commons, LibQUAL+® surveys were administered again in 2006 and 2009 for
purposes of benchmarking against peers, self-benchmarking, and post-testing
user perceptions of service changes.
This paper will
present the evaluative, practical findings related to Butler Library’s journey
of developing an Information Commons. A literature review will be presented,
which will cover: 1) a brief acknowledgement of key contributors to the field
of library evaluation research, and 2) an overview of LibQUAL+®. Further
literature about the Information Commons model will be touched upon in the
methodology section of this paper. The purposes of this research are simple: 1)
to provide other academic libraries with documentation of our successes and
challenges in developing an Information Commons; 2) to illustrate changes in
users’ perceptions of library services between 2003, 2006, and 2009; and 3) to
contribute to the bodies of practice-based library research and service
evaluation, particularly in relation to Information Commons case studies and
LibQUAL+® research.
Literature Review
Library Evaluation
Most fields
respectfully acknowledge the early works of their key contributors, and the
field of library evaluation should be no exception. Three prominent individuals
wove a common thread in this field throughout the past century: James Thayer
Gerould, a library administrator; F. Wilfrid Lancaster, a library educator, and
Duane Webster, a library association executive (Kyrillidou & Cook, 2008).
The efforts and contributions of these individuals highlight the evolution of
library evaluation practices, and each brought different perspectives into the
assessment and measurement of library services. Their endeavors serve as the
foundation for how future research would supplement their practices and
findings and further improve upon library service evaluation models and
methodologies.
SERVQUAL: The Origins
of LibQUAL+®
ARL reports of descriptive statistics fill a critical need in
evaluative library research, even today. Decades of statistics pinpoint
practices of collection investment, (in)stability of library funding, and
declines and improvements in resource allocation. Trends in these areas can be
monitored, and initiatives can be instituted when deemed important or necessary
to the ARL membership. However, these trends and practices make an assumption
that has yet to be proven empirically: the relationship between expenditures
and service quality (Cook, Heath, Thompson, & Thompson, 2001) - “A measure
of library quality based solely on collections has become obsolete” (Nitecki,
1996, p. 181).
Recognizing the lack of instruments that directly measure service
quality from the user point of view, ARL approved a membership-centered pilot
project in 1999 to respond to college and university administration demands
nationwide for accountability (Cook, Thompson, Heath, & Thompson, 2001).
Part of ARL’s New Measures Program, this project represented a paradigm shift
away from descriptive, collection-input driven measures toward service
evaluation, user satisfaction, and formalized, standardized measurement
initiatives grounded in scientific methodology. These efforts promoted the need
to rely less on the ARL Index (ARL Statistics) as the primary, most
important assessment tool; rather, this project represented a collective,
collaborative effort of many ARL-member libraries and librarians to adopt a new
way of conceptualizing and conducting library evaluation.
To begin the collaborative efforts, ARL accepted the adoption of Texas
A&M University’s research in SERVQUAL (SERVice QUALity), a psychometric
survey instrument that addressed user assessments of service delivery (Cook,
Heath, Thompson, & Thompson, 2001). Although it is beyond the scope of this
paper to address SERVQUAL in-depth, one important point should be noted. The
SERVQUAL instrument was designed in the 1980s to assess service quality in the
for-profit business world (Cook, Heath, Thompson, & Thompson, 2001). Thus,
in order to utilize and incorporate this research into the field of library
evaluation, ARL requested the instrument be re-conceptualized, re-designed and
re-tested to better address service delivery to users of libraries. The new
instrument would need to be tailored to library users, rightly presumed to be a
distinctly different population than traditional “business customers.” Also,
the instrument needed to be grounded in college and university library settings
and environments; after all, libraries typically are non-profit entities
focusing more on service provision (as compared to for-profit settings,
possibly focusing on resource provision or production). Nevertheless, SERVQUAL
represented a promising survey model, a foundation from which a more
library-oriented survey could be developed.
LibQUAL+®: An Overview
In general terms, LibQUAL+® is a
22-core-item “total market” survey instrument designed to assess library
service quality of an academic library from the point of view of the library
user (Thompson, Kyrillidou, & Cook, 2008). Factor analytic studies and item
analyses reveal that LibQUAL+®
measures the single overarching dimension of perceived library service
satisfaction and quality (Thompson, Cook, & Heath, 2001). However, this
should not be confused with its three subscales: Affect of Service, Information
Control, and Library as Place. These three “dimensions” measure components of
library service satisfaction:
Affect of Service
This aspect of user
satisfaction examines the
helpfulness and responsiveness of library employees to users. Early LibQUAL+® research indicates
three components to this subscale dimension (Cook, Heath, Thompson, & Thompson, 2001). Assurance is
“the knowledge and courtesy of employees and their ability to convey confidence
and trust” (Cook, Thompson, Heath, & Thompson, 2001, p. 265). Empathy
includes the caring, compassionate, individualized attention of employees
toward their users. Responsiveness is
the ability and willingness to provide efficient service to users.
Information Control
This aspect of user
satisfaction examines the
availability, timeliness and appropriateness of library resources. Components
of this subscale dimension include user perceptions of the comprehensiveness of collections, barrier-free access to information at the time of need, and information formats (e.g., print,
digital, etc.) (Cook, Heath, Thompson, & Thompson, 2001).
Library as Place
The final subscale measurement examines
how well physical library facilities serve users’ needs for space and
technology. This concept assesses the ability to meet needs for community socialization, utilitarian space (e.g., for study,
collaboration, etc.), and space for creative
and scholarly inquiry and rumination (Cook, Heath,
Thompson, & Thompson, 2001).
Although
validity issues will be discussed later, it is important to note two potential
shortcomings of these subscale areas. First, Library as Place is a continuously changing phenomenon, especially
as technology demands force a shift from print-based resources to digital and
web-based resources. Loudly and clearly, users have expressed an overwhelming
need for resources to be available anytime, anywhere, from any location (Thompson et al.,
2008). This
demand has fostered technological changes in the ways in which resources are
accessed, particularly from remote locations using computing and web-based
technologies. Thus, Library as Place
is becoming less “physical.” As more resources become available as online
digital full-text, the “dependency” on a library’s physical space for
information resources lessens. In fact, it may become possible in the
not-too-distant future for users to complete library research activities entirely
in an online, digital environment. If this becomes the case, this aspect of
user satisfaction may shift dramatically, if not be eliminated altogether.
Secondly,
and on a similar note, information formats are shifting toward digital,
electronic versions. However, one particular item in the Information Control subscale inquires about “the printed library
materials I need for my work” (Thompson et al., 2008, p. 14).
Again, this item may become less relevant given the shift toward digital
formats. If the question is asked, it may “plant the seed” in the mind of the
survey respondent that printed materials should
be a part of a library’s collection. If a library shifts to a digital-based
collection (which Butler Library has done—90% of journals are digital), then
the respondent may perceive the library is deficient in this area. Consequently,
this item could threaten the validity of LibQUAL+® data. This is why it is
important for LibQUAL+® researchers to monitor these trends and make necessary
item modifications or deletions accordingly (e.g., delete the word “printed”).
LibQUAL+®: Psychometric Properties and Integrity
In 2007, LibQUAL+® collected data from its one-millionth library user and its
one-thousandth institution; since its inception in the early 2000s, surveys have
been administered to library users in 20 countries in 12 different languages
(Thompson et al., 2008). The sheer volume of data collected is massive and
expansive, lending itself to a richly diverse longitudinal collection of
statistical information. What started as a need for stronger evaluative measures
in North American academic libraries has expanded to a global scale, a truly
remarkable representation of libraries both nationally and internationally.
Validity
Some
LibQUAL+® studies have engaged in rigorous statistical testing to determine
criterion-related validity (Thompson, Cook, & Kyrillidou, 2005;
Heath, Cook, Kyrillidou, & Thompson, 2002). However, because LibQUAL+® was a
unique instrument, convergent validity, or statistical comparison against
instruments measuring the same or similar concepts, could not be tested
(Shadish, Cook, & Campbell, 2002).
Instead, Heath et al. investigated LibQUAL+®’s concurrent validity, or the distinct ability to distinguish
concepts from one another in order to measure each concept separately, as
compared to the ARL Index, a predominantly collection-and-expenditure-based
reporting instrument (Heath et al., 2002; Shadish et al., 2002). As expected,
the “strongest” correlation between LibQUAL+® and the ARL Index involved Information Access (r = .147, r² = 2.2%), and even this correlation was small. The
reason the two instruments did not correlate presumably is due to each
instrument measuring distinctly different concepts—LibQUAL+® measures user
satisfaction, and the ARL Index measures collection holdings and expenditures.
Thus, in a fascinating way, this study strengthened LibQUAL+®’s validity by
demonstrating its lack of correlation with a conceptually different measure.
One other
potential threat to validity is self-selection
bias. LibQUAL+® surveys rely on the voluntary completion of the survey by
respondents. Due to confidentiality, a library would not be able to access
personally identifiable information (such as email addresses) for the purposes
of conducting research using random-sampling methods. Instead, libraries market
the survey to their users utilizing whatever means are available to them. Libraries
rely on these marketing efforts to “attract” users (and non-users) to
participate in the typically Web-driven survey. Self-selection is not a random
sampling method and, thus, carries with it the potential flaws of such a
bias—the most general concern being: “do respondents differ from
non-respondents?” For example, a user who is greatly satisfied with library
services may be more than willing to complete a survey “to help the library.”
Alternatively, a user who is greatly dissatisfied may likewise be motivated to
complete a survey to voice their concerns. However, what about users who are
“in the middle”—maybe only somewhat satisfied? Are they more, less, or just as
likely to participate in this survey? Also, what about the likelihood of
non-users completing the survey? Are library non-users just as likely to
complete the LibQUAL+® survey (or not complete it) as library users? These
questions and concerns inherently could
impact the validity of any research findings, including those of LibQUAL+®.
Reliability
A plethora of research studies has examined the stability of LibQUAL+®’s
reliability, including longitudinal analyses, and most reliability coefficients
reach .85 to .90 or even higher (Thompson, Cook, & Thompson, 2002; Thompson &
Cook, 2002; Cook, Heath, Thompson, & Thompson, 2001; Cook & Thompson, 2001).
Although it is beyond the scope of this paper to cover all reliability studies
in depth, the research of Thompson, Cook, and Thompson (2002) is most indicative
of LibQUAL+®’s
reliability. Their research reported a
Cronbach’s alpha coefficient of .948, a remarkably high internal reliability
indicator.
Item Response Scoring—The “Gap Measurement” Model
Given its roots in attitude measurement, LibQUAL+® utilizes a gap-measurement
model for item response scoring. For each survey item, respondents provide
three different ratings:
·
the minimum level of service they would find acceptable
·
their desired level of service
·
their perception of the library’s actual level of service
Gap measurement relies on the perceived scores of respondents as
indicators of service quality (Thompson et al., 2000). Specifically, the
difference between perceived levels of service and minimum and desired levels
of service is calculated to determine positive and negative scores. If levels
of perceived service are greater than or equal to minimum levels of services,
users typically are “tolerant” and accepting of the library’s service in that
area. If it falls below that minimum, however, then the user believes the
library is not performing up to their minimal expectations in that area, which
typically results in dissatisfaction. Similarly, if perceived service meets or
exceeds their desired level of service, then typically a user is “satisfied.”
Anything below desired levels of service may be an indication of
dissatisfaction. However, LibQUAL+® posits that service quality may still be
acceptable as long as the library meets users’ minimal levels of service, even
if it is not functioning at the desired level. This range between the minimum
and desired levels is known as the zone of tolerance. Ideally, libraries should
attempt to meet users’ desired levels of service, but, even if they meet only
the minimal levels, libraries generally will be met with at least somewhat
satisfied users.
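To make the scoring model concrete, the short Python sketch below computes the two gap scores and the zone-of-tolerance check for a single item. It is a minimal illustration of the logic described above; the function name and the example values are our own, not part of the LibQUAL+® instrument.

```python
def gap_scores(minimum: float, desired: float, perceived: float) -> dict:
    """Gap scores for one survey item, following the gap-measurement model.

    adequacy gap    = perceived - minimum  (>= 0 means the service floor is met)
    superiority gap = perceived - desired  (>= 0 means desires are exceeded)
    """
    adequacy = perceived - minimum
    superiority = perceived - desired
    # A perceived rating at or between the minimum and desired ratings
    # falls within the respondent's "zone of tolerance."
    in_zone = minimum <= perceived <= desired
    return {"adequacy": adequacy, "superiority": superiority, "in_zone": in_zone}

# Example: on a 9-point scale, minimum = 5, desired = 8, perceived = 6.
# The library clears the floor (adequacy = 1) but not the desired level
# (superiority = -2), so the respondent sits inside the zone of tolerance.
print(gap_scores(5, 8, 6))
```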
Gap measurement carries its own set of pros and cons. One positive
outcome of gap measurement is an inherent “lie detection” and random response
scale. “Logically . . . a user’s rating of desired performance should never be
below . . . minimally acceptable performance [ratings]” (Thompson et al., 2000,
p. 168). If so, especially if persistent throughout a respondent’s cumulative
scores, it likely is an indication of random response (and, thus, a threat to
score validity). Consequently, such aberrances are determined through simple
counting, and once aberrances for an individual survey reach a predetermined
threshold, that survey is deemed invalid and subsequently is deleted from data
inclusion.
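As a sketch of how such screening could be automated, the following function counts aberrant items (desired rated below minimum) and flags a survey once a threshold is reached. The threshold value here is an arbitrary placeholder; the actual cutoff used by LibQUAL+® is not specified in the sources above.

```python
def survey_is_valid(responses, max_aberrances=3):
    """Flag a completed survey as invalid if too many items show the
    logically impossible pattern desired < minimum.

    responses: list of (minimum, desired, perceived) rating triples.
    max_aberrances: illustrative cutoff, not the published LibQUAL+ rule.
    """
    aberrances = sum(1 for minimum, desired, _ in responses if desired < minimum)
    return aberrances < max_aberrances

# A survey with two aberrant items out of three still passes this check.
print(survey_is_valid([(5, 3, 4), (6, 2, 5), (4, 8, 6)]))  # True
```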
Another positive outcome relates to the multiple ratings themselves. Gap
measurement carries an “intuitive” appeal, a “complex simplicity,” if you will.
Assuming respondents understand the nature of the rating methods and how they
relate to one another, they can provide very important, powerfully reliable data
(Thompson et al., 2000).
One con of gap measurement involves the user directly. Instead of
responding to one 22-item Likert-type scale, the gap measurement model “forces”
users to complete three Likert-type scales, one for each type of service
rating. This results in, minimally, a user providing 66 responses (22 items ×
3 ratings). This
reality may have been beyond their expectation and, consequently, may result in
mid-completion respondent attrition, which typically is another threat to
validity.
Similarly, another con involves the user’s comprehension of an item’s
concepts and/or constructs. For example, a respondent reaches the item:
“Library space that inspires study and learning.” If they do not understand the
concept “library space” (or if it is not applicable to them, such as when they
access the library only through remote digital means), they may be confused as
to how to answer. Then when they attempt to provide a score for each
rating, the chances of computing imperfect scores are compounded (Thompson et
al., 2000). Interpretation problems magnify inaccuracies when multiple ratings
for one item are involved.
The Information Commons Initiative at Buffalo State College
Historical Background
The year 2003 brought a perfect storm of bad news for Butler Library. As was the case in hundreds of
academic libraries across the country, 2003 was a year of an unprecedented
decrease in gate counts, reference desk statistics, and library material
circulation. At Butler Library it also was the year of an unprecedented
increase in technology-related questions and complaints: usernames did not
work, e-mail accounts needed to be activated, passwords needed to be reset,
printers were jammed, work was not saved, discs were lost, and software could
not be loaded. Students with these types of problems had such a confusing time
resolving them that the process was given a name—“The BuffState Shuffle.” In
2003 users’ frustration levels were high on all fronts, and staff morale seemed
to be at an all-time low. Library administrators were scrambling to justify
filling vacant lines for functions that appeared to be in decline. As Scott
Carlson noted in his 2001 article in the Chronicle
of Higher Education, “Gate counts and circulation
of traditional materials are falling at many college libraries across the
country, as students find new study spaces in dorm rooms or apartments, coffee
shops, or nearby bookstores” (p. A35). New technologies, increased automation,
and of course the Web improved access to information and empowered users. They
also kept users away from the library. The silence was deafening . . . but only
for a while. We needed to find a way to get our users back.
Our first formal step
was to confirm what we suspected: users were staying away because they were
dissatisfied with the library on many fronts. Hence, in 2003, we administered
the LibQUAL+® survey to formally measure library patron satisfaction and,
according to the data received, library user groups perceived Butler Library as
falling short in all three dimensions/service areas. Scores for overall
satisfaction, affect of service, information control, and library as place
ranged from the 40th to the 42nd percentiles. (Baseline
percentiles were determined through comparisons against 2003 LibQUAL+® norms.)
William M. Sullivan,
senior scholar at the Carnegie Foundation for the Advancement of Teaching,
stated, “Thinking of a library as an information center is the first step toward
losing it” (Carlson, 2001, p. A35). What really was the library then if not an
information center? The disappointing results of LibQUAL+® served as a wake-up
call for Butler Library to redefine itself. What resulted was the creation of
the Information Commons and, seven years later, a library that had reclaimed
its place as the academic and cultural heart of the Buffalo State College
campus.
College & Library
Overview
Buffalo State College,
a Carnegie Master’s-L level institution, is the largest four-year urban college
in the State University of New York (SUNY) system. Enrollment for fall 2009 was
11,714 students: 9822 undergraduate and 1892 graduate students. Five schools,
the School of Arts and Humanities, the School of Education, the School of Natural
and Social Sciences, the School of the Professions, and the Graduate School,
offer 162 undergraduate programs with 11 honors options and 60 graduate
programs including 17 post-baccalaureate teacher certification programs.
First-year undeclared students are enrolled in University College, which
provides support programs and specific opportunities to foster student success.
The top five majors at the college are business, elementary education &
reading, technology, criminal justice, and history.
Butler Library is a
medium-sized academic library that houses more than 675,000 printed books and
over 174,000 electronic books, and provides access to full-text articles from
over 57,000 unique print and electronic journals. The library is open 110 hours each
week during regular semesters and within our building we have two
extended-hours facilities, StudyQuad and QuietQuad, which are open and staffed
24/7 during regular semesters. Butler Library is the largest open computer lab
on the campus, housing more than 200 computers, which provide full access to
library resources, the Web, the Microsoft Office Suite, and various specialized
software applications. Access to the wireless network and secure networked
printing is also available in the library. The library has a café and several
lounge areas. Security cameras are installed for safety and the building is
routinely patrolled by University Police Student Assistants.
The Beginning of a
Developmental Plan
Credit must be given to Donald Beagle’s seminal article, “Conceptualizing an
Information Commons” (1999), for giving librarians at Butler Library a vision
for the future. Librarians by nature tend to be excellent organizers,
visionaries, and adept at seeing the bigger picture. The road to revitalization
of the library required a new way of defining the library’s purpose and its
responsibility to provide support to the greater academic community. The
Information Commons concept defined by Donald Beagle provided an excellent
framework. Of particular interest were Beagle’s new descriptions for use of
library space and his redefinitions of library services. Butler Library’s front
line staff could clearly articulate many instances of poor or confusing service
on campus. If we could consolidate the provision of essential services within
the library itself, students would be better served by a “one-stop shop.” The
plan was for that one-stop shop to become an Information Commons.
Implementation:
Building an Information Commons
The look and feel of
the Butler Library of seven years ago is but a distant memory—so much has
changed. Below is a summary of the major highlights of the library’s
reorganization:
The Computing Help Desk moves into the library
The literature on restructuring academic libraries is full of information and
case studies about the marriage of computing services and library services. In
Butler Library this was the most obvious service to include in the Information
Commons. This move allowed for support to be available at the point of
need—most students discover they need password resets or specialized computer
assistance when they are using library computers. Having the Computing Help Desk in
the library also raised user satisfaction levels as this service was physically
more accessible and visible. The help desk staff instantly became supportive
partners, fully participating in technology and customer service planning
within the Information Commons.
Continuous Assessment/Continuous Improvement
(CA/CI)
Two librarians
participated in a year-long CA/CI training workshop during which public service
areas were evaluated and a structure for change was developed. Continuous
improvement continues to be the philosophy within the Information Commons.
Use of an outside facilitator
During times of change
staff can become nervous or concerned about their future role in the
organization. The entire library staff needed to come together around an
understanding and vision for the creation of an Information Commons. An outside
facilitator was hired and helped aggregate input to create a newly envisioned
mission statement for the Information Commons.
In our session, the facilitator did an excellent job of rallying the
staff around a common goal. In retrospect, this activity proved to be extremely
productive and worthwhile.
Library reorganization
Physical units in the library, such as microforms, media services, and
interlibrary loan, were reorganized around functional service areas. Librarians
had responsibility for
functional areas but were encouraged to develop interdisciplinary partnerships
and scholarship. The Associate Director for Information Commons position was
created to oversee all public areas of the library including the Web site and
online and print resources. An Information Commons supervisor was appointed to
oversee all clerical and student staff. All clerical staff were cross-trained
in all functional service areas.
Perhaps the most
visible change, and the most controversial, was the move of the reference desk
from the back reference room to the library lobby. Librarians initially
disagreed with this move, citing the potential for compromised privacy and
concerns that the area was too noisy and too visible. However, within a week,
reference
desk statistics in all categories increased. Reference librarians were busy
again and librarians’ concerns soon subsided.
Managing expectations
With little
additional, direct fiscal expense, the concept of the Information Commons
seemed to be a risk worth taking. This implementation, in a sense, could even
be considered a trial phase, if necessary—enabling the library to try something
new, yet leaving open the option of returning to the previous structure of
services. Even with some resistance and dissension, expectations remained
cautiously optimistic. However, all agreed that increased visibility and
aligning with user expectations was a positive step in the right direction.
Post-Implementation
Evaluation: The Second Data Collection Point (2006)
The year-long process
of creating an Information Commons was well-grounded and justified by the
disappointing results of the 2003 LibQUAL+® data. In 2006, Butler Library
administered a second collection of LibQUAL+® data. Although detailed results
will be presented later, it is worth noting that users’ perception of overall
library service quality changed significantly in a positive direction. Across
the board, LibQUAL+® scores showed improvement in all three service dimensions.
These results helped justify and confirm the direction of library service
reorganization into the Information Commons model.
The Services
Almost immediately
after the Information Commons was opened and marketed, typical library usage
statistics (e.g., reference desk, gate counts, circulation) indicated the
library was becoming busier, and campus offices and departments seemed to
realize that conducting their business in the library could be more practical,
more efficient and effective, and could reach more students. Hence, the
Information Commons became the site for the new services described below.
Application Support and Training Desk
As a direct result of the success of the Information Commons, the library
received funding to create and staff this area to provide software and
application support and training for students, faculty, and campus staff. This
is the only area on campus that provides this much-needed service; its value is
indicated by the more than 16,500 questions answered by this area in 2009.
Equipment Loan
Students need to borrow equipment for use in their coursework. Before the
library took on this
service, equipment loan was located in a secluded office, which provided
limited hours of service. The library identified space adjacent to the
Application Support and Training Desk, purchased new equipment, created a Web
site to reserve and track this equipment, created video tutorials for proper
use of this equipment, and as a result logged over 3,000 loans that year.
The Bengal ID Card
Office
Along with agreeing to
print ID cards and bus passes for all faculty, staff, and students, the library
has become the site for the administration of all ID card functions, including
dining, vending, and printing.
Professional
Development Center
This new space opened
in September 2010 and is the site for faculty and professional staff
development programming and training. Requests for space in the library
continue to be made, again indicative of the excellent reputation of the
Information Commons.
StudyQuad and
QuietQuad
These areas were
constructed in the library specifically because of student requests for late
night collaborative and quiet study spaces. These areas are open 24/7 during
regular semesters and are extremely popular for those students who have jobs or
cannot study in the dorms.
Methodology
This non-experimental, practice-oriented research study utilized the
well-established LibQUAL+® survey instrument as the primary means of
collecting baseline data in 2003 and for two subsequent triennial data
collections (2006 & 2009). After the three-year initiative to develop the
Information Commons, the 2006 data collection, hypothetically, would highlight
positive changes in users’ perceptions of overall service quality as measured
by the LibQUAL+® instrument. Finally, the 2009 data collection
would indicate whether or not users’ satisfaction with the development of the
Information Commons could be sustained or if it simply was the result of a
dramatic short-term effect.
Although LibQUAL+® provides numerous demographic variables
worthy of additional study, additional analyses were narrowed solely to
differences between undergraduate and graduate students. Examination of these
differences happened quite serendipitously, mostly due to one researcher’s
statistical background. Such “data mining” techniques typically are frowned upon
in the scholarly community, as most sound research is perceived as deriving from
theories or models and the development of research questions and hypotheses
before data collection and analysis (i.e., experimental research).
However, for the purposes of practice-oriented library service evaluation,
examination of data from a multitude of facets, dimensions, and variables truly
gives practitioners a greater understanding of their users’ needs. Ultimately,
greater insight into user needs could equate to better provision of library
services. Thus, this data, despite being discovered through happenstance, will
be presented as well.
Participants
Beginning in 2003, Butler Library utilized a cross-sectional sampling plan to
collect LibQUAL+® survey data from its constituents in three-year intervals,
the most recent in 2009. Recruitment of volunteers occurred through three
primary channels: direct outreach (reference desk interactions, classrooms,
student & faculty contacts), marketing (campus newspapers, announcements on
the Web site, bookmarks, departmental and campus emails), and incentives (the
chance to win an iPod). Volunteers were asked to visit the library’s LibQUAL+®
survey page to complete the survey. Only fully completed surveys were used for
data analysis; imputation of missing data was not utilized. With the exception
of undergraduate and graduate student status, most sampling demographic
variables were not as crucial for the purposes of these evaluations. Thus, they
will not be reported in this paper. However, Table 1 illustrates frequencies of
undergraduate and graduate student participation based on year; this
demographic variable was found to be important in some analyses.
Formal analyses of other demographic differences for each triennial data
collection point were not conducted, but demographics in the LibQUAL+® reports
were reviewed and, by rough estimate, showed no substantive differences from the
overall Buffalo State College population.
All
participants were from various user groups of Buffalo State College: students,
faculty and staff. Library staff members were excluded from all analyses due to
the potential for biased results (i.e., vested interests). Faculty were
included in analyses related to changes in perceived library service quality
over the development of the Information Commons, but they were excluded from
other analyses relating to undergraduate and graduate student groups.
Testing
Instrument (LibQUAL+®)
Despite methodological flaws inherent to almost any
testing instrument, including LibQUAL+®, library faculty at Buffalo State College selected LibQUAL+® based upon its well-documented psychometric
properties, which were discussed previously in the literature review, and for
its value in collecting the same data over time, longitudinally. Beagle,
Bailey, and Tierney point out the lack of explicit evaluative instruments
focusing specifically on the effectiveness of Information Commons services
(Beagle, Bailey, & Tierney, 2006). Instead, like LibQUAL+®, most evaluative instruments implicitly, or
indirectly, measure said services. Technically, LibQUAL+® measures perceptions of library
service quality, not Information Commons
service quality,
yet Beagle and other scholars tend to accept the administration of LibQUAL+® for such a purpose.
Table 1
Undergraduate and Graduate LibQUAL+® Participation, 2003 to 2009

                 2003   2006   2009
Undergraduate     266    423    380
Graduate           50     54     76
Total             316    477    456
Score Data
Only the mean adequacy gap scores were selected
from LibQUAL+® data for use in most statistical analyses.
These scale scores reflect the difference between a user’s expected minimum level
of service and their perceived level of service. Larger, positive adequacy gap
scores indicate greater satisfaction, while negative scores indicate
dissatisfaction.
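A minimal sketch of this respondent-level aggregation in Python follows; the triple-per-item representation is a hypothetical data layout for illustration, not the format of actual LibQUAL+® exports.

```python
from statistics import mean

def mean_adequacy_gap(responses):
    """Mean of (perceived - minimum) across all items for one respondent."""
    return mean(perceived - minimum for minimum, _, perceived in responses)

# Three items on a 9-point scale: per-item adequacy gaps of 1, 3, and -1
# average out to an overall mean adequacy gap of 1.0 (mild satisfaction).
respondent = [(5, 8, 6), (4, 7, 7), (6, 9, 5)]
print(mean_adequacy_gap(respondent))  # 1.0
```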
Results
A one-way, between-subjects ANOVA was conducted to
compare the effect of the aforementioned service changes on users’ perceptions
of library service quality between the three triennial data collection points
(2003, 2006, and 2009). The Levene Test of Homogeneity of Variances indicated
equal variance and, thus, supported the usage of ANOVA (F [2, 1598] =
2.62, p > .05). Results of the one-way ANOVA revealed significant
differences between the triennial data collection points (F [2, 1598] =
7.07, p = .001). Post-hoc comparisons using Scheffé’s test indicated
significantly more positive perceptions of library service quality for the 2006
data point (M = .32, 95% CI [.09, .55]) and the 2009 data point (M
= .307, 95% CI [.07, .54]) as compared to the 2003 data point. Comparisons between
the 2006 and 2009 data points were not statistically significant at p
< .05.
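For readers wishing to reproduce this style of analysis, the sketch below runs the same pair of tests with SciPy. The arrays are random placeholders standing in for the per-respondent mean adequacy gap scores, which are not distributed with this paper, and the Scheffé post-hoc step is omitted.

```python
import numpy as np
from scipy import stats

# Placeholder data: per-respondent mean adequacy gap scores for each
# collection year (sizes and values here are purely illustrative).
rng = np.random.default_rng(seed=42)
gaps_2003 = rng.normal(0.0, 1.0, 316)
gaps_2006 = rng.normal(0.3, 1.0, 477)
gaps_2009 = rng.normal(0.3, 1.0, 456)

# Levene's test checks the equal-variance assumption behind ANOVA.
levene_f, levene_p = stats.levene(gaps_2003, gaps_2006, gaps_2009)

# One-way between-subjects ANOVA across the three collection points.
anova_f, anova_p = stats.f_oneway(gaps_2003, gaps_2006, gaps_2009)

print(f"Levene: F = {levene_f:.2f}, p = {levene_p:.3f}")
print(f"ANOVA:  F = {anova_f:.2f}, p = {anova_p:.3f}")
```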
The impact of these service changes on
undergraduate and graduate student groups’ perceptions of service quality was
also explored using one-way ANOVAs. (Post-hoc comparisons were not necessary
because there were only two factorial conditions, undergraduate or graduate
student status, so any statistically significant difference must lie between
those two groups.) In 2003, results of a one-way ANOVA indicated no significant
differences between undergraduate and graduate students in their perceptions of
library service quality (F [1, 314] = .014, p > .05). The Levene Test
of Homogeneity of Variance indicated equal variance and supported the usage of
ANOVA (F [1, 314] = .724, p > .05).
However, in 2006, results of a one-way ANOVA
indicated that undergraduate students perceived higher levels of service
quality after the development of the Information Commons than graduate students did
(F [1, 475] = 5.024, p = .025). Equal variance was indicated
through the Levene Test (F [1, 475] = .553, p > .05). This
difference was maintained in 2009 as well, as shown through one-way ANOVA (F
[1, 454] = 4.013, p = .046) (Levene Test: F [1, 454] = .163, p
> .05).
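The undergraduate-versus-graduate comparisons follow the same pattern with two samples; a brief sketch is given here, again with the input arrays standing in for the real scores.

```python
from scipy import stats

def compare_student_groups(undergrad_gaps, grad_gaps, alpha=0.05):
    """Two-group one-way ANOVA on mean adequacy gap scores.

    With only two groups, no post-hoc test is needed: a significant F
    can only reflect the undergraduate-graduate difference itself.
    """
    levene_f, levene_p = stats.levene(undergrad_gaps, grad_gaps)
    anova_f, anova_p = stats.f_oneway(undergrad_gaps, grad_gaps)
    return {
        "equal_variance": levene_p > alpha,  # Levene check before the ANOVA
        "F": anova_f,
        "p": anova_p,
        "significant": anova_p < alpha,
    }
```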
Discussion
As hypothesized, the development of the Information
Commons between 2003 and 2006 had a significantly positive impact on users’
overall perceptions of service quality, including in each of LibQUAL+®’s three service dimensions. Interestingly,
the Information Commons model would seem to fit more into the “Library as
Place” dimension, yet scores in Affect of Service and Information Control also
improved significantly. Perhaps the physical, virtual, and cultural
“repackaging” of services indirectly affected users’ perceptions of these two areas.
For example, a medical office seen as clean, comfortable, nurturing, etc. may
influence patients’ expectations of the quality and competence of staff
there (i.e., affect of service), whereas a less clean, uncomfortable
environment would result in a different opinion or expectation of staff and
service. A similar effect may have happened with Butler Library patrons. After
revitalizing the environment with the Information Commons model of service
organization and delivery, patrons’ perceptions of library staff and
interactions with them (i.e., Affect of Service) may have improved as an
indirect consequence. A similar phenomenon may have occurred with the dimension
of Information Control (e.g., perceptions of having better ability to access
and retrieve information).
Besides the inferential statistics applied in this
paper, the scores for all three data sets were compared against LibQUAL+® norms (Cook, Heath, & Thompson, 2002;
Thompson, Cook, & Kyrillidou, 2006). This enabled Butler Library to
benchmark results to that of other libraries as a means of comparison. Also, it
enabled the library to self-benchmark longitudinally over three years utilizing
the same testing instrument. Figure 1 illustrates this data.
This data further supports the findings from the
statistical analysis section. Butler Library showed significant, positive gains
in percentile scores between 2003 and 2006.
Differences in results between 2006 and 2009 were not statistically significant.
Although the percentile for overall perceived service quality increased
slightly, statistical analysis indicates that the increase could not be
distinguished from chance. However, one very important point should be noted:
perceived service quality did not decrease. Despite the economic downturn and
subsequent fiscal “crunching” between 2006 and 2009, users’ satisfaction with
service quality did not diminish significantly. The gains resulting from the
development of the Information Commons were maintained, which suggests a
long-term, sustained impact from developing such a model of service delivery.
The Butler Library staff and administration were pleased overall with this
result since it was hoped this model would not be a one-time “shot in the arm”
or a dramatic fad.
Figure 1
Butler Library benchmarking and self-benchmarking from 2003 to 2009. Results from the 2006-2009 comparisons support sustained, positive gains.
Statistical analyses for undergraduate and graduate students revealed no
differences in their perceptions of service quality prior to the development of
the Information Commons; both groups were equally dissatisfied with library
services in 2003.
However, for both the 2006 and 2009 data, analyses revealed that the
development of the Information Commons had more of an impact on undergraduate
students’ perceptions of service quality than graduate students. To help
understand this difference, correlations between all 2009 LibQUAL+® survey items and the overall mean adequacy
gap scores were computed for both the undergraduate and graduate student
groups. For each group, Table 2 illustrates the five LibQUAL+® items that most
highly correlate with the mean adequacy gap score.
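The item-level screening described above can be expressed compactly with pandas; in this sketch, `items` (one column per survey item's adequacy gap) and `overall` (each respondent's mean adequacy gap) are hypothetical stand-ins for the 2009 data.

```python
import pandas as pd

def top_correlated_items(items: pd.DataFrame, overall: pd.Series,
                         n: int = 5) -> pd.Series:
    """Pearson r between each item's adequacy gap and the overall mean
    adequacy gap, sorted to surface the n most strongly related items."""
    correlations = items.corrwith(overall)  # Pearson correlation by default
    return correlations.sort_values(ascending=False).head(n)

# Run separately for each group, e.g.:
# top_correlated_items(undergrad_items, undergrad_overall)
# top_correlated_items(grad_items, grad_overall)
```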
The development of an Information Commons best fits with the Library as
Place service dimension. Using Table 2 as a guide, this dimension appears to be
of more value to undergraduate students than graduate students. For
undergraduates, three of the top five items stem from this service dimension.
One explanation is that undergraduate students see the Information Commons
and/or library as a necessity for their learning, study, and research. With a
multitude of information, technological, cultural, and recreational services
and activities, they may view the Information Commons as a place to “get away”
and relax and/or a place to be nurtured when they need assistance.
Library as Place seems to be less relevant to graduate students, as
evidenced in Table 2; only one item stems from this service dimension. Instead,
more of their top items relate to Information Control and Affect of Service.
Many graduate students have families, careers, and other responsibilities
outside of the college environment and, thus, might be less reliant on the
Information Commons to fill the role of a “second home.” Also, since many of
their responsibilities and activities may center more on advanced research than
undergraduates, the Information Control dimension is more important to graduate
students.
Table 2
Top Five LibQUAL+® Items for Undergraduate and Graduate Students

Service Element                                                   Service Dimension     Pearson r

Undergraduate students
Employees who are consistently courteous.                         Affect of Service     0.756
A comfortable and inviting location.                              Library as Place      0.755
Library space that inspires study and learning.                   Library as Place      0.739
A getaway for study, learning, or research.                       Library as Place      0.724
Employees who have the knowledge to answer user questions.        Affect of Service     0.71

Graduate students
A library website enabling me to locate information on my own.    Information Control   0.827
Readiness to respond to user questions.                           Affect of Service     0.781
A getaway for study, learning, or research.                       Library as Place      0.779
Employees who have the knowledge to answer questions.             Affect of Service     0.776
Employees who are consistently courteous.                         Affect of Service     0.774
The electronic information resources I need.                      Information Control   0.769
These findings sparked much debate among library faculty and staff, and
they likely will guide future planning and services for the Information Commons.
After all, graduate students are a very important user group too; and the
planning of services must take into account their unique needs and interests,
particularly in relation to their research interests and information requests.
These findings would not have been identified without the LibQUAL+® data and
methods akin to data mining. Certainly this information is
of critical importance and will be addressed in future endeavors.
Conclusion
The Information
Commons has become a popular place for new programming, exhibits, workshops,
and cultural events on campus. One exciting new initiative, which has received
extensive local and national recognition, was the creation of the Rooftop
Poetry Club. Other new initiatives are the implementation of a Digital Commons,
the library green initiative, the software virtualization project, Google Docs
workshops, and the library blog.
Beagle describes three manifestations integral to an Information Commons: the Physical
Commons, the Virtual Commons, and the Cultural Commons (Beagle et al., 2006).
In Butler Library, the physical and virtual had been deliberately and
consciously created. However, it was the cultural component that developed
last, almost organically, and likely as a result of our physical and virtual
changes. Beagle lists creative expression, public speech, popular and academic
publishing, and scholarly inquiry as pieces of the cultural commons. Butler
Library’s cultural developments and progressions include examples such as:
· new programming
· new exhibits (e.g., a faculty publications showcase; campus and community art exhibits)
· workshops (e.g., Google Docs; software programs)
· the implementation of a Digital Commons for scholarly works and publications
· the creation of a Rooftop Poetry Club
· the library’s Green Initiative
· a software virtualization project
· the library blog and newsletter
New partners
The Information
Commons now partners with Student Affairs, Graduate Studies, Orientation, Instructional
Resources, College Relations, Events Management, University College, the
Registrar, and Computing and Technology Services to provide ancillary services
to the campus.
Recognition
Since the creation of the
Information Commons, Butler Library librarians have been awarded a Chancellor’s
Award for Excellence in Librarianship, an Excellence in Library Service Award,
and a Library of the Year Award. Our library director was promoted to Associate
Vice President for Library and Instructional Technology. A new reporting structure, split between the
provost and the chief information officer, reflects the collaborative nature
and common goals of computing and technology services and the library.
Benefits for Students
Seven years ago, a
student coming to the library to complete a homework assignment would need to
log into the library’s computers with her assigned username. If this student
forgot her username, she needed to walk across campus to a different building
to get assistance at the computer help desk. At this desk the student would be
asked to show her ID card. If this student did not have an ID card, she needed
to walk back to the library to the ID card office where she might have to wait
until the next business day to receive her ID. The student would then have to
walk back across campus to the help desk for a username and then finally back
to the library to access the library’s computers and use the library’s
resources.
Seven years ago, there
was no place to go for word processing assistance nor was there any equipment
such as voice recorders, projectors, or laptops available for loan. There was
no place for quiet study during late night hours as the library closed at 11:00
pm. Meal plan services were in another building, the writing center was across
campus, and coming to the library for a sandwich and a quick look at e-mail was
unheard of.
Today every student has access to all of these services in Butler Library:
computing help, application support and training, equipment loan, Bengal ID Card
services, meal plan services, a café, and 24/7 StudyQuad and QuietQuad study
spaces.
The process of
revitalizing E. H. Butler Library through the implementation of an Information Commons
has been an immensely rewarding experience for the entire staff. Not only have
the Butler Library staff and administration regained the respect of the campus
community, they also have gained an invaluable appreciation for user-driven
input and feedback and for
ongoing assessment and evaluation, including the well-established,
multidimensional LibQUAL+® instrument. Most importantly, though, the users of
the Information Commons have responded loudly and clearly: they approve of the
changes in service structure, and their satisfaction with the Information
Commons and its service quality has been sustained over time.
References
Beagle, D. (1999). Conceptualizing an information commons. The Journal of Academic Librarianship, 25(2), 82-89. doi:10.1016/S0099-1333(99)80003-2
Beagle, D. R., Bailey, D. R., & Tierney,
B. (2006). The information commons
handbook. New York: Neal-Schuman.
Carlson, S. (2001). As students
work online, reading rooms empty out -- leading some campuses to add Starbucks.
The Chronicle of Higher Education, 48(12), A35-A37.
Cook, C., Heath, F., & Thompson, B.
(2002). Score norms for improving library service quality: a LibQUAL+® study. portal: Libraries and the Academy, 2(1), 13-26. doi:10.1353/pla.2002.0007
Cook, C., Heath, F., Thompson, B., &
Thompson, R. (2001). The search for new measures: the ARL LibQUAL+® Project—a
preliminary report. portal: Libraries and
the Academy, 1(1), 103-112.
Cook, C., & Thompson, B. (2001).
Psychometric properties of scores from the Web-based LibQUAL+® study of
perceptions of library service quality. Library
Trends, 49(4), 585-604.
Cook, C., Thompson, B., Heath, F., &
Thompson, R. (2001). LibQUAL+®: Service quality assessment in research
libraries. IFLA Journal, 27(4), 264-268. Retrieved 20 May 2013 from http://archive.ifla.org/V/iflaj/art2704.pdf
Heath, F., Cook, C., Kyrillidou, M., &
Thompson, B. (2002). ARL Index and other validity correlates of LibQUAL+®
scores. portal: Libraries and the
Academy, 2(1), 27-42. doi:10.1353/pla.2002.0017
Kyrillidou, M., & Cook, C. (2008). The
evolution of measurement and evaluation of libraries: A perspective from the
Association of Research Libraries. Library
Trends, 56(4), 888-909.
Nitecki, D. A. (1996). Changing the concept
and measure of service quality in academic libraries. Journal of Academic Librarianship, 22(3), 181-190. doi:10.1016/S0099-1333(96)90056-7
Shadish, W. R., Cook, T. D., & Campbell,
D. T. (2002). Experimental and
quasi-experimental designs for generalized causal inference. Boston:
Houghton Mifflin.
Thompson, B., & Cook, C. (2002). Stability of the reliability of LibQUAL+®
scores: A reliability generalization meta-analysis study. Educational and
Psychological Measurement, 62(4), 735-743. doi:10.1177/0013164402062004013
Thompson, B., Cook, C., & Heath, F.
(2000). The LibQUAL+® gap measurement model: The bad, the ugly, and the good of
gap measurement. Performance Measurement
and Metrics, 1(3), 165-178. doi:10.1108/EUM0000000007216
Thompson, B., Cook,
C., & Heath, F. (2001). How many dimensions does it take to measure users’
perceptions of libraries?: A LibQUAL+® study. portal: Libraries and the Academy, 1(2),
129-138. doi:10.1353/pla.2001.0030
Thompson, B., Cook, C., & Kyrillidou, M.
(2005). Concurrent validity of LibQUAL+® scores: What do LibQUAL+® scores
measure? Journal of Academic
Librarianship, 31(6), 517-522. doi:10.1016/j.acalib.2005.08.002
Thompson, B., Cook, C., & Kyrillidou, M.
(2006, April). Stability of library
service quality benchmarking norms across time and cohorts: a LibQUAL+® study. Paper presented at the Asia-Pacific
Conference of Library and Information Education and Practice (A-LIEP),
Singapore. Retrieved 20 May 2013 from http://www.coe.tamu.edu/~bthompson/libq2005.htm
Thompson, B., Cook, C., & Thompson, R. L. (2002).
Reliability and structure of LibQUAL+® scores: Measuring perceived library
service quality. portal: Libraries and
the Academy, 2(1), 3-12. doi:10.1353/pla.2002.0022
Thompson, B., Kyrillidou, M., & Cook, C.
(2008). Library users’ service desires: A LibQUAL+® study. Library Quarterly, 78(1), 1-18.