

Self-Archiving with Ease in an Institutional Repository: Microinteractions and the User Experience

	
  
Sonya Betz and Robyn Hall

INFORMATION TECHNOLOGY AND LIBRARIES | SEPTEMBER 2015

	
   	
   	
   	
  


ABSTRACT

Details matter, especially when they can influence whether users engage with a new digital initiative that relies heavily on their support. During the recent development of MacEwan University’s institutional repository, the librarians leading the project wanted to ensure the site would offer users an easy and effective way to deposit their works, in turn helping to ensure the repository’s long-term viability. The following paper discusses their approach to user testing, applying Dan Saffer’s framework of microinteractions to how faculty members experienced the repository’s self-archiving functionality. It outlines the steps taken to test and refine the self-archiving process, shedding light on how others may apply the concept of microinteractions to better understand a website’s utility and the overall user experience that it delivers.

INTRODUCTION

One of the greatest challenges in implementing an institutional repository (IR) at a university is acquiring faculty buy-in. Support from faculty members is essential to ensuring that repositories can make online sharing of scholarly materials possible, along with the long-term digital preservation of these works. Many open access mandates have begun to emerge around the world, developed by universities, governments, and research funding organizations, which serve to increase participation by requiring that faculty contribute their works to a repository.1 However, for many staff managing IRs at academic libraries there are no enforceable mandates in place, and only a fraction of faculty works can be contributed without copyright implications when author agreements transfer copyrights to publishers. Persuading faculty members to take the time to sort through their works and self-archive those that are not bound by rights restrictions is a challenge.

Standard installations of popular IR software, including DSpace, Digital Commons, and ePrints, do little to help facilitate easy and efficient IR deposits by faculty. As Dorothea Salo writes in a widely cited critique of IRs managed by academic libraries, the “‘build it and they will come’ proposition has been decisively wrong.”2 A major issue she points out is that repositories were predicated on the “assumption that faculty would deposit, describe, and manage their own material.”3 Seven

	
  

Sonya Betz (sonya.betz@ualberta.ca) is Digital Initiatives Project Librarian, University of Alberta Libraries, University of Alberta, Edmonton, Alberta. Robyn Hall (HallR27@macewan.ca) is Scholarly Communications Librarian, MacEwan University Library, MacEwan University, Edmonton, Alberta.



	
  

SELF-ARCHIVING WITH EASE IN AN INSTITUTIONAL REPOSITORY | BETZ AND HALL
doi: 10.6017/ital.v34i3.5900


years after the publication of her article, a vast majority of the more than 2,600 repositories currently operating around the world still function in this way and struggle to attract widespread faculty support.4 To deposit works into these systems, faculty are often required to fill out an online form to describe and upload each work individually. This can be a laborious process that includes deciphering lengthy copyright agreements, filling out an array of metadata fields, and ensuring that file formats and sizes are compatible with the constraints of the software.

In August of 2014, MacEwan University Library in Edmonton, Alberta, launched an IR, Research Online at MacEwan (RO@M; http://roam.macewan.ca). Our hope was that RO@M’s simple user interface and straightforward submission process would help to bolster faculty contributions. The site was built using Islandora, an open-source software framework that offered the project developers substantial flexibility in appearance and functionality. In an effort to balance faculty members’ desire for control over their work with ease of use, faculty and staff have the option of submitting to RO@M in one of two ways: they can choose to complete a brief process to create basic metadata and upload their work, or they can simply upload their work and have RO@M staff create metadata and complete the deposit.

Thoroughly testing both of these processes was critical to the success of the IR. We wanted to ensure that there were no obstacles in the design that would dissuade faculty members from contributing their works once they had made the decision to start the contribution process. Because self-archiving is the primary means of adding content to the IR, and because other institutions have found the process problematic, carefully designing each step of how a faculty contributor submits material was our highest priority. To help us focus our testing on some of these important details, and to provide a framework of understanding for refining our design, we turned to Dan Saffer’s 2013 book Microinteractions: Designing with Details. The following case study describes our use of microinteractions as a user-testing approach for libraries and discusses what we learned as a result. We seek to shed light on how other repository managers might envision and structure their own self-archiving processes to ensure buy-in while still relying on faculty members to do some of the necessary legwork. Additionally, we lay out how other digital initiatives may embrace the concept of microinteractions as a means of better understanding the relationship between the utility of a website and the true value of positive user experience.

LITERATURE REVIEW
  

User Experience and Self-Archiving in Institutional Repositories

User experience (UX) in libraries has gained significant traction in recent years and provides a useful framework for exploring how our users are interacting with, and finding meaning in, the library technologies we create and support. Although there is still some disagreement around the definition and scope of what exactly we mean when we talk about UX, there seems to be general consensus that paying attention to UX shifts focus from the usability of a product to more nonutilitarian qualities, such as meaning, affect, and value.5 Hassenzahl simply defines UX as a



	
  


“momentary, primarily evaluative feeling (good-bad) while interacting with a product or service.”6 Hassenzahl, Diefenbach, and Göritz argue that positive emotional experiences with technology occur when the interaction fulfills certain psychological needs, such as competence or popularity.7 The 2010 ISO standard for human-centered design for interactive systems defines UX even more broadly, suggesting that it “includes all the users’ emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use.”8 However, when creating tools for library environments, it can be difficult for practitioners to reconcile ambiguous emotional requirements, such as satisfying emotional and psychological needs or increasing motivation, with pragmatic outcomes, such as developing a piece of functionality or designing a user interface.

It has been well documented that repository managers struggle to motivate academics to self-archive their works.9 However, the literature focusing on how IR websites’ self-archiving functionality helps or hinders faculty support and engagement is sparse. One study of note was conducted by Kim and Kim, who in 2006 led usability testing and focus groups on an IR in South Korea.10 On the basis of their findings, they provide a number of ways to improve usability, including avoiding jargon terms and providing comprehensive instructions at points of need rather than burying them in submenus. Similarly, Veiga e Silva, Goncalves, and Laender reported results of usability testing conducted on the Brazilian Digital Library of Computing, which confirmed their initial goals of building a self-archiving service that was easily learned, comfortable, and efficient.11 The authors of both of these studies suggest that user-friendly design could help to ensure the active support and sustainability of their services, but long-term use remained to be seen at the time of publication. Meanwhile, Bell and Sarr recommend integrating value-added features into IR websites as a way to attract faculty.12 Their successful strategy for reengineering a struggling IR at the University of Rochester included adding tools to allow users to edit metadata and add and remove files, and providing portfolio pages where faculty could list their works in the IR, link to works available elsewhere, detail their research interests, and upload a copy of their CV. Although the question remains as to whether a positive user experience in an IR can be a significant motivating factor for increasing faculty participation, there seems to be enough evidence to support its viability as an approach.

Applying Microinteractions to User Testing

Dan Saffer’s 2013 book, Microinteractions: Designing with Details, follows logically from the UX movement. Although he uses the phrase “user experience” sparingly, Saffer consistently connects interactive technologies with the emotional and psychological mindset of the user. Saffer focuses on “microinteractions,” which he defines as “a contained product moment that revolves around a single use case.”13 Saffer argues that well-designed microinteractions are “the difference between a product you love and a product you tolerate.”14 Saffer’s framework is an effective application of UX theory to a pragmatic task. Not only does he privilege the emotional state of the user as a priority



	
  


for design, he also provides concrete recommendations for designing technology that provokes positive psychological states such as pleasure, engagement, and fun.

Defining what we mean by a “microinteraction” is important when translating Saffer’s theory to a library environment. He describes a microinteraction as “a tiny piece of functionality that only does one thing . . . every time you change a setting, sync your data or devices, set an alarm, pick a password, turn on an appliance, log in, set a status message, or favorite or Like something, you are engaging with a microinteraction.”15 In libraries, many microinteractions are built around common user tasks such as booking a group-use room, placing a hold on an item, registering for an event, rating a book, or conducting a search in a discovery tool. A single piece of interactive library technology may have any number of discrete microinteractions, and these are often part of a larger ecosystem of connected processes. For example, an integrated library system is composed of hundreds of microinteractions designed both for end users and library staff, while a self-checkout machine is primarily designed to facilitate a single microinteraction.

Saffer’s framework provided a valuable new lens on how we could interpret users’ interactions with our IR. While we generally conceptualize an IR as a searchable collection of institutional content, we can also understand it as a collection of microinteractions. For example, at its core RO@M is a set of microinteractions that enable tasks such as searching content, browsing content, viewing and downloading content, logging in, submitting content, and contacting staff. RO@M also includes microinteractions for staff to upload, review, and edit content. As discussed above, one of the primary goals when developing our IR was to allow faculty to deposit scholarly content, such as articles and conference papers, directly to the repository. We wanted this process to be simple and intuitive, and for faculty to have some control over the assignation of keywords and other metadata, but also to have the option to simply submit content with minimal effort. We decided to employ user testing to carefully examine the deposit process as a discrete microinteraction and to apply Saffer’s framework as a means of assessing both functionality and UX. We hoped that focusing on the details of that particular microinteraction would allow us to make careful and thoughtful design choices that would lead to a more consistent and pleasurable UX.

METHOD AND CASE STUDY

We conducted two rounds of user testing for the self-archiving process. Our initial user testing was conducted in January 2014. We asked seven faculty to review and comment on a mockup of the deposit form to test the workflow. This simple exercise allowed us to confirm the steps in the upload process and to identify a few critical issues that we could resolve before building out the IR in Islandora. After completing the development of the IR, and with a working copy of the site installed on our user acceptance testing (UAT) server, we conducted a second round of in-depth usability testing within our new microinteraction framework.

In April 2014 we recruited six faculty members through word of mouth and through a call for participants in the university’s weekly electronic staff newsletter. The volunteers represented major disciplines at MacEwan University, including health sciences, social sciences, humanities,



	
  


and natural sciences. Saffer describes a process for testing microinteractions and suggests that the most relevant way to test them is to include “hundreds (if not thousands) of participants.”16 However, he goes on to describe the most effective methods of testing to be qualitative, including conversation, interviews, and observation. Testing thousands of participants with one-on-one interviews and observation sessions is well beyond the means of most academic libraries, and runs counter to standard usability testing methodology. While testing only six participants may seem like a small number, and one that is apt to render inconclusive results and sparse feedback, it is strongly supported by usability experts such as Jakob Nielsen. During the course of our testing, we quickly reached what Nielsen refers to in his piece “How Many Test Users in a Usability Study?” as “the point of diminishing returns.”17 He suggests that for most qualitative studies aimed at gathering insights to inform site design and overall UX, five users is in fact a suitable number of participants. We support his recommendation on the basis of our own experiences; by the fourth participant, we were receiving very repetitive feedback on what worked well and what needed to be changed.

Testing took place in faculty members’ offices on their own personal computers so that they would have the opportunity to engage with the site as they would under normal workday circumstances. Each user testing session lasted 45 to 60 minutes and was facilitated by three members of the RO@M team: the web and UX librarian guided each faculty member through the testing process, the scholarly communications librarian observed the interaction, and a library technician took detailed notes recording participant comments and actions. Each faculty member was given an article and asked to contribute that article to RO@M using the UAT site. The RO@M team observed the entire process carefully, especially noting any problematic interactions, while encouraging the faculty member to think aloud. Once testing was complete, the scholarly communications librarian analyzed the notes and identified areas of common concern and confusion among participants, as well as several suggestions that the participants made to improve the site’s functionality as they worked through the process. She then made changes to the site based on this feedback. As we discuss in the next section, each task that faculty members performed, from easy to frustrating, represented an interaction with the user interface that affected participants’ experiences of engaging with the contribution process, and informed changes we were able to make before launching the IR service three months later.

Basic Elements of Microinteractions

Saffer’s theory describes four primary components of a microinteraction: the trigger, rules, feedback, and loops and modes. Viewing the IR upload tool as a microinteraction intended to be efficient and user-friendly required us to first identify each of these different components as they applied to the contribution process (see figure 1), and then evaluate the tool as a whole through our user testing.



	
  


	
  

Figure 1. IR Self-Archiving Process with Microinteraction Components.

Trigger

The first component to examine in a microinteraction is the trigger, which is, quite simply, “whatever initiates the microinteraction.”18 On an iPhone, a trigger for an application might be the icon that launches an app; on a dishwasher, the trigger would be the button pressed to start the machine; on a website, a trigger could be a login button or a menu item. Well-designed triggers follow good usability principles: they appear when and where the user needs them, they initiate the same action every time, and they act predictably (for example, buttons are pushable, toggles slide).



	
  


Examining our trigger was a first step in assessing how well our upload microinteraction was designed. Uploading and adding content is a primary function of the IR, and the trigger needed to be highly noticeable. We could assume that users would be goal-based in their approach to the IR; faculty would be visiting the site with the specific purpose of uploading content and would be actively looking for a trigger to begin an interaction that would allow them to do so.

The initial design of RO@M included a top-level menu item as the only trigger for contributing works. In the persistent navigation at the top of the site, users could click on the menu item labeled “Contribute,” where they would then be presented with a login screen to begin the contribution process. This was immediately obvious to half of the participants during user testing. However, the other half immediately clicked on the word “Share,” which appeared beside a small icon on the lower half of the page, along with the words “Discover” and “Preserve,” simply as a way to add some aesthetic appeal to the homepage. Not surprisingly, the users were interpreting the word and icon as a trigger. Because of the user behavior that we observed, we decided to add hyperlinks to all three of these words, with “Share” linking to the contribution login screen (see figure 2), “Discover” leading to a Browse page, and “Preserve” linking to an FAQ for Authors page that included information on digital preservation. This significantly increased the visibility of the microinteraction’s trigger.

	
  

Figure 2. “Share” as Additional Trigger for Contributing Works.



	
  


Rules

The second component of microinteractions described by Saffer is the rules. Rules are the parameters that govern a microinteraction; they provide a framework of understanding to help users succeed at completing the goal of a microinteraction by defining “what can and cannot be done, and in what order.”19 While users don’t need to understand the engineering behind a library self-checkout machine, for example, they do need to understand what they can and cannot do when they’re using the machine. The hardware and software of a self-checkout machine are designed to support the rules by encouraging users to scan their cards to start the machine, to align their books or videos so that they can be scanned and desensitized, and to indicate when they have completed the interaction.

The goal when designing a self-archiving process in RO@M was to ensure that the rules were easy for users to understand, followed a logical structure, and were not overly complex. To this end, we drew on Saffer’s approach to designing rules for microinteractions, along with the philosophy espoused by Steve Krug in his influential web design book, Don’t Make Me Think: A Common Sense Approach to Web Usability.20 Both Krug and Saffer argue for reducing complexity and removing decision-making from the user whenever possible to reduce the potential for user error. The rules in RO@M follow a familiar form-based approach: users log in to the system, agree to a licensing agreement, create some metadata for their item, and upload a file (see figure 1). However, determining the order for each of these elements, and ensuring that users could understand how to fill out the form successfully, required careful thinking that was greatly informed by the user testing we conducted.

For example, we designed RO@M to connect to the same authentication system used for other university applications, ensuring that faculty could log in with the credentials they use daily for institutional email and network access. Forcing faculty to create and remember a unique username and password to submit content would have increased the possibility of login errors and resulted in confusion and frustration. We also used drop-down options where possible throughout the microinteraction instead of requiring faculty to input data such as file types, faculty or department names, or content types into free-text boxes.
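The benefit of constrained inputs can be sketched in a few lines: a drop-down guarantees that a value comes from a controlled list, while free text admits anything. The field name and vocabulary below are illustrative, not RO@M’s actual configuration:

```python
# Illustrative sketch: validating a submission field against a controlled
# vocabulary, as a drop-down menu effectively does, instead of accepting
# arbitrary free text. The vocabulary here is hypothetical.
CONTENT_TYPES = {"Article", "Book Chapter", "Conference Presentation", "Report"}

def validate_content_type(value: str) -> str:
    """Accept only values from the controlled list; raise otherwise."""
    if value not in CONTENT_TYPES:
        raise ValueError(
            f"Unknown content type {value!r}; choose one of: "
            + ", ".join(sorted(CONTENT_TYPES))
        )
    return value

validate_content_type("Article")          # passes
# validate_content_type("journal art.")  # would raise ValueError
```

With a drop-down, this validation never fires for ordinary use; with free text, every variant spelling reaches the repository’s metadata.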
  

During our user testing we found that the fields requiring free-text input for metadata entry most often led to confusion and errors. For instance, it quickly became apparent that name authority would be an issue. When filling out the “Author” field, some people used initials, some used middle names, and some added “Dr” before their name, which could negatively affect the IR’s search results and the ability to track where and when these works may be cited by others. When asked to include a citation for published works, most of our participants noted frustration with this requirement because they could not do so quickly, and they had concerns about creating correct citations. Finally, many participants were also confused by the last, optional field in the form, which allowed them to assign a Creative Commons license to their works.
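The name-authority problem can be illustrated with a toy normalizer. Real authority control (matching against an authority file or an identifier such as ORCID) is considerably more involved; the names and heuristics below are invented for illustration only:

```python
import re

# Toy illustration of why free-text author fields complicate name authority:
# the same person can appear under several different strings. A production IR
# would match against an authority file rather than normalize heuristically.
def normalize_author(raw: str) -> str:
    name = raw.strip()
    # Strip common honorifics such as "Dr" that users prepend to their names.
    name = re.sub(r"^(Dr|Prof|Professor)\.?\s+", "", name, flags=re.IGNORECASE)
    name = re.sub(r"\s+", " ", name)  # collapse internal whitespace
    return name.lower()

variants = ["Dr Jane Smith", "Jane  Smith", "jane smith"]
# All three collapse to one key, so these entries could be grouped for review:
assert len({normalize_author(v) for v in variants}) == 1
# But initials still defeat a simple heuristic like this one:
assert normalize_author("J. Smith") != normalize_author("Jane Smith")
```

The failing case at the end is the point: once initials, middle names, and titles mix freely, no cheap normalization recovers a single authoritative form, which is why administrator review (discussed below) became necessary.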
  



	
  

  

Our user testing indicated that we would need to be mindful of how information like author names and citations was entered by users before making an item available on the site. Under ideal circumstances, we would have modified the form to ensure that any information the system knew about the user was brought forward: what Saffer calls “don’t start from zero.”21 This could include automatically filling in details like a user’s name. However, like many libraries, we chose to adapt existing software rather than develop our microinteraction from the ground up, and implementing such changes would have been too time-consuming or expensive. Instead, we added workflows that allow administrators to edit the metadata before a contribution is published to the web so we can correct any errors. We also changed the “Citation” field to “Publication Information” to signal that users did not need to include a complete citation. Lastly, we made sure that “All Rights Reserved” was the default selection for the optional “Add a Creative Commons License?” field in the form, because this was language with which our users were familiar and comfortable.
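Saffer’s “don’t start from zero” principle amounts to seeding the form with whatever the system already knows, plus safe defaults. A minimal sketch, using a hypothetical user-profile structure and field names (not RO@M’s actual data model):

```python
# Sketch of "don't start from zero": seed form defaults from the profile the
# campus authentication system already holds, plus a safe default such as the
# "All Rights Reserved" license option. All field names are hypothetical.
def initial_form_values(profile: dict) -> dict:
    return {
        "author": profile.get("display_name", ""),
        "department": profile.get("department", ""),
        "email": profile.get("email", ""),
        # Default to the licensing wording users already know and accept:
        "license": "All Rights Reserved",
    }

profile = {"display_name": "Jane Smith", "department": "Biology"}
form = initial_form_values(profile)
assert form["author"] == "Jane Smith"
assert form["license"] == "All Rights Reserved"
```

Fields the profile cannot supply simply start empty, so the sketch degrades gracefully when the authentication system knows little about the user.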
  	
  

Policy constraints are another aspect of the rules that provide structure around microinteractions, and they can also limit the design choices that can be made. Having faculty complete a nonexclusive licensing agreement acknowledging that they had the appropriate copyright permissions to contribute the work was a required component of our rules. Without the agreement, we would risk liability for copyright infringement and could not accept the content into the IR. However, our early designs for the repository included this step at the end of the submission process, after faculty had created metadata about the item. Our initial round of testing revealed that several of our participants were unsure whether they had the appropriate copyright permissions to add content and didn’t want to complete the submission, a frustrating experience for them after spending time filling out author information, keywords, an abstract, and the like. We attempted to resolve this issue by moving the agreement much earlier in the process, requiring users to acknowledge the agreement before creating any metadata. We also used simple, straightforward language for the agreement and added information about how to determine copyrights or contact RO@M staff for assistance. Integrating an API that could automatically search a journal’s archiving policies in SHERPA RoMEO at this stage in the contribution process is something we plan to investigate to reduce complexity further for users.
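A policy lookup of this kind might start from a query-by-ISSN request. The endpoint and parameter names below follow SHERPA’s v2 API as we understand it and should be verified against current SHERPA documentation before use; the API key is a placeholder, the ISSN is a dummy value, and no request is actually sent here:

```python
from urllib.parse import urlencode

# Sketch of building a query for a journal's self-archiving policy by ISSN.
# Endpoint and parameter names are assumptions based on SHERPA's v2 API and
# should be checked against current documentation; "YOUR_API_KEY" is a
# placeholder. This only constructs the URL; it performs no network request.
BASE_URL = "https://v2.sherpa.ac.uk/cgi/retrieve"

def romeo_query_url(issn: str, api_key: str = "YOUR_API_KEY") -> str:
    params = {
        "item-type": "publication",
        "format": "Json",
        "api-key": api_key,
        "filter": f'[["issn","equals","{issn}"]]',
    }
    return BASE_URL + "?" + urlencode(params)

url = romeo_query_url("0000-0000")  # dummy ISSN for illustration
assert url.startswith("https://v2.sherpa.ac.uk/cgi/retrieve?")
```

Wired into the submission flow, a lookup like this could surface a journal’s archiving policy at the moment the user is deciding whether they may deposit.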
  	
  

Feedback	
  

Understanding the concept of feedback is critical to the design of microinteractions. While most libraries are familiar with collecting feedback from users, the feedback Saffer describes flows in the opposite direction: it is the feedback the application or interface provides back to users. This feedback gives users information when and where they need it to help them navigate the microinteraction. As Saffer comments, “the true purpose of feedback is to help users understand how the rules of the microinteraction work.”22
  



	
  

SELF-ARCHIVING WITH EASE IN AN INSTITUTIONAL REPOSITORY | BETZ AND HALL
doi: 10.6017/ital.v34i3.5900
  

Feedback can be provided in a variety of ways. An action as simple as a color change when a user hovers over a link is a form of feedback, providing visual information that indicates a segment of text can be clicked. Confirmation messages are an obvious form of feedback, while a folder with numbers indicating how many items have been added to it is more subtle. While visual feedback is most commonly used, Saffer also describes cases where auditory and haptic (touch) feedback may be useful. Designing feedback, much like designing rules, should aim to reduce complexity and confusion for the user, and feedback should be explicitly connected, both functionally and visually, to what the user needs to know.
  

In a web environment, much of the feedback we provide to users should be based on good usability principles. For example, formatting web links consistently and providing predictable navigation elements are ways that feedback can be built into a design. Providing feedback at the user’s point of need is also critical, especially for error messages and instructional content. This proved to be especially important to our RO@M test subjects. While the IR featured an “About” section, accessible in the persistent navigation at the top of the website, that contained detailed instructions and information about how to submit works and the terms of use governing these submissions, this content was virtually invisible to the users we observed. Instead, they relied heavily on the contextual feedback included throughout the contribution process, when it was visible to them.
  	
  

These observations led us to rethink our approach to providing feedback in several cases. For example, an unfortunate constraint of our software required users to select a faculty or school and a department and then click an “Add” button before they could save and continue. We included instructions above the drop-down menus, stating “Select and click Add,” in an effort to prevent errors. However, our participants failed to notice the instructions and inevitably triggered a brief error message (see figure 3). We later changed the word “Add” in the instructions from black to bright red, hoping to increase its visibility, and we ensured that the error message displayed when users failed to click “Add” clearly explained how to correct the problem and move on. We also observed that the plus signs for adding additional authors and keywords were not visible to users. We added feedback that included both text and icons with more detail (see figure 4); however, this remains a problem for users that we will need to explore further. On completing a contribution, users receive a confirmation page that thanks them for the contribution, provides a timeline for when the item will appear on the site, and notes that they will receive an email when it appears. Response to this page was positive, as it succinctly covered all of the information users felt they needed to know having completed the process.
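The “Add” problem is ultimately a question of point-of-need feedback: when the user skips the step, the error message itself should explain how to recover. A server-side sketch with invented field names and wording (not RO@M’s actual validation code or strings):

```python
# Sketch of point-of-need error feedback for the faculty/department "Add"
# step: rather than a terse failure, the message tells the user exactly how
# to recover. Field names and wording are illustrative, not RO@M's actual
# implementation.
def check_affiliations(added_affiliations: list) -> list:
    """Return a list of actionable error messages; empty list means valid."""
    errors = []
    if not added_affiliations:
        errors.append(
            "No faculty or department has been added. Select a faculty and a "
            "department from the menus, then click the Add button before "
            "choosing Save and Continue."
        )
    return errors

assert check_affiliations([]) != []       # missing step yields guidance
assert check_affiliations(["Biology"]) == []
```

The design choice is that validation returns recovery instructions, not just a failure flag, so the interface can surface them exactly where the user got stuck.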
  	
  



	
  

  

	
  
Figure 3. Feedback for the “Add” Button.
  

	
  
Figure 4. Feedback for Adding Multiple Authors and Keywords.
  



	
  

  

Modes and Loops
  

The final two components of microinteractions defined by Saffer are modes and loops. Saffer describes a mode as a “fork in the rules,” or a point in a microinteraction where the user is exposed to a new process, interface, or state.23 For example, Google Scholar provides users with a setting to show “library access links” for participating institutions with OpenURL-compatible link resolvers.24 Users who have set this option are presented with a search results page that differs from the default mode and includes additional links to their chosen institution’s link resolver. Our microinteraction includes two distinct modes. Once logged in, users can choose to contribute works through the “Do It Yourself” submission that we’ve described here in some detail, or they can choose “Let Us Do It” and complete a simplified version that requires them to acknowledge the licensing agreement, upload their files, and provide any additional data they choose in a free-text box (see figure 5). The majority of our testers specified that they would opt for the “Do It Yourself” option because they wanted control over the metadata describing their work, including the abstract and keywords. However, since launching the repository, several submissions have arrived via the “Let Us Do It” form, which suggests a reasonable amount of interest in this mode.
  

	
  
Figure 5. The “Let Us Do It” Form.
  

Loops, on the other hand, are simply a repeating cycle in the microinteraction. A loop could be a process that runs in the background, checking for network connections, or it could be a more visible process that adapts itself on the basis of the user’s behavior. For example, in the RO@M submission process users can move backward and forward in the contribution forms; both forms have “Previous” and “Save and Continue” buttons on each page to allow users to navigate easily. The final step on the “Do It Yourself” form allows users to review their metadata and the file that they have uploaded. They can then use the “Previous” button to make changes to what they have entered before completing the submission. Ideally, users would be able to edit this content directly from the review page, but software constraints prevented us from including this feature, and the “Previous” button did not pose any major challenges for our testing participants. Another example of a loop in RO@M is a “contribute more works” button embedded in the confirmation screen that takes users back to the beginning of the microinteraction. This feature was suggested by one of our participants, and it extends the life of the microinteraction, potentially leading to additional contributions.
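The Previous / Save and Continue loop can be modeled as a tiny state machine over an ordered list of form steps. The step names below are illustrative, not RO@M’s actual pages:

```python
# Minimal model of the back-and-forward loop in a multi-step contribution
# form: "Previous" and "Save and Continue" move through an ordered list of
# steps, clamped at both ends, and the review step can loop back until the
# user submits. Step names are illustrative.
STEPS = ["license", "metadata", "upload", "review"]

class ContributionForm:
    def __init__(self):
        self.index = 0  # start at the first step

    @property
    def step(self) -> str:
        return STEPS[self.index]

    def save_and_continue(self):
        self.index = min(self.index + 1, len(STEPS) - 1)

    def previous(self):
        self.index = max(self.index - 1, 0)

form = ContributionForm()
form.save_and_continue()   # license -> metadata
form.save_and_continue()   # metadata -> upload
form.previous()            # upload -> metadata: the loop in action
assert form.step == "metadata"
```

Clamping the index at both ends means repeated clicks on “Previous” or “Save and Continue” can never leave the defined steps, which keeps the loop safe however the user wanders.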
  

DISCUSSION AND CONCLUSIONS
  

Focusing on the details of the self-archiving process in our IR provided extremely rich qualitative data for improving the user interface, while analyzing the structure of the microinteraction, following Saffer’s model, was also a valuable exercise in thinking about user needs and software design from a perspective different from that of standard usability studies. The improvements we made, based on both Saffer’s theory and the results we observed through testing, added significant functionality and ease of use to the self-archiving process for faculty. Thinking carefully about elements like the placement of buttons, small changes in wording or flow, and the timing of instructional or error feedback highlighted the large effect small elements can have on usability.
  	
  

However, there are limitations to both the theory and our approach to testing and improving the IR that affect how well we can understand and use the results. Of particular concern is how well this kind of testing can capture the UX of a faculty member beyond the utility or ease of use of the interaction. In an observational study we can rely on comments from participants and key statements that may indicate a participant’s emotional or affective state, but we didn’t include targeted questions to gather this data and focused instead on the details of the microinteraction. We didn’t ask how participants felt while using the IR, whether successfully uploading an item gave them a sense of autonomy or competence, or whether the experience would encourage them to submit content in the future. Nevertheless, improving usability is a solid foundation for providing a positive UX. Hassenzahl describes the difference between “do-goals” (completing a task) and “be-goals” (human psychological needs like being competent or developing relationships).25 While he argues that “be-goals” are the ultimate drivers of UX, he also suggests that creating tools that make the completion of do-goals easy can remove barriers and make the fulfillment of be-goals more likely. Ultimately, however, a range of user-testing strategies can lead to improvements in a user interface, whether that testing relies on carefully detailed examination of a microinteraction, analysis of large data sets from Google Analytics, or interviews with key user groups. Microinteraction theory is a useful approach, and valuable in its conceptualization, but it should be one of many tools libraries adopt to improve their online UX.
  



	
  

  

Similarly, focusing on the UX of IRs must be only one of many strategies institutions employ to improve rates of faculty self-archiving. Recent studies argue that, regardless of platform or process, faculty-initiated submissions have proven to be uncommon.26 Instead, they suggest that sustainability relies on marketing, direct outreach to individual faculty members, and significant staff involvement in identifying content for inclusion, investigating rights, and depositing on authors’ behalf. It would be shortsighted to suggest that relying solely on designing a user-friendly website, or only on developing savvy promotional and outreach efforts, can determine the ongoing success of an IR initiative. Gaining and maintaining support is an ongoing, multifaceted process, and largely depends on the academic culture of an institution as well as available financial and staffing resources. As such, user testing offers qualitative insights into ways that processes and functions might be improved to enhance the viability of IR initiatives in tandem with a variety of marketing and outreach efforts.
  

REFERENCES	
  
	
  
1. “Welcome to ROARMAP,” University of Southampton, 2014, http://roarmap.eprints.org.

2. Dorothea Salo, “Innkeeper at the Roach Motel,” Library Trends 57, no. 2 (2008): 98, http://muse.jhu.edu/journals/library_trends.

3. Ibid., 100.

4. “The Directory of Open Access Repositories—OpenDOAR,” University of Nottingham, UK, 2014, http://www.opendoar.org.

5. Effie L-C Law et al., “Understanding, Scoping and Defining User eXperience: A Survey Approach,” Computer-Human Interaction 2009: User Experience (New York: ACM Press, 2009), 719.

6. Marc Hassenzahl, “User Experience (UX): Towards an Experiential Perspective on Product Quality,” Proceedings of the 20th International Conference of the Association Francophone d’Interaction Homme-Machine (New York: ACM Press, 2008), 11, http://dx.doi.org/10.1145/1512714.1512717.

7. Marc Hassenzahl, Sarah Diefenbach, and Anja Göritz, “Needs, Affect, and Interactive Products: Facets of User Experience,” Interacting with Computers 22, no. 5 (2010): 353–62, http://dx.doi.org/10.1016/j.intcom.2010.04.002.

8. International Standards Organization, Human-Centred Design for Interactive Systems, ISO 9241-210 (Geneva: ISO, 2010), section 2.15.

9. See Philip M. Davis and Matthew J. L. Connolly, “Institutional Repositories: Evaluating the Reasons for Non-use of Cornell University’s Installation of DSpace,” D-Lib Magazine 13, no. 3/4 (2007), http://www.dlib.org; Ellen Dubinsky, “A Current Snapshot of Institutional Repositories: Growth Rate, Disciplinary Content and Faculty Contributions,” Journal of Librarianship & Scholarly Communication 2, no. 3 (2014): 1–22, http://dx.doi.org/10.7710/2162-3309.1167; Anthony W. Ferguson, “Back Talk—Institutional Repositories: Wars and Dream Fields to Which Too Few Are Coming,” Against the Grain 18, no. 2 (2006): 86–85, http://docs.lib.purdue.edu/atg/vol18/iss2/14; Salo, “Innkeeper at the Roach Motel”; Feria Wirba Singeh, A. Abrizah, and Noor Harun Abdul Karim, “What Inhibits Authors to Self-Archive in Open Access Repositories? A Malaysian Case,” Information Development 29, no. 1 (2013): 24–35, http://dx.doi.org/10.1177/0266666912450450.

10. Hyun Hee Kim and Yong Ho Kim, “Usability Study of Digital Institutional Repositories,” Electronic Library 26, no. 6 (2008): 863–81, http://dx.doi.org/10.1108/02640470810921637.

11. Lena Veiga e Silva, Marcos André Gonçalves, and Alberto H. F. Laender, “Evaluating a Digital Library Self-Archiving Service: The BDBComp User Case Study,” Information Processing & Management 43, no. 4 (2007): 1103–20, http://dx.doi.org/10.1016/j.ipm.2006.07.023.

12. Suzanne Bell and Nathan Sarr, “Case Study: Re-Engineering an Institutional Repository to Engage Users,” New Review of Academic Librarianship 16, no. S1 (2010): 77–89, http://dx.doi.org/10.1080/13614533.2010.5095170.

13. Dan Saffer, Microinteractions: Designing with Details (Cambridge, MA: O’Reilly, 2013), 2.

14. Ibid., 3.

15. Ibid., 2.

16. Ibid., 142.

17. Jakob Nielsen, “How Many Test Users in a Usability Study?” Nielsen Norman Group, 2012, http://www.nngroup.com/articles/how-many-test-users.

18. Saffer, Microinteractions, 48.

19. Ibid., 82.

20. Steve Krug, Don’t Make Me Think: A Common Sense Approach to Web Usability (Berkeley, CA: New Riders, 2000).

21. Saffer, Microinteractions, 64.

22. Ibid., 86.

23. Ibid., 111.

24. “Library Support,” Google Scholar, http://scholar.google.com/intl/en-US/scholar/libraries.html.
  	
  



	
  

  

	
  
25. Hassenzahl, “User Experience,” 10–15.

26. See Dubinsky, “A Current Snapshot of Institutional Repositories,” 1–22; Shannon Kipphut-Smith, “Good Enough: Developing a Simple Workflow for Open Access Policy Implementation,” College & Undergraduate Libraries 21, no. 3/4 (2014): 279–94, http://dx.doi.org/10.1080/10691316.2014.932263.