International Journal of Distance Education Technologies, 5(3), 8-23, July-September 2007. ISSN 1539-3100.

Clark, Damien and Baillie-de Byl, Penny. Enhancing the IMS QTI to Better Support Computer Assisted Marking.

This paper appears in the publication International Journal of Distance Education Technologies, Volume 5, Issue 3, edited by Shi-Kuo Chang and Timothy K. Shih. © 2007, IGI Global. 701 E. Chocolate Avenue, Suite 200, Hershey PA 17033-1240, USA. Tel: 717/533-8845; Fax: 717/533-8661.
Enhancing the IMS QTI to Better Support Computer Assisted Marking

Damien Clark, Central Queensland University, Australia
Penny Baillie-de Byl, University of Southern Queensland, Australia
ABSTRACT
Computer aided assessment is a common approach used by educational institutions. The benefits range into the design of teaching, learning, and instructional materials. While some such systems implement fully automated marking for multiple choice questions and fill-in-the-blanks, they are insufficient when human critiquing is required. Current systems developed in isolation have little regard to scalability and interoperability between courses, computer platforms, and learning management systems. The IMS Global Learning Consortium's open specifications for interoperable learning technology lack functionality that would make them useful for computer assisted marking. This article presents an enhanced set of these standards to address the issue.

Keywords: assessment; computer aided assessment; computer assisted marking; distance education; educational technology; Internet-based technology; interoperable learning technology; rubrics; technological innovations; XML
INTRODUCTION

Computer aided assessment (CAA), one of the recent trends in education technology, has become commonplace in educational institutions as part of delivering course materials, particularly for large classes. This has been driven by many factors, such as:

• The need to reduce educational staff workloads (Dalziel, 2000; Jacobsen & Kremer, 2000; Jefferies, Constable et al., 2000; Pain & Heron, 2003; Peat, Franklin et al., 2001);
• A push for more timely feedback to students (Dalziel, 2001; Jefferies, Constable et al., 2000; Merat & Chung, 1997; Sheard & Carbone, 2000; Woit & Mason, 2000);
• Reduction in educational material development and delivery costs (Jefferies, Constable et al., 2000; Muldner & Currie, 1999); and,
• The proliferation of online education (White, 2000).

Copyright © 2007, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

Internet-based technologies in CAA can be broadly categorised into the following system types: online quiz systems, fully automated marking, and semiautomated/computer assisted marking systems. The most common form of CAA, online quizzes, typically consists of multiple choice questions (MCQ) (IMS, 2000), as they can be automatically marked. Yet, there is much conjecture on the effectiveness of MCQs, particularly in the assessment of Bloom's higher learning outcomes (1956) such as analysis, synthesis, and evaluation (Davies, 2001). This limits the scope by which a student's abilities can be assessed. Short response and essay type questions are commonly used to assess the higher order skills of Bloom's taxonomy. Still, these types of assessments are time consuming to mark manually (Davies, 2001; White, 2000).

A more ambitious approach to CAA involves the use of fully-automated marking systems. These can be defined as systems that can mark electronically submitted assignments such as essays (Palmer, Williams et al., 2002) via online assignment submission management (OASM) (Benford, Burke et al., 1994; Darbyshire, 2000; Gayo, Gil et al., 2003; Huizinga, 2001; Jones & Behrens, 2003; Jones & Jamieson, 1997; Mason & Woit, 1999; Roantree & Keyes, 1998; Thomas, 2000; Trivedi, Kar et al., 2003), and automatically generate a final grade for the assignment with little to no interaction with a human marker. The obvious benefit to this approach is the ability to assess some higher order thinking as per Bloom's Taxonomy (1956) in a completely automated manner, thus improving marking turn-around times for large classes. Fully automated systems include MEAGER, which is designed to automatically mark Microsoft Excel spreadsheets (Hill, 2003), automatic essay marking systems, such as those evaluated by Palmer, Williams et al. (2002), and English and Siviter's system (2000) designed to assess student hypertext mark-up language (HTML) Web pages, to name a few. Unfortunately, this approach is not suitable for all assessment types and can often require significant time to develop the model solution. In addition, most of the automated functionality examines students' solutions against model solutions. This may lead to issues relating to marking quality when it is impossible for the assessment creator to identify all possible solutions.
The last approach is the use of semiautomated or computer assisted marking (CAM). This is a compromise between online quiz and fully automated systems. CAM assists with the reduction of poor marker consistency and the quantity and quality of feedback in marking team situations. By using CAM, many of the laborious and repetitive tasks associated with marking can be automated (Baillie-de Byl, 2004), resulting in more timely returns to students. CAM describes systems that have some components of the marking process automated, but still require at least some human interpretation and analysis to assign grades. For example, CAM systems have been developed to support the routine tasks associated with marking programming assignments, like compilation and testing of student submitted programs (Jackson, 2000; Joy & Luck, 1998). Although allocation of a final grade is the sole responsibility of the marker, this determination can be achieved faster, with greater accuracy and consistency, by relying on the results of automated tests (Joy & Luck, 1998). In cases where human interpretation and analysis occurs, this is referred to as manual marking.

One example of CAM is implemented in the Classmate system. It is designed to assist in automating many of the typical laborious tasks associated with marking, such as retrieval and presentation of submissions, feedback and grade storage, application of late penalties, and student returns (Baillie-de Byl, 2004). Other contributions in this area include an MS-Word integrated CAM template (Price & Petre, 1997), development of a CAM prototype based on research into how markers rate programming assignments (Preston & Shackleford, 1999), and Markin, a commercial CAM product by Creative Technology (Creative-Technology, 2005).
One of the major problems with current CAM systems is that much of the work is being undertaken by independent or small groups of researchers who are developing systems to service the particular needs of their courses and institutions, without regard for interoperability. The IMS Global Learning Consortium (IMS, 2005) are addressing this problem through the production of open specifications for interoperable learning technology, and have developed a well adopted specification (IMS, 2004). The IMS question & test interoperability (QTI) specification provides an interoperable standard for describing questions and tests using extensible mark-up language (XML) (IMS, 2000). The QTI specification is broken down into multiple subspecifications. Two of significance to the research herein are the assessment, sections and items (ASI) and the results reporting (RR) bindings. The ASI binding is used to describe the materials presented to the student, such as which questions, called items, form part of an assessment, how they are marked, how scores are aggregated, and so forth. The RR binding is responsible for describing students' results following completion of the marking process.

A major focus of the design for the QTI to date has been to support the interoperability of online quiz systems. These systems are typically fully automated and require little human intervention. Thus, the QTI lacks specific functionality for online systems providing student assessment that relies heavily on human intervention and critiquing. By enhancing the IMS QTI specification to better support CAM, tools can become interoperable, such that assessment materials can be exchanged between CAM systems in the same way as quiz question banks can between online quiz systems. The research presented in this paper introduces the QTICAM specification, addressing the shortcomings of the IMS QTI in support of CAM.

QTI COMPUTER ASSISTED MARKING SPECIFICATION

The QTI Computer Assisted Marking (QTICAM) specification has been designed as an extension to the IMS QTI to address the lack of support for human intervention and critiquing. Its architecture ensures it remains backward compatible with the existing QTI specification. This ensures existing QTI XML documents can be validated against QTICAM. Furthermore, the QTICAM specification allows a mixture of automatic and manually marked items within the same assessment. The QTICAM provides improvements to both the ASI binding and RR binding, as outlined in the following sections. A more complete description of the IMS QTI ASI (IMS, 2002a) and the IMS QTI RR (IMS, 2002b) can be accessed from the IMS Web site (http://www.imsglobal.org).
Mark Increments

The QTI provides scoring variables to track the marks associated with an assessment question. These scoring variables can be aggregated in various ways to derive a total score for the students' work. For example, an item's XML can declare a variable called SCORE to store a result, with the result restricted to a decimal number between and inclusive of the values 0 and 10.
This current format, while dictating some boundaries for a marker, does not restrict the marker from using their own part-marking scheme between the minimum and maximum values. The QTICAM provides the increment attribute to address this issue; for example, the previous result could be declared so that it may only be marked in increments of 2.
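Assuming the QTI 1.x decvar syntax for scoring variables (the attribute names below follow that style and should be read as an illustrative sketch, not the published binding), the two declarations might look like:

```xml
<!-- Plain QTI: a decimal SCORE bounded between 0 and 10 (syntax assumed from QTI 1.x) -->
<decvar varname="SCORE" vartype="Decimal" minvalue="0" maxvalue="10"/>

<!-- QTICAM: the increment attribute further restricts marks to steps of 2 -->
<decvar varname="SCORE" vartype="Decimal" minvalue="0" maxvalue="10" increment="2"/>
```

Under the second declaration a marker could assign 0, 2, 4, 6, 8, or 10, but not an intermediate value such as 7.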
This enhancement provides two advantages. Firstly, it improves consistency in marking; secondly, it makes it quicker for the marker to select a mark.
Manual Marker Rubrics

In addition to expressing the response processing of an item in machine terms, the QTICAM also supports response processing for human interpretation via a marking rubric. The rubric element structure from the QTI ASI has been reused to describe such marking rubrics within the QTICAM ASI. For each rubric element, there is a matching scoring variable. The scoring variable is used to track the performance of the student against its rubric. There are no facilities for recording rubrics within the QTI RR for the marker. Therefore, an equivalent element has been included in the QTICAM RR binding. This is demonstrated in Listing 1, along with its scoring variable SCORE.

The contents of this element structure are derived from the rubric element of the ASI binding. The varname attribute defines the scoring variable SCORE with which the rubric is associated. This is illustrated at the bottom of Listing 1, highlighted in bold. The example is a marking rubric for an IT-related short response question. Students are asked to briefly compare flat and hierarchical directory structures provided by network operating systems.

Recording the Marker

Typically, the QTI is used to describe objective tests that will be marked by computer. With manual marking, it is necessary to record the identity of the marker for quality control. The allocation of student assessments among a group of markers can vary. For example, assessments can be allocated by student or by individual questions. The QTICAM therefore requires the ability to record the marker of each individual item, and the QTICAM RR XML achieves this.

The recording element's content reuses the existing elements of the QTI RR that describe the student. If an item has not yet been marked, there will be no such element. Currently, the QTICAM does not support the recording of multiple markers. Such an instance might occur in a peer revision process where several markers are assigned the task of providing a score for the same item. The authors recognise the need for this feature and expect to implement it in future revisions.
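Purely as an illustrative sketch of recording the marker of an item (the element name here is a hypothetical placeholder, not taken from the published binding):

```xml
<!-- Hypothetical element name: identifies who marked this item, for quality control -->
<marker_name>Jane Marker</marker_name>
```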
Recording Marker Feedback and Marks

The QTI RR binding provides support for an element structure which identifies feedback already displayed to the student as a result of automated marking. This feedback is fixed and prescribed in the ASI XML when the item is conceived. This further illustrates the focus of the QTI on automated marking systems. It is not possible for the item author to foresee all potential errors made by students, and therefore it is necessary to provide support for feedback not prescribed within the item definition (QTI ASI). To support this function, QTICAM includes a container element in which all feedback and marks are stored, as demonstrated in Listing 2.

Within the container are scorefeedback elements. Each can contain a feedback comment (comment), a mark (score_value), or both. Each scorefeedback is associated one-to-one with a scoring variable through the varname attribute. This provides an important linkage: it allows a comment or mark to be associated with a specific rubric.
Listing 1. Manual marker rubric (QTICAM RR). The rubric reads: "A hierarchical directory structure is considered superior for enterprise networking. A flat directory structure is slower and less efficient than a hierarchical directory structure. It is much harder to find things in a flat directory structure than in a hierarchical directory structure. One mark is allocated for each point above that the student has in their answer." The associated scoring variable SCORE ranges from 0 to 3.
Furthermore, each scorefeedback element is also uniquely identified within the scope of the item through the ident attribute. The ability to uniquely identify each comment or mark is described in the following section.
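A rubric of the kind Listing 1 describes might be bound along the following lines, assuming the QTI 1.x rubric/material/mattext structures and a decvar-style scoring variable (the pairing shown here is an illustrative assumption):

```xml
<!-- Illustrative sketch: rubric text for the manual marker -->
<rubric>
  <material>
    <mattext>One mark is allocated for each point the student makes:
    a hierarchical directory structure is considered superior for enterprise
    networking; a flat directory structure is slower and less efficient;
    and it is much harder to find things in a flat directory structure.</mattext>
  </material>
</rubric>
<!-- Matching scoring variable for the rubric, bounded 0..3 -->
<decvar varname="SCORE" vartype="Integer" minvalue="0" maxvalue="3"/>
```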
Linking Feedback and Marks to Students' Solutions

Feedback on student assessment is an important element of the learning process (Dalziel, 2001). A novel approach to improving feedback presentation in CAM systems was investigated by Mason, Woit et al. (1999), where feedback is provided in-context of the students' submission, rather than summarised at the end. This is equivalent to the way a marker would assess a paper-based submission, providing comments and marks in proximity of the passages being addressed. This is achieved in the QTICAM, as illustrated in Listing 3.

The solution provided by the student, already stored within the QTI RR, is copied verbatim into the response element. Next, passages of the student's response are tagged with the tag_response element. Recall from Listing 2 that each scorefeedback element had an ident attribute. Listing 3 shows the linkage of this ident attribute with the tag_response element's ident attribute. This linkage is how a comment or mark is associated in-context with the student's response. Therefore, the comment

One output line transmits the data and the other transmits the complement of the signal

from Listing 2 is associated with the student passage

while RS-422a has two data output lines.

from Listing 3.
This feedback can be presented to the student in various ways. For example, if presented in a Web browser, the material within a tag_response element could be a hyperlink to a popup window which displays the comment or mark. Alternately, a mouseover JavaScript event could present the comment or mark when the student places their mouse over the tagged area. If the feedback is to be printed, the comments or marks could be placed at the start or end of the underlined material. How the tagged material is presented is up to the implementer. The QTICAM ensures comments or marks are provided in-context of the student's solution.
Recording Question Content Presented to the Student

The QTI RR binding does not include support for recording the question material that was presented to the student in completion of an item. To support the manual marking process, it is advantageous for the marker to see exactly what was presented to the student. This provides complete context for the student's solution. Furthermore, it is also necessary where parameterised questions are implemented (Clark, 2004). The QTICAM RR binding therefore provides an element that should contain all the material that was presented to the student when they attempted the question, in HTML format; in the running example, this material is the question "In your own words briefly compare flat and hierarchical directory structures provided by NOS." Use of a CDATA node is recommended to quote all HTML elements within the element, as illustrated. This material can be presented to the marker when marking the students' solutions.
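For the running short-response question, such a record might be sketched as follows; the element name is an illustrative assumption, while the CDATA section shows the recommended quoting of embedded HTML:

```xml
<!-- Element name assumed for illustration; CDATA quotes the embedded HTML -->
<question_material><![CDATA[
<p>In your own words briefly compare flat and hierarchical directory
structures provided by NOS.</p>
]]></question_material>
```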
Recording a Model Solution for an Item

The QTI RR binding provides support for recording the solution to an item through an element designed to identify a selectable choice or a model answer. Unfortunately, this element provides for only a textual value with no formatting. To improve readability for the manual marker, a model solution element is provided in the QTICAM RR binding, as illustrated in Listing 4. This element incorporates the material element used throughout the QTI specification to provide basic formatting of material for presentation. This allows the question author to provide a model solution to an item with basic formatting. The solution shown in Listing 4 is for a C programming item.
QTICAM Implementation

The design of the QTICAM is implementation independent, meaning it does not constrain or dictate how a CAM tool should be implemented. It provides the supporting data model of how material from a testing system should be exchanged for marking. Therefore, an implementation of QTICAM could be written in various languages, such as Java, Perl, or C++. Furthermore, a CAM tool could be implemented as an online or off-line application. For example, an online marking tool would maintain a connection with a network server and exchange QTICAM XML as required during marking. In an off-line environment, the marking tool would download large batches of QTICAM XML assessments.
Listing 2. Recording marker feedback and marks (QTICAM RR). The listing records scorefeedback entries against the scoring variable SCORE, including an entry (varname "SCORE", ident 5) with a mark of 0.5 and the comment "One output line transmits the data and the other transmits the complement of the signal.", and an entry with the comment "Refer to the model solution for other factors you have not considered."
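A sketch of the structure Listing 2 describes, using the scorefeedback, comment, and score_value names and the varname/ident attributes from the article (the surrounding container name and the ident values other than 5 are assumptions):

```xml
<scorefeedbacks> <!-- container element name assumed -->
  <scorefeedback varname="SCORE" ident="4">
    <score_value>0.5</score_value>
  </scorefeedback>
  <scorefeedback varname="SCORE" ident="5">
    <score_value>0.5</score_value>
    <comment>One output line transmits the data and the other transmits
    the complement of the signal.</comment>
  </scorefeedback>
  <scorefeedback varname="SCORE" ident="6">
    <comment>Refer to the model solution for other factors you have not
    considered.</comment>
  </scorefeedback>
</scorefeedbacks>
```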
Listing 3. In-context feedback of a student's response (QTICAM RR). The student's response reads: "RS-232 has a slow data rate of 19.6 kbps. It is also only capable of distances up to 15 metres. RS-422a is capable of much faster transfers. RS-232 is unbalanced, while RS-422a is balanced. RS-232 has one signal wire, while RS-422a has two data output lines." The final passage is wrapped in a tag_response element.
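A sketch of Listing 3's structure, assuming an outer element (name illustrative) holding the verbatim response, with the article's tag_response element linking the tagged passage to the scorefeedback entry bearing the same ident:

```xml
<response_value> <!-- outer element name assumed -->
  RS-232 has a slow data rate of 19.6 kbps. It is also only capable of
  distances up to 15 metres. RS-422a is capable of much faster transfers.
  RS-232 is unbalanced, while RS-422a is balanced. RS-232 has one signal wire,
  <tag_response ident="5">while RS-422a has two data output lines.</tag_response>
</response_value>
```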
Listing 4. Record of the model solution for an item (QTICAM RR binding)

    /* Model solution: replace every occurrence of c1 in aString with c2 */
    void replaceAll(char *aString, char c1, char c2)
    {
        char *ptr;
        ptr = aString;
        while (*ptr != '\0')
        {
            if (*ptr == c1)
                *ptr = c2;
            ptr++;
        }
    }
The material could then be taken off-line during the marking process. Off-line implementation is of particular benefit to those with poor bandwidth, such as analogue modem users, or for those with a roaming laptop. Alternately, a hybrid approach could be implemented where the marking tool supports both online and off-line operation.

The following section introduces the computer assisted marking prototype (CAMP), which demonstrates the use of the QTICAM specification.
CAMP: PROTOTYPE MARKING TOOL

To demonstrate the QTICAM specification at work, the CAMP system has been developed. CAMP is a CAM tool implemented in Java. It is currently a prototype and not yet optimised for complete usability. However, it demonstrates the features of the QTICAM specification. CAMP makes use of the XML document object model (DOM) application programming interface (API) to manipulate the QTICAM RR XML containing the material that is to be marked. It can load multiple RR XML files, which it stores in memory. As an item is marked, the changes are kept in memory. Once the marker clicks the save button, moves onto another item, or otherwise closes the application down, the changes in memory are written to their respective XML file.
The CAMP tool supports the following functions:

• The ability to open multiple QTICAM RR XML documents and display a hierarchical tree structure, which summarises all items broken down into sections and student assessments.
• For each item loaded, it displays: the material presented to the student; the student's submission/s; an optional model solution; all the marking rubrics; the student score for the item; the student score for the assessment; and the student and marker's names.
• The ability for the marker to tag passages of the student's solution and attach feedback with a comment or mark.
• The modification of the comments and marks by clicking on an existing tagged passage.
• The deletion of existing comments and marks by clicking on an existing tagged passage.
• The saving of changes back to the XML file during the marking process.
• The flagging of an item as marked when marking is complete.

Automatic aggregation of marks is supported, totaling scoring variables for rubrics and item, section, and assessment scores. Figure 1 illustrates the process of assigning feedback to a student's solution using CAMP.
This figure highlights the functionality provided by the QTICAM: (a) the assessment question; (b) the marking rubric; (c) the student's assessable answer, where the marker has highlighted the passage "more managable" for feedback before clicking the Add Feedback button to present the feedback dialog (d). The dialog allows the marker to assign only a legitimate mark (0.5) within the bounds for the item, and a comment: "Each part is more manageable than the whole." Placing the mouse over the tagged passage "more managable" in (c) will display (e), a popup window showing the recorded feedback for that passage; and (f) the total score of the item and Fred Smith's assessment score before the 0.5 mark was assigned.
To elaborate further, Figure 1 shows that the marker has highlighted the passage "more managable" from the student's solution. To open the dialog box shown in Figure 1(d), the marker clicks the Add Feedback button. This dialog allows the marker to select the rubric with which their comment or mark is associated. On selecting the required rubric, the marker can only enter a mark that meets the constraints of the rubric. For example, the marker cannot assign a mark that would push the total for the rubric beyond its upper or lower limits defined in the QTICAM. In this case, the rubric score has been specified such that the assigned mark is restricted to values between 0 and 3 with increments of 0.5. This improves consistency in the marking and makes it quicker for the marker to select a mark.

The dialog also contains a list of comments (Feedback History) made previously by this marker for the same item answered by other students. This helps with consistency in feedback and efficiency in allowing the marker to reuse comments. On selecting a comment from the drop down list, it is placed in the Feedback text area at the bottom of the dialog. The marker can choose to customise the comment if they wish. Alternately, the marker can create a new comment by typing directly into this empty text area.
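The rubric constraint described above, values between 0 and 3 in increments of 0.5, might be declared as follows, assuming a QTI 1.x-style decvar declaration carrying the QTICAM increment attribute (an illustrative sketch):

```xml
<decvar varname="SCORE" vartype="Decimal" minvalue="0" maxvalue="3" increment="0.5"/>
```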
On feedback completion, the associated passage from the student's solution (originally highlighted by the marker) appears underlined to indicate it has feedback associated with it, and the QTICAM RR XML for this item has changed, as illustrated in Listing 5. The code presented in bold illustrates the changes made to the XML file once a marker has provided feedback using CAMP.

When item marking is complete, the Completed tick box at the bottom of Figure 1 is selected. By forcing the marker to make the conscious decision to flag an item as complete, this ensures items are not overlooked when, for example, a marker moves from one item to another comparing different students' solutions. When an item is flagged as unmarked, it is represented in QTICAM RR XML as:
Figure 1. CAMP: Selecting passage for feedback. Panel (a) shows the assessment question: "Why was the OSI Reference Model developed? Why are the layers of the OSI so important?" The student's answer begins: "The OSI Model was developed to provide open interconnection between heterogeneous systems. It divides the task of network communication into separate components. This makes the ..."
Listing 5. QTICAM RR XML: Changes to XML after adding feedback. The student's response reads: "The OSI Model was developed to provide open interconnection between heterogeneous systems. It divides the task of network communication into separate components. This makes the communication process more managable. It also allows different functions to be implemented by separate entities and yet still remain interoperable." After marking, the passage "more managable" is tagged, and scorefeedback entries record marks of 0.5 with the comment "Each part is more manageable than the whole." and the comment "Other points to consider include that each layer is independent and that each part is more manageable than the whole. The layers are also distinct functions. Good effort."
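Combining the element names given in the article (tag_response, scorefeedback, comment, score_value) with assumed container elements, the additions in Listing 5 might be sketched as:

```xml
<response_value> <!-- container name assumed -->
  The OSI Model was developed to provide open interconnection between
  heterogeneous systems. It divides the task of network communication into
  separate components. This makes the communication process
  <tag_response ident="1">more managable</tag_response>. It also allows
  different functions to be implemented by separate entities and yet still
  remain interoperable.
</response_value>
<scorefeedbacks> <!-- container name assumed -->
  <scorefeedback varname="SCORE" ident="1">
    <score_value>0.5</score_value>
    <comment>Each part is more manageable than the whole.</comment>
  </scorefeedback>
</scorefeedbacks>
```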
Unmarked
When a tick is placed in the Completed tick box, the XML is changed to:
Marked
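Purely as an illustration (the element name here is an assumption; the Unmarked/Marked values are from the article), the two states might be represented as:

```xml
<!-- Element name assumed for illustration -->
<item_status>Unmarked</item_status>
<item_status>Marked</item_status>
```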
The marker navigation window, as illustrated in Figure 2(a), shows that question CommQ1 of Section Part A has now been marked. This window gives a hierarchical view of all student assessments that have been loaded into memory. Once an entire branch of the hierarchy has been completely marked, its parent branch will also be flagged as marked. This is demonstrated in Figure 2(b).

When section Part B is marked, this will flag the entire assessment Sample Multi-discipline assignment for Fred Smith as marked, in the same manner. This allows the marker to see at a glance what remains to be marked from their allocation of student assessment.
CONCLUSION

QTICAM is an enhancement of the IMS QTI specification and provides support for interoperable computer assisted marking. Its functionality has been illustrated via the demonstration of CAMP. Features of the QTICAM include: support for limiting mark increments, inclusion of human readable marking rubrics, ability to record the marker for each marked item,
Figure 2. CAMP: Navigation window flagging marked items. The tree shows "Fred Smith (q91234567): Sample Multi-discipline assignment" with Part A: Communication Short/Long Answer Questions (CommQ1–CommQ5) and Part B: C Programming (ProgQ6–ProgQ8). In panel (a), CommQ1 has just been marked; in panel (b), the Part A questions are marked and Part A itself is flagged as marked.
recording manual marker feedback including comments and marks, linking marker feedback to passages of the students' solutions, recording the material presented to the student in the results report, and the ability to record formatted model solutions for items.
One of the main benefits for markers in the use of CAM software is increased productivity through automation of repetitive mechanical tasks (Joy & Luck, 1998). Such benefits include automatic collation of marks at the item, section, and assessment levels, and the ability to easily reuse feedback comments by selecting from a list. Another major benefit of CAM software is improved quality. For example, a marker will typically, after completion of marking, add the marks assigned and record the total on a marking sheet. This manual process introduces a high risk of error during the addition and transcription of the marks. Through CAM, marks can be collated and recorded automatically, eliminating this quality issue. Other benefits of CAM include:
•	Improved marking consistency: providing constraints on scoring variables ensures the markers assign marks consistently within the scope of the marking rubric
•	Manual handling of results is eliminated: results from student assessments can be automatically uploaded into an LMS, reducing staff workload and errors
•	Improved marking feedback: permitting the marker to associate feedback with passages of the student's solution allows the student to interpret the feedback in the context of their own work (Mason, Woit et al., 1999)
•	Potential to automate correction of marking errors across large assessment collections
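The automatic collation of marks described above can be sketched as follows. This is an illustrative sketch only: the data layout, function name, and mark values are assumptions for demonstration and are not part of the QTI CAM binding.

```python
# Sketch: automatic collation of marks at the item, section, and
# assessment levels, as a CAM tool might perform it behind the scenes.
# The nested-dictionary layout is illustrative only.

def collate(assessment):
    """Sum item marks into per-section totals and an assessment total."""
    section_totals = {
        section: sum(items.values()) for section, items in assessment.items()
    }
    return section_totals, sum(section_totals.values())

# Example marks, mirroring the two-part structure of Figure 2.
marks = {
    "Part A: Communication": {"CommQ1": 4, "CommQ2": 3},
    "Part B: C Programming": {"ProgQ6": 8, "ProgQ7": 6},
}
sections, total = collate(marks)
# sections == {"Part A: Communication": 7, "Part B: C Programming": 14}
# total == 21
```

Because the totals are derived rather than transcribed by hand, the addition and transcription errors described above cannot occur.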
The QTI CAM specification currently adds essential support to the QTI for computer assisted marking. Future development will see the inclusion of advanced features that will:

•	Automate late submission penalty application
•	Share feedback between multiple markers
•	Classify markers' comments for later analysis
•	Automate marking moderation

With the adoption of an interoperable CAM specification such as QTI CAM, interoperable CAM applications can be a reality.
REFERENCES
Baillie-de Byl, P. (2004). An online assistant for remote, distributed critiquing of electronically submitted assessment. Educational Technology and Society, 7(1), 29-41.
Benford, S.D., Burke, E.K., Foxley, E., & Higgins, C.A. (1994). A courseware system for the assessment and administration of computer programming courses in higher education. Complex Learning in Computer Environments (CLCE'94).
Bloom, B.S. (1956). Taxonomy of educational objectives handbook 1: Cognitive domain. New York: Longman, Green, & Co.
Clark, D. (2004). Enhancing the IMS QTI specification by adding support for dynamically generated parameterised quizzes (p. 230). Toowoomba, University of Southern Queensland: Department of Mathematics and Computing.
Creative-Technology (2005, January 16). Program features. Retrieved March 8, 2007, from http://www.cict.co.uk/software/markin/index.htm
Dalziel, J. (2000). Integrating CAA with textbooks and question banks: Options for enhancing learning. Computer Aided Assessment (CAA2000), Leicestershire, UK.
Dalziel, J. (2001). Enhancing Web-based learning with CAA: Pedagogical and technical considerations. Computer Aided Assessment (CAA2001), Leicestershire, UK.
Darbyshire, P. (2000). Distributed web-based assignment management. In A. Aggarwal (Ed.), Web based learning and teaching technologies: Opportunities and challenges (pp. 198-215). Idea Group.
Davies, P. (2001). Computer aided assessment must be more than multiple-choice tests for it to be academically credible? Computer Aided Assessment (CAA2001), Leicestershire, UK.
English, J., & Siviter, P. (2000). Experience with an automatically assessed course. In Proceedings of the Conference on Integrating Technology into Computer Science Education (ITiCSE) (pp. 168-171). Helsinki, Finland.
Gayo, J.E.L., Morales, J.M.G., Fernandez, A.M.A., & Sagastegui, H.C. (2003). A generic e-learning multiparadigm programming language system: IDEFIX project. Technical Symposium on Computer Science Education (pp. 391-395). Reno, NV: ACM Press.
Hill, T.G. (2003). MEAGER: Microsoft Excel automated grader. The Journal of Computing in Small Colleges, 18(6), 151-164.
Huizinga, D. (2001). Identifying topics for instructional improvement through on-line tracking of programming assessment. In Proceedings of the Conference on Integrating Technology into Computer Science Education (ITiCSE) (pp. 129-132). Canterbury, UK.
IMS. (2000). IMS question & test interoperability specification: A review. IMS Global Learning Consortium. Retrieved March 8, 2007, from http://www.imsproject.org/question/whitepaper.pdf
IMS. (2002a). IMS question & test interoperability: ASI XML binding specification. IMS Global Learning Consortium. Retrieved March 8, 2007, from http://www.imsproject.org
IMS. (2002b). IMS question & test interoperability: Results reporting XML binding specification. IMS Global Learning Consortium. Retrieved March 8, 2007, from http://www.imsproject.org
IMS. (2004). Directory of products and organisations supporting IMS specifications. IMS Global. Retrieved March 8, 2007, from http://www.imsglobal.org/direct/directory.cfm
IMS. (2005). IMS Global Learning Consortium. Retrieved March 8, 2007, from http://www.imsglobal.org
Jackson, D. (2000). A semi-automated approach to online assessment. In Proceedings of the Conference on Integrating Technology into Computer Science Education (ITiCSE) (pp. 164-167). Helsinki, Finland.
Jacobsen, M., & Kremer, R. (2000). Online testing and grading using WebCT in computer science. In Proceedings of the World Conference on the WWW and Internet (pp. 263-268).
Jefferies, P., Constable, I., et al. (2000). Computer aided assessment using WebCT. Computer Aided Assessment (CAA2000), Leicestershire, UK.
Jones, D., & Behrens, S. (2003). Online assignment management: An evolutionary tale. In Proceedings of the Hawaii International Conference on System Sciences, Waikoloa Village.
Jones, D., & Jamieson, B. (1997). Three generations of online assignment management. In Proceedings of the Australian Society for Computers in Learning in Tertiary Education Conference (pp. 317-323). Perth, Australia.
Joy, M., & Luck, M. (1998). Effective electronic marking for on-line assessment. In Proceedings of the Conference on Integrating Technology into Computer Science Education (ITiCSE) (pp. 134-138). Dublin, Ireland.
Mason, D.V., & Woit, D.M. (1999). Providing mark-up and feedback to students with online marking. SIGCSE Technical Symposium on Computer Science Education (pp. 3-6). New Orleans, LA.
Mason, D.V., Woit, D., Abdullah, A., Barakat, H., Pires, C., & D'Souza, M. (1999). Web-based evaluation for the convenience of students, markers, and faculty. In Proceedings of the North American Web Conference, Fredericton, Canada.
Merat, F.L., & Chung, D. (1997). World Wide Web approach to teaching microprocessors. In Proceedings of the Frontiers in Education Conference (pp. 838-841). Stipes Publishing.
Muldner, M., & Currie, D. (1999). Techniques to implement high speed, scalable, dynamic on-line systems. In Proceedings of the World Conference on the WWW and Internet (pp. 782-787).
Pain, D., & Heron, J.L. (2003). WebCT and online assessment: The best thing since SOAP? Journal of International Forum of Educational Technology & Society, 6(2), 62-71.
Palmer, J., Williams, R., & Dreher, H. (2002). Automated essay grading system applied to a first year university subject. Informing Science, 1222-1229.
" peat,M., Franklin, S., & Lewis, A. (2001). A ~evie~
ofthe useof online self-assessment modflles to ' '
enharice stude11t, learning outcomes: Are they
worth the effOliofproduction. InProceedillgs of
the ASCILlTE2001 (pp. 137-140). Melbourne,
Australia.
Preston, .r., & Shackleford, R. (1999). Improving
online assessment: An investigation of existing
marking methodologies. In Proceedings of the
Conference on Integrating Technology into
Computer Science Education (lTiCSE) (pp.
29-32). Crocow,Poland. ' ,
Price, B., & Petre, M. (1997). Teaching programming through paperless assignments: An empirical evaluation of instructor feedback. In Proceedings of the Conference on Integrating Technology into Computer Science Education (ITiCSE) (pp. 94-99). Uppsala, Sweden.
Roantree, M., & Keyes, T.E. (1998). Automated collection of coursework using the Web. In Proceedings of the Conference on Integrating Technology into Computer Science Education (ITiCSE) (pp. 206-208). Dublin, Ireland.
Sheard, J., & Carbone, A. (2000). Providing support for self-managed learning? In Proceedings of the World Conference on the WWW and Internet 2000 (pp. 482-488).
Thomas, P. (2000). Reducing the distance in distance education. Computer Aided Assessment (CAA2000), Leicestershire, UK.
Trivedi, A., Kar, D.C., & Patterson-McNeill, H. (2003). Automatic assignment management and peer evaluation. The Journal of Computing in Small Colleges, 18(4), 30-37.
White, I. (2000). Online testing: The dog sat on my keyboard. In Proceedings of the International Conference on Technology in Collegiate Mathematics, Atlanta, GA.
Woit, D., & Mason, D. (2000). Enhancing student learning through online quizzes. SIGCSE Technical Symposium on Computer Science Education (pp. 367-371). Austin, TX.
ENDNOTES
Readers not familiar with XML are directed to read the following online resources: http://www.xml.com, http://xml.coverpages.org/xml.html, http://www.w3.org/XML/, http://www.xml.org.
The <decvar> element is used within the QTI ASI specification for declaring a scoring variable. It allows the question author to define attributes for a scoring variable such as minimum, maximum, and default values.
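For example, a scoring variable declaration in the QTI ASI XML binding might look like the following sketch; the attribute values shown are illustrative:

```xml
<resprocessing>
  <outcomes>
    <!-- Illustrative: an integer scoring variable ranging 0-10, default 0 -->
    <decvar varname="SCORE" vartype="Integer"
            minvalue="0" maxvalue="10" defaultval="0"/>
  </outcomes>
</resprocessing>
```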
The <interpretvar> element describes how to interpret the meaning of scores assigned to scoring variables.
The <score> element is used within the QTI RR binding to record the score achieved by a student as defined by the <decvar> element of the QTI ASI.
A CDATA node is a quoting mechanism within XML syntax to allow the special meaning of other XML characters to be escaped as part of an XML document.
The <material> element provides a container object for any content to be displayed. It allows various data types such as plain or emphasised text, images, audio, videos, or applets.
The XML DOM API is a standard, platform-independent programming interface for manipulating the content of XML documents in computer memory.
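As a brief illustration of this note, the sketch below uses Python's standard xml.dom.minidom module (one implementation of the XML DOM API) to parse a QTI ASI-style scoring variable declaration into memory and read its attributes. The fragment and its attribute values are illustrative, not taken from the QTI CAM binding itself.

```python
from xml.dom.minidom import parseString

# An illustrative QTI ASI-style fragment declaring a scoring variable.
fragment = """<outcomes>
  <decvar varname="SCORE" vartype="Integer"
          minvalue="0" maxvalue="10" defaultval="0"/>
</outcomes>"""

# Parse the fragment into an in-memory DOM tree, then locate and
# inspect the <decvar> declaration through the DOM API.
doc = parseString(fragment)
decvar = doc.getElementsByTagName("decvar")[0]
print(decvar.getAttribute("varname"))   # SCORE
print(decvar.getAttribute("maxvalue"))  # 10
```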
Damien Clark commenced his academic research career in 2003 developing a parameterisation enhancement to the IMS QTI, resulting in an honours equivalent thesis. Clark's paper published in this edition of JDET is his first for an international journal. He completed a bachelor's degree in computer science from Central Queensland University, Australia in 1995. He also holds a master's degree in computer science from the University of Southern Queensland, Australia. During his career, he worked as a UNIX systems administrator before taking a position as lecturer at Central Queensland University in 2002. He teaches system administration, computer networking, and information security.
Penny Baillie-de Byl has been researching in the area of online assessment management systems, artificial intelligence and computer games programming since 1995. She has written a number of international conference papers, journal papers, book chapters and two books in these areas. During her career, Dr. Baillie-de Byl has consulted as a computer programmer, computer games designer, website engineer, and artificial intelligence designer. Dr. Baillie-de Byl currently works as a senior lecturer in computer graphics and computer games programming and manages a games research and development laboratory at the University of Southern Queensland, Australia.