RELIABILITY OF A DIGITAL METHOD TO DETERMINE FRONTAL AREA OF A CYCLIST

Randall L. Jensen, Saravanan Balasubramani, Keith C. Burley, Daniel R. Kaukola, James A. LaChapelle and Ross Anderson*

Department of Health, Physical Education, and Recreation, Northern Michigan University, Marquette, MI, USA
*Biomechanics Research Unit, University of Limerick, Limerick, Ireland

Eight cyclists were photographed with a digital camera for three trials while positioned on their own bicycle and wearing their own helmet. The positions differed from each other and were described as: hands on the brake hoods; hands below the curve of the brake hoods on the handlebars; and using aerobars. Twenty-four trials were digitized by two different individuals three times to estimate the inter- and intra-rater reliability of the method. The Intraclass Correlation Coefficient (p < 0.05) for intra-rater (test-retest) reliability was ICC = .993; for inter-rater consistency the ICC = .976. There were significant differences (p < 0.05) between digitizers and between trials, apparently due to a learning effect that disappeared by the third trial. Because of these small but significant differences between digitizers and trials, caution is recommended when considering use of this method.

KEYWORDS: surface area, cycling, intra-rater reliability, inter-rater reliability

INTRODUCTION: The effect of drag on the movement of a cyclist can be quite large and is related to the surface or frontal projection area (FPA) (Faria et al. 2005). However, determining this area is often time-consuming and may require specialized and/or expensive equipment (Cappaert, 1998; Edwards & Byrnes, 2007). The other extreme is a gross oversimplification, for example extrapolation based on height and weight (Radjvojevic et al. 1983). Both of these techniques have limited use in determining FPA for a cyclist: the former because of the large amount of time required for repeated trials and calculations and/or the high cost of equipment, the latter because its lack of precision limits the ability to assess changes following small alterations of position. Martin and colleagues (2006) have provided a field test to determine aerodynamic drag; however, this method requires six trials at various speeds, an accurate power meter of some sort (they recommend an SRM), and a straight, flat section of road. Recently, Swanton and coworkers (2006) presented a simplified method for determining the FPA of a cyclist using digital photography and an image-processing package (Adobe Photoshop 7.0 - Adobe, San Jose, USA). This methodology greatly simplifies the process of determining the surface area of an individual or object with minimal investment in equipment. However, no information has been published on the reliability of this method, either for test-retest or between digitizers. When implementing a new test, knowledge of its reliability is important to ensure that the data are consistent and will allow for replication in future studies or within a repeated measures design (Morrow and Jackson, 1993). The purpose of the current study was to estimate the reliability of the method both for repeated trials by the same digitizer and between different digitizers. To accomplish this, two digitizers digitized the same photos three times and the inter- and intra-rater reliability was assessed.

METHODS: Approval for the use of human subjects was obtained from the institution prior to commencing the study.
Nine recreational or sub-elite cyclists volunteered to partake in all aspects of the study and gave written consent. Subjects wore their own helmet and were positioned on their own bicycle in one of three randomly ordered positions: 1) with the hands on top of the brake hoods (BH); 2) with the hands below the dropped, or curved, portion of the handlebars (DH); and 3) using clip-on triathlon bar extensions with the elbows resting on pads and the hands extended to the end of the bar (AB). Previous research (Swanton et al. 2006) has shown these positions to vary from one another in FPA and thus each position was treated as a different sample.

To calculate FPA the volunteers were asked to position themselves in each of the three positions with the pedal cranks perpendicular to the floor. A digital image (5 megapixel) was captured of the participant and a 51 × 76.3 cm calibration object (CO) from the frontal plane. The images were analysed in an image-processing package (Digitizer 1 used Adobe Photoshop 7.0 on Windows, while Digitizer 2 used Photoshop CS2 on a Macintosh; Adobe, San Jose, USA). First, the rectangular marquee tool (crop tool on the Macintosh) was used to extract the portion of the image containing the whole body of the cyclist and bicycle (Figure 1A). The portion of the image containing the CO was also extracted. The magnetic lasso tool was then used to extract the cyclist (or the CO) from the background of the image; this selection was then pasted as a new image with a white background (see Figure 1B). Following this, the image was converted to an image of two colours by reducing the contrast to minus 100% (Image - Adjustments - Brightness/Contrast - Contrast). The resulting image (see Figure 1C) contained a representation of the FPA for that position. To calculate the actual FPA the image was represented as a histogram (containing the number of pixels of each colour). The CO image was processed in the same way. The area of the FPA could then be calculated in pixels and converted to m² using the known area of the CO (Swanton et al. 2006); a brief computational sketch of this conversion is given below.

To estimate the intra-rater reliability of determining frontal projection area, the processing of each digital photo was performed three times. To estimate the inter-rater reliability, two independent digitizers processed each digital photo. Frontal projection area data were then compared between and within digitizers.

Figure 1: Image A (left), Image B (middle), and Image C (right)

Statistical analyses were performed using the Statistical Package for the Social Sciences v.14.0 (SPSS, 2006). The intra-rater reliability, or test-retest consistency, of the method for determining frontal projection area for each of the three positions was estimated by determining the intraclass correlation coefficient using a two-way mixed effects model, with a fixed effect for digitizer and a random effect for trials (Morrow and Jackson, 1993). For the inter-rater reliability an intraclass correlation coefficient using a two-way mixed effects model, with a fixed effect for trials and a random effect for digitizer, was implemented. Repeated measures analysis of variance was used to determine whether there were significant differences between raters (inter-rater) and within raters (intra-rater), using p < 0.05 for the test of significance.
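The pixel-to-area conversion referred to above is a simple scaling by the calibration object. The following is a minimal sketch in Python (Pillow and NumPy), with hypothetical file names, assuming the cyclist and CO silhouettes are two-colour images cropped from the same photograph so that one pixel covers the same physical area in both; it illustrates the calculation only and is not the Photoshop histogram procedure itself.

import numpy as np
from PIL import Image  # Pillow

CO_AREA_M2 = 0.51 * 0.763  # 51 x 76.3 cm calibration object

def object_pixels(path, threshold=128):
    # Count the dark ("object") pixels in a two-colour silhouette image.
    grey = np.asarray(Image.open(path).convert("L"))
    return int(np.count_nonzero(grey < threshold))

# Hypothetical file names for one trial of one cyclist.
cyclist_px = object_pixels("cyclist_silhouette.png")
co_px = object_pixels("calibration_object.png")

# The CO supplies the scale factor (m^2 per pixel) for the cyclist's pixel count.
fpa_m2 = cyclist_px * (CO_AREA_M2 / co_px)
print(f"Frontal projection area: {fpa_m2:.3f} m^2")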
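The two-way consistency ICCs described above can also be computed directly from the trial-by-trial FPA values. The sketch below, in Python with NumPy, uses the standard consistency formulas (often labelled ICC(3,1) for a single measure and ICC(3,k) for average measures) and made-up example values; it is offered only as an illustration under those assumptions and is not the SPSS procedure used in the study.

import numpy as np

def consistency_icc(Y):
    # Y: n_targets x k_measurements array of FPA values (m^2),
    # e.g. one row per photograph, one column per repeated digitization.
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)    # between photographs
    ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)    # between trials/raters
    ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    single = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)  # ICC(3,1)
    average = (ms_rows - ms_err) / ms_rows                      # ICC(3,k)
    return single, average

# Made-up values: three repeated digitizations of four photographs.
fpa = np.array([[0.391, 0.388, 0.387],
                [0.352, 0.350, 0.349],
                [0.420, 0.417, 0.418],
                [0.365, 0.362, 0.361]])
print(consistency_icc(fpa))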
RESULTS: Intraclass correlation testing of the intra-rater, or test-retest, reliability between trials resulted in an ICC = .996 (95% confidence interval .993 to .997) for average measures and ICC = .987 for a single measure. However, despite the high ICC there was a significant difference (p < 0.05) between the trials (mean ± SD): Trial 1 = 0.389 ± 0.038 m²; Trial 2 = 0.386 ± 0.037 m²; Trial 3 = 0.385 ± 0.038 m². Bonferroni's post hoc analysis indicated that Trial 1 was greater than the other two trials, which did not differ. The estimate of inter-rater reliability produced an ICC = .979 (95% confidence interval .966 to .987) for average measures and ICC = .959 for a single measure. Similar to the intra-rater reliability, there was a significant difference (p < 0.05) between the digitizers (mean ± SD): Digitizer 1 = 0.380 ± 0.036 m²; Digitizer 2 = 0.394 ± 0.038 m².

DISCUSSION: The major findings of the current study were that high Intraclass Correlation Coefficients (> .95) were found in both intra-rater (test-retest) and inter-rater (between digitizers) estimations of reliability for the digital determination of frontal projection area proposed by Swanton et al. (2006). However, there were also significant differences between trials and between digitizers (p < 0.05). The presence of differences between digitizers is similar to previous findings for other digitization techniques (Winter, 1990). As noted by Winter, using different digitizers to analyze the same subject would not be recommended because of the differences that would be expected for repeated measurements. Although there was a statistically significant difference between trials, the magnitude of this difference was small, equalling 0.004 m² or 1%. It is unclear to the authors what impact a difference of this size might have on comparisons of changes in body position; however, this magnitude of error is smaller than that of many other types of measurement in sport science (Hopkins et al., 1999). Furthermore, Trial 1 differed from the later trials, which did not differ from each other, indicating that there may be a learning effect in performing this technique. Further analyses found that the difference between Trial 1 and the later trials occurred for Digitizer 1, who had no prior experience with the procedure, while Digitizer 2, who had over 10 years of experience in manipulating digital photo data, displayed no differences across the trials.

CONCLUSION: The current study found that the estimated reliability of the method of Swanton et al. (2006) for determining the frontal projection area of a cyclist resulted in high Intraclass Correlations but also in significant differences between trials and digitizers. Although there were differences between the trials, the magnitude was 1%, which was considered small. The method is inexpensive and easy to use, but because of the small but significant differences, practitioners should consider whether this method may be useful. A consideration when using this technique is the experience of the digitizer, as a learning effect appears to exist. Also, data from one cyclist should be analyzed by a single digitizer, as the significant differences between digitizers indicate that similar findings would be unlikely.

REFERENCES:
Cappaert, J.M. (1998) Frontal surface area measurements in national calibre swimmers. Sports Engineering, 1(1), 51.
Edwards, A.G. & Byrnes, W.C. (2007) Aerodynamic characteristics as determinants of the drafting effect in cyclists. Medicine and Science in Sports and Exercise, 39(1), 170-176.
Faria, E.W., Parker, D.L. & Faria, I.E. (2005) The science of cycling: factors affecting performance - part 2. Sports Medicine, 35(4), 313-337.
Hopkins, W.G., Hawley, J.A. & Burke, L.M. (1999) Design and analysis of research on sport performance enhancement. Medicine and Science in Sports and Exercise, 31(3), 472-485.
Martin, J.C., Gardner, A.S., Barras, M. & Martin, D.T. (2006) Aerodynamic drag area of cyclists determined with field-based measures. Sportscience, 10, 68-69 (sportsci.org/2006/jcm.htm).
Morrow, J.R. Jr. & Jackson, A.W. (1993) How "significant" is your reliability? Research Quarterly for Exercise and Sport, 64(9), 352-355.
Radjvojevic, L., Jovic, D. & Perunovic, D. (1983) Valuation of the methodological procedures for determination of the values for absolute surface area. Journal of Sports Medicine & Physical Fitness, 23(2), 148-154.
Swanton, A., Shafat, A. & Anderson, R. (2006) Biomechanical and physiological characterization of four cycling positions. In Proceedings of the XXIV International Symposium on Biomechanics in Sports (Schwameder, H., Strutzenberger, G., Fastenberger, V., Lindinger, S. & Müller, S., editors), 855-858.
Winter, D.A. (1990) Biomechanics and Motor Control of Human Movement. John Wiley & Sons.

ACKNOWLEDGEMENT: This study was supported in part by a Northern Michigan University College of Professional Studies Grant.